Facial recognition scanning at train stations; chatbots replicating racial and gender bias; children’s images scraped to train generative AI models; algorithms screening and excluding job applicants based on biased data – these are no longer dystopian possibilities: they are our present.
As artificial intelligence (AI) systems become increasingly embedded in our lives, from the workplace to welfare, education and law enforcement, concerns over their impact on fundamental rights have intensified. These technologies, while promising efficiency and innovation, have also magnified long-standing social inequalities, often reinforcing patterns of exclusion and discrimination under a veneer of neutrality.
The European Union (EU) has attempted to take the lead on regulating AI, positioning itself as a global standard-setter with the adoption of the Artificial Intelligence Act, the world’s first comprehensive law on AI. But is this enough? Can the AI Act truly protect the rights of the most vulnerable? And is the EU doing enough to challenge the unchecked power of the Big Tech companies that are increasingly shaping our digital futures?
A quiet erosion of rights
As AI technologies increasingly mediate access to jobs, services and information, their potential to harm is no longer theoretical. Automated systems are now making decisions that affect people’s lives in ways that are often opaque, difficult to challenge and rooted in biased data structures. From recruitment filters to risk assessments, individuals from marginalised communities are often disproportionately excluded, not because of who they are, but because of how data defines them.
At the same time, public awareness is growing around the impact of AI on human autonomy and labour. Recent reflections from legal scholars and technologists have warned that the spread of automation may not only displace workers but also reshape how society values human input, reinforcing a model in which people are measured solely through productivity and compliance. This logic of data extraction affects even the youngest. Children’s online behaviours have been quietly harvested to train AI systems, raising serious concerns about informed consent and developmental harm. EU legal reforms now prohibit the exploitation of children’s vulnerabilities in AI design, but the gap between regulation and enforcement remains troubling.
Efforts to create international standards for AI accountability are underway, yet critics argue that soft law and voluntary principles cannot substitute for real rights-based obligations. In the absence of binding oversight, the erosion of rights risks becoming systemic, precisely in those spaces where protections are most needed. These systemic risks underscore the urgency of a regulatory response that goes beyond procedural safeguards: one capable of addressing structural power imbalances and ensuring effective rights protection. The EU’s AI Act is an ambitious step in that direction, but its limitations are already being exposed.
The AI Act: a groundbreaking but incomplete step
The EU’s AI Act, formally agreed in 2024 and currently entering its implementation phase, has been praised for introducing a risk-based approach and establishing new transparency requirements for high-risk AI systems. It bans certain ‘unacceptable’ practices, including manipulative techniques and real-time biometric surveillance in public spaces – but with exceptions, and that is where the problems begin.
Civil society organisations and human rights defenders have raised serious concerns about loopholes and exemptions. Back in 2023, 150 civil society organisations signed a joint statement outlining nine recommendations for how the EU AI Act could foreground people and their fundamental rights. According to their analysis, the AI Act provides considerable flexibility for law enforcement and border authorities to employ high-risk AI systems with limited oversight. These carve-outs risk undermining the very rights the regulation aims to protect. Moreover, the Act falls short of requiring comprehensive human rights impact assessments, especially before deployment. As the Council of Europe’s Commissioner for Human Rights has noted, the lack of binding obligations for public and private actors to conduct such assessments is a major missed opportunity. Without them, it remains difficult to foresee, prevent or mitigate harms before they occur, particularly for historically marginalised groups.
Other critiques include the absence of robust redress mechanisms for individuals harmed by AI systems, limited enforcement capacity across EU member states, and a worrying copyright loophole that could allow generative AI tools to continue exploiting copyrighted content without meaningful consent or remuneration for creators.
In short, the AI Act is a historic first, but it is not, in its current form, the comprehensive human rights shield it was initially envisioned to be.
Big Tech and the struggle for accountability
Any conversation on AI and human rights must grapple with the overwhelming influence of Big Tech companies. Despite the EU’s growing regulatory efforts – including the Digital Services Act (DSA) and the Digital Markets Act (DMA) – major platforms continue to exert disproportionate control over the digital ecosystem.
In recent years, Big Tech companies have become increasingly effective at influencing and fragmenting Europe’s regulatory landscape, often weakening legislative efforts through sustained lobbying and legal manoeuvring. These corporations are not merely developing AI technologies; they are also shaping the very terms of the debate, including how ‘risk’ is defined and whose interests are prioritised.
The EU’s current regulatory model relies heavily on company self-assessment and post-market monitoring. But can we trust corporations to flag their own harms? Experience with data privacy, online hate speech and disinformation suggests otherwise. Moreover, while the AI Act introduces requirements for high-risk systems, many widely used generative AI models – including chatbots and image generators – escape the strictest scrutiny. This creates a dual problem: the public believes regulation is in place, while the most powerful systems remain relatively unchecked.
The EU must urgently develop stronger mechanisms to ensure independent oversight, public participation in AI governance and effective penalties for non-compliance. Without these, the democratic deficit in digital policymaking will persist, as will the harms.
Towards a human rights-based approach
What would it look like to truly protect digital rights in the age of AI?
First, we need to shift from a risk-based framework to a rights-based one that places human dignity, non-discrimination, participation and accountability at its core. This means:
- Requiring mandatory human rights impact assessments for all high-risk AI systems.
- Establishing accessible complaint and redress mechanisms for individuals and communities affected by AI harms.
- Ensuring independent oversight bodies with real investigative and sanctioning powers.
- Supporting grassroots and civil society involvement in shaping AI governance frameworks.
Second, the regulation of Big Tech must go beyond market fairness and competition. It must address the structural asymmetry of power, the opacity of algorithmic systems and the extraction of personal data as a commodity. Real transparency means public knowledge of how AI systems work, who funds them and how decisions are made.
Lastly, the digital transition must be intersectional and inclusive. AI systems are neither developed nor deployed in a vacuum: they are built on data shaped by historical inequalities. If we fail to centre the rights of those most affected, the AI revolution will simply deepen the injustices of the analogue world.
The EU’s AI Act is a milestone, but it is not the destination. As technological systems increasingly shape our lives, the real challenge lies in protecting rights before they are eroded, and centring justice over efficiency. Europe has the legal tradition, political weight and civic momentum to lead. But leadership demands courage: the courage to listen to those most affected, to act beyond market logics, and to place human dignity at the heart of digital governance. Artificial intelligence is not just transforming society: it is redefining power. Who controls it, and on whose terms, will shape the future of democracy itself.
This week we are delighted to publish the second post by Chiara Passuello, the blog’s regional correspondent for Europe. Her previous posts are available here, here and here.
The GCHRP Editorial Team