The Age of AI: a make-or-break time for human rights


A review of the book “Artificial Intelligence and Human Rights” (Oxford), edited by Alberto Quintavalla and Jeroen Temperman.


From the very beginning of the book, the editors, Alberto Quintavalla and Jeroen Temperman, rightly highlight that Artificial Intelligence and modern Human Rights underwent simultaneous codification processes. It is only recently, however, that we have witnessed the so-called AI hype, and the reason lies in AI's convergence with HR, visible in the many HR violations we are witnessing. In this respect, we are seeing a shift from law governing facts to facts, namely violations of HR, governing the law.

The book fills a gap in the literature by extensively covering the interplay between AI and HR, encompassing first-, second-, and third-generation rights, and it makes the spot-on observation that ‘the interdependence and indivisibility of HR imply that effects on the one right rubs off on the other.’ It also covers both positive and adverse relations. Regarding the latter, the authors pay close attention to the lifecycle of AI systems and identify risks at every stage: design, development, and deployment. The same attention is devoted to the HR project, and almost all of the chapters address risks and opportunities at the various levels of HR: international, regional, and national.

Another significant contribution of the book is its answer to the question of whether the HR framework is the right one to address AI-related issues, risks, and opportunities. We will return to that answer below. Let us first zoom into a few chapters of the book.

Facial Recognition Technology: A New Big Brother?

AI systems rely on vast amounts of data to produce their outputs, and in particular, Facial Recognition Technology (FRT) is personal data-driven. The introduction of AI and machine learning (ML) techniques enhanced the ability of FRT to recognize, categorize, identify, and verify/authenticate people in various settings. But what if FRT is misused through mass processing of biometric data? What if it becomes a tool for mass surveillance, an AI-empowered Big Brother?

In the chapter The Rights to Privacy and Data Protection and Facial Recognition Technology in the Global North, Natalia Menendez Gonzalez argues that the right to privacy and the right to data protection act as a ‘corollary to other fundamental rights’, such as freedom of assembly and association, non-discrimination, human dignity, and the rights of the child. For instance, the use of FRT at public demonstrations could be justified on security grounds, but it can also create a deterrent effect on participants who fear being targeted or punished. Such a deterrent effect would hinder the exercise of the right to freedom of assembly and association and, more generally, the participative pillar of our democracies.

Moreover, various intersectional inaccuracies have been reported, resulting in gender, ethnicity, and minority discriminatory outcomes.

Human dignity might also be undermined by the datafication of our facial features, i.e., the commodification and objectification of people’s physical features. This means treating faces, in a depersonalizing vision, as just another element of identification, like ID numbers, which, among other problems, might be exploited by (Big Tech) companies for profit. A further problem is the intrinsic sensitivity of facial images: in the event of a biometric data breach, facial images cannot be replaced or erased the way passwords or credit cards can.

On such bases, the chapter proposes an insightful overview of the prohibition of biometric data processing and its exceptions, the role of consent, and the use for secondary purposes and/or by third parties.

How AI will impact Persons with Disabilities

Non-discrimination is at the core of the chapter Artificial Intelligence and Disability Rights by Antonella Zarra, Silvia Favalli and Matilde Ceron. The United Nations Convention on the Rights of Persons with Disabilities (PWD) recognises disability rights as human rights. The authors then use this Convention as the legal benchmark against which to assess the opportunities and risks of AI for PWD. Among the opportunities, AI-enabled systems might enhance personal mobility and independence through navigation tools, personalized learning experiences, eye-tracking, and voice recognition, and they can be applied in many fields, including employment, education, housing, and access to services and products.

The dark side is the misuse of biometric data and algorithmic profiling, with significant risks of reproducing and amplifying forms of discrimination. For instance, automated decision-making (ADM) systems may be trained on data sets that do not include information on vulnerable categories, with discriminatory results in hiring processes, access to health insurance, and other services. Such problems include issues related to privacy, consent, and misclassification, and they are rooted in a lack of representation. In fact, PWD are often excluded from the design process, starting from the development of the original data sets and models.

The chapter is particularly praiseworthy for its attention to the lack of a unique definition of disability and to the intersection with other characteristics such as gender, ethnicity, economic status, and age. It underlines that disability is not a monolithic concept: it is often less about physical or mental impairments and more about how society responds to them. The chapter also covers new regulatory solutions, such as the EU AI Act, points out their shortcomings, and proposes a disability rights-based approach to AI.

AI and the right to a fair trial: a dystopian scenario

Could AI be a more appropriate ‘mouth of the law’ than humans? That is what the chapter Artificial Intelligence and Fair Trials by Helga Molbæk-Steensig and Alexandre Quemy tries to foresee. The right to a fair trial is a cornerstone of the rule of law. However, judges are biased humans, and courts are often inefficient. AI and ML applications, such as decision support systems (DSS), might assist in overcoming inconsistencies, lack of impartiality, and delays.

The authors suggest Ronald Dworkin’s conceptualization of the ideal judge ‘Hercules’ as a model to create a human-AI hybrid that embodies the fair trial principles more closely than humans. But what if it turns out that even these artificial judges are biased and unjust?

As in the case of Persons with Disabilities, the central issue remains the secrecy of the functioning of algorithms and the knowledge gaps among legal professionals using ML software: in other words, what DSS are doing and what data they base their results on. Depending on the data input, not only may algorithmic decisions be just as biased as human ones, but they could even replicate such biases more consistently and effectively. They may also miss subtle but vital clues or be modelled on outdated or incomplete data.

A dystopian future appears on the horizon, one where individuals might be punished now on the basis of predictions about future behaviour that has not yet occurred. This scenario becomes scarier in the case of total reliance on algorithms to create predictions, because a computer cannot establish the causal link between two events as a human would. Another problem is that legal systems are dynamic: new rules and precedents can erase or diminish the power of old ones, whereas ML algorithms cannot unlearn. Wisely, the authors suggest a framework for dividing labour between humans and computers and outline various regulatory attempts. AI could then be a successful tool to enhance judges’ capabilities, thanks to cognitive computing, by helping with volume-related tasks such as reading and remembering more documents, estimating probability distributions, and categorizing case law and its trends.

An analysis of AI health-related applications

Another AI application field is healthcare. In the related chapter, Enrique Santamaría Echeverría starts from the assumption that AI impacts both the individual dimension of the right to health and its social dimension, which is public and population health. AI could open new opportunities in digital pathology, in the prediction of the evolution of an illness, in personalised treatment, precision medicine, and mental health with bots and virtual therapists, as well as in the prediction of the spread of diseases through public health surveillance. AI could also benefit healthcare management, which would gain in operational efficiency, automated scheduling, and automated clinical decision systems. And do you remember the infamous incomprehensible clinicians’ notes? Yes, AI could solve this problem thanks to language understanding applications.

Unfortunately, AI applications in healthcare present many of the risks already mentioned: a degree of inaccuracy, the exacerbation of traditional health disparities and biases, opacity about how AI works and how consent is obtained (the black box problem), and threats to privacy, since health data are sensitive data. The chapter concludes with a particularly useful overview of policy and regulatory proposals, especially for AI medical devices, and of the steps towards more harmonised health infrastructures.

The right to a healthy environment: what’s the cost of AI?

In the tradeoff between the pros and cons of AI, we cannot forget that AI will have significant environmental costs. Alberto Quintavalla, in Artificial Intelligence and the Right to a Healthy Environment, lists under the label of ‘Earth-Friendly AI’ the ways AI deployment can enhance environmental protection and the conservation of natural resources and biodiversity.

AI could predict climate change and natural disasters when fed with climate data, analyze satellite images to locate oil spills, enable smart agriculture and smart cities, optimize energy consumption, and ease many daily tasks. The other side of the coin is the risk of inaccuracy and lack of transparency in such predictive models, potentially resulting in false alarms, as well as cybersecurity risks, misuse, and overreliance leading to inefficiencies (the rebound effect). Above all, the extensive use of AI demands high energy consumption and is consequently expected to generate a large amount of carbon emissions, because the ICT sector still relies on energy sources that are not carbon neutral. Elements such as energy intensiveness, the unsustainability of computing infrastructures, and the extraction of natural resources for the deployment of AI-supporting devices are particularly concerning given the current urgency of climate change.

From these premises, the author guides us through an analysis of the right to a healthy environment, and its limits, in current regulations and courts. Although the Human Rights Council and the UN General Assembly recognise it as a human right, many obstacles stand in its way. Among other shortcomings, environmental concerns are poorly or not at all integrated into non-energy policies; data on energy consumption is lacking; environmental standards are underdeveloped; and guidelines on green practices in the development and deployment of AI are scanty. In addition, corporate actors, which are the main stakeholders in AI research, are under no legal duty to observe (environmental) human rights obligations. At present, only soft law instruments are available at the international level, and they are deemed unable to mitigate the negative effects. Hence, an internationally binding right to a healthy environment could be a partial solution in the making of a more comprehensive approach to its protection.


As mentioned before, the book contributes by looking at the various levels of the HR project (international, regional, and national) as well as at the different stages of the lifecycle of AI systems. It concludes with three ideas: (1) there are as many potential opportunities as there are risks; (2) it is still all up for grabs, as we have only recently witnessed the adoption of the final drafts of AI regulation at the level of the EU and the Council of Europe; (3) lastly, and in our view crucially, the book states that the HR project is the proper framework to deal with AI-related issues and risks, a conclusion which we also uphold.

The book represents a strong foundation and invites conversation between academics and legal professionals alike as we figure out how to further incorporate technological developments while trying to answer what we call Q 0: what we would like the world of tomorrow to look like. Without a doubt, understanding and further clarifying the interplay between the AI and HR projects is critical to ensuring the opportunities of AI and mitigating the related risks.

You might also be interested in: Re-watch the discussion with the editors and authors of the book ‘Artificial Intelligence and Human Rights’ 👉 https://loom.ly/4gSBzr8

On the whole, this blog post is the product of joint reflection. However, the introduction and conclusions were written by Anca Radu, whereas the chapter summaries were written by Anna Ferrari.

Anca Radu
PhD Researcher at European University Institute

Anca Radu is a doctoral researcher in Law at the European University Institute. She examines legal and ethical questions around the design, development, and deployment of AI systems in the judiciary. Her research focuses on the impact of these applications on human rights, democracy, and the rule of law. She is also a Teaching Assistant at the School of Transnational Governance, EUI. Besides her academic activities, she collaborates with international organizations on AI policy and regulation. She currently acts as an independent Scientific Expert for the Council of Europe’s European Commission for the Efficiency of Justice (CEPEJ) and for UNESCO. As a former Lawyer within the Registry of the European Court of Human Rights, she has solid human rights expertise. She is also a Senior Policy Officer on Human Rights for CAIDP Europe. She has been with CAIDP for a year and a half and has contributed to the educational and policy activities of the Centre.

Anna Ferrari
Project Associate at the Centre for a Digital Society at the Robert Schuman Centre for Advanced Studies (EUI)

Anna Ferrari joined the European University Institute in 2023 as Project Associate for the Centre for a Digital Society at the Robert Schuman Centre for Advanced Studies. At the CDS, she oversees the Executive Education programme. She previously worked at the European Parliament in Brussels (until 2022) in different roles as a staff member and Parliamentary Assistant, including in a political group and in national delegations (British and German), supporting work in policy making, political communication, and advocacy. She mainly followed EU and national legislation on the topics of environmental protection and climate action, gender equality, digital policy, and democratic processes. She also worked as a freelance journalist, in the global NGO sector, and briefly in the art sector and the private TV sector. Ferrari holds a Law Degree (equivalent to Bachelor + Master) from the University of Milan (Università degli Studi di Milano, 2014) with a focus on EU law and a thesis on cross-border healthcare in EU Member States. She then graduated from the multicultural Erasmus Mundus Master in Journalism, Media and Globalisation with a specialism in Politics & Communication (Aarhus University & University of Amsterdam, 2017) and a thesis on young Europeans’ electoral behaviours.
