The need to update the Artificial Intelligence Act to make it human rights compliant


The European Parliament voted to approve the Artificial Intelligence Act (AI Act) on the 13th of March 2024. The regulation will still undergo a final legal review and will need the approval of the Council before it becomes law. This discussion relates to the version of the AI Act dated the 13th of March 2024. The draft AI Act states that the goal of the European Parliament and the Council of the European Union is to boost innovation, employment (par. 2) and the uptake of trustworthy AI (par. 1). The AI Act aims to protect individuals and companies by requiring a “high level of protection” of fundamental rights and adherence to numerous European Union values (par. 1; Art. 1 AI Act). However, the AI Act does not deliver on the promise to offer a “high level of protection” of fundamental rights (Art. 1 AI Act). This post will demonstrate that the AI Act needs to be revised to bring it into compliance with the prohibition of discrimination in numerous international human rights treaties. It urges the EU Member States and policymakers to examine the draft AI Act for compliance with international human rights law treaties before it becomes law.

This post uses the case study of organisations using AI to assess student work to illustrate this argument. The use of AI for marking student work is chosen as a case study because access to education influences people’s ability to secure a livelihood (par. 56), actualise their potential, make meaningful choices and reach informed decisions. The case study therefore relates to a significant area of everyone’s lives. Since individuals with protected characteristics, such as persons with disabilities, continue to experience inequality in the education context, it is imperative to identify how the AI Act is likely to perpetuate this situation. The discussion illustrates a broader problem with the AI Act, which places insufficient limitations on using AI as a component of decision-making processes in multiple areas. A related example is the use of AI to screen applicants for employment.

1. Current uses of AI in education

Organisations deploy AI for a variety of purposes as part of decision-making processes in education, including using AI to mark student assignments. Some countries are already exploring the option of automating the marking of student papers. Chinese schools have been trialling the use of AI for marking student work since 2018. The United Kingdom Department for Education organised a hackathon on the 30th and 31st of October 2023 to establish whether educational organisations can use AI for tasks such as accurately marking exam papers.

AI-based student marking systems already exist. Robert Stanyon developed Graide, an AI-based grading system for subjects in mathematics and science. The University of Birmingham and the University of Liverpool are currently piloting the use of the Graide system. Against this background, a group of school leaders led by Sir Anthony Seldon wrote a letter to The Times expressing concern that the government is failing to take swift action to regulate the use of AI in the educational context. It is worth remembering that the employment of AI in education has already created a serious problem in the United Kingdom. In 2020, following a public outcry over the use of AI, the government reversed the predicted grades that the algorithm had generated and used the grades that teachers had predicted for the students instead.

German universities used AI to invigilate students sitting examinations during the COVID-19 pandemic. The Gesellschaft für Freiheitsrechte (GFF) filed lawsuits against a number of German universities, alleging that such practices violate fundamental rights, including because they involve processing large amounts of data relating to the students. Given this context, it is necessary to establish in what circumstances the use of AI to mark student work can disadvantage students and in what contexts the employment of such systems should be prohibited. It is also crucial to identify the roles of international human rights law and the AI Act in preventing problematic practices.

2. The use of AI in marking: prospects and the need for bright lines

The use of AI in grading does not give rise to controversy where the assignment to be marked involves giving answers to a set of multiple-choice questions. The record of a student’s answers to multiple-choice questions and the template of correct answers both represent a pattern, and AI can recognise patterns. As far back as 2016, Douglas Chai developed an algorithm that could determine whether the answers a student had circled on a multiple-choice assessment matched the correct answers. Since AI performs the role of pattern matching here, its employment is similar to the use of optical mark recognition scanners, which have been in use for some time.
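To make the pattern-matching point concrete, the following minimal sketch illustrates what marking a multiple-choice script amounts to. It is a hypothetical illustration, not Douglas Chai’s algorithm or any real marking product: it simply compares each recorded answer against an answer key, which is all that this kind of grading requires.

```python
# Minimal illustration (hypothetical): multiple-choice marking as pattern matching.
# Each recorded answer is compared against a template of correct answers.

def mark_multiple_choice(student_answers: dict[int, str],
                         answer_key: dict[int, str]) -> int:
    """Return the number of questions where the student's recorded answer
    matches the answer key."""
    score = 0
    for question, correct_option in answer_key.items():
        # A missing or blank answer simply scores zero for that question.
        if student_answers.get(question) == correct_option:
            score += 1
    return score

if __name__ == "__main__":
    key = {1: "B", 2: "D", 3: "A"}
    student = {1: "B", 2: "C", 3: "A"}
    print(mark_multiple_choice(student, key))  # -> 2
```

The point of the sketch is that nothing evaluative happens: the system only checks whether recorded marks coincide with a predefined template, exactly as an optical mark recognition scanner does.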

When organisations use AI to mark assignments in which students solve mathematical problems rather than answer multiple-choice questions, the application of AI should be treated as similar to this older pattern-recognition technology. This is because, when solving mathematical problems, students write sequences of numbers that constitute interim steps, and these interim steps in the calculation enable the students to arrive at a solution. However, such applications of AI are not identical to earlier technologies. Further research needs to be carried out to determine the impact of using AI to mark assignments in mathematics on persons with disabilities. In order to mitigate the risks that automation entails, developers will need to design AI so as to give students partial marks for incomplete solutions to mathematical problems. Likewise, the AI should be designed not to penalise students for listing the interim steps of the solution in a different order. Additionally, AI should not penalise students for inputting data in a different format or for interacting with the system in a particular way.
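The design requirements just described can be made concrete with a short sketch. This is a hypothetical illustration of the requirements, not a real marking system: it awards partial marks for incomplete working, ignores the order in which interim steps are listed, and tolerates differences in formatting.

```python
# Hypothetical sketch of the design requirements described above:
# partial credit, order-insensitive matching of interim steps, and
# tolerance for formatting differences in how students input their working.

def normalise(step: str) -> str:
    """Reduce formatting differences (spacing, case) so that the same
    interim result written differently is treated as equivalent."""
    return step.replace(" ", "").lower()

def mark_working(student_steps: list[str], expected_steps: list[str]) -> float:
    """Award one mark per expected interim step found anywhere in the
    student's working, so incomplete solutions earn partial credit and
    step order does not matter."""
    student_set = {normalise(s) for s in student_steps}
    found = sum(1 for step in expected_steps if normalise(step) in student_set)
    return found / len(expected_steps)

if __name__ == "__main__":
    expected = ["2x = 10", "x = 5"]
    # Steps given in a different order and with different spacing still score.
    print(mark_working(["x=5", "2x =10"], expected))  # -> 1.0
    # An incomplete solution earns partial marks rather than zero.
    print(mark_working(["2x = 10"], expected))        # -> 0.5
```

Even under these assumptions, the sketch shows the limits of the approach: it can only reward working that matches an anticipated pattern, which is why further scrutiny of its impact on students who reason or present their work differently remains necessary.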

Teachers should exercise professional judgment in determining whether the employment of AI is suitable for grading student work in mathematics. For example, the deployment of AI is unsuitable for assignments in which the student develops innovations in the field of mathematics. All students should be able to request that a human decision-maker regrade an assignment that AI has marked, without needing to demonstrate evidence of a risk of inaccuracies or errors. The operation of AI necessarily entails a margin of error.

Educational organisations should not employ AI to mark written non-numerical text because marking written answers entails making evaluative judgments. The capabilities of AI make it an unsuitable tool for grading assignments in subjects that involve critical reflection on the material and answers in non-numerical form, such as the humanities and social sciences. Similarly, some courses in the sciences and computer science, such as a course on the ethics and sustainability of computing, are not well suited to employing AI in assessment. AI lacks the capacity to correctly interpret and attribute meaning to written text. Its use disadvantages individuals from underrepresented groups, and the employment of AI for grading text undermines the expression of human diversity. Furthermore, there is evidence that the use of AI for the assessment of text can disadvantage persons with disabilities. Since AI cannot attribute correct meaning to text, its use makes it impossible to evaluate to what extent the student engaged in critical analysis when writing the answer.

The use of AI can penalise students for exhibiting creativity and abstract thinking. It can punish students for proposing ideas that differ from the mainstream. Solange Ghernaouti observes that the use of AI as part of decision-making processes can lead to eugenics in thought. Because AI recognises patterns, it will treat written answers that do not fit the pattern, or that differ from the average, as not being relevant. For this reason, AI is not appropriate for marking written text or work that involves students engaging in critical thinking.

3. The AI Act in the context of education

In order to protect the fundamental rights of individuals, Article 6(2) of the draft AI Act designates some AI systems as posing a high risk. Paragraph 3 of Annex III clarifies that Article 6(2) covers the use of AI systems in educational and vocational settings and encompasses different applications of AI in the education setting. These applications include using AI to determine who is admitted to an educational programme, to mark student work, to proctor students during assessments, and to assign students to different schools.

Article 9(1) of the draft AI Act imposes an obligation to establish and maintain a risk management system in relation to high-risk AI systems. According to Article 9(2), organisations using AI need to continuously evaluate the risks which the employment of a high-risk AI system poses to the enjoyment of fundamental rights. They need to adopt “appropriate and targeted” measures to address the risks to fundamental rights. However, Article 9(3) weakens the obligation to put in place a risk management system by stating that Article 9 refers only to those risks that can be “reasonably mitigated or eliminated” either through the design of the AI system or by providing “adequate technical information.” The assumption that most risks can be mitigated through design is problematic. AI’s lack of capacity to attribute meaning to written text, and its propensity to disadvantage students with protected characteristics, cannot be remedied through design choices. Consequently, Article 9(3) considerably limits the protection of fundamental rights in the AI Act by confining the obligation to put in place a risk management system to risks that can be “reasonably mitigated or eliminated.” Article 9(5)(a) further weakens the fundamental rights protections by stipulating that risks need only be mitigated to the extent that it is “technically feasible” to do so through adequate design (p. 105). Since AI cannot attribute meaning to written text, Article 9(4) offers limited protection.

Proponents of the draft AI Act may, at this stage, point out that Article 9(5)(b) mitigates the harshness of Article 9(2) by requiring companies to implement adequate mitigation measures for risks that cannot be eliminated. Although this is true, Article 9(5)(b) does not fully resolve the problem, because Article 9(5) allows those engaging in risk assessment to decide that the “overall residual risk” associated with the employment of the AI system is acceptable. This rhetoric of accepting “overall residual” risk is incompatible with protecting fundamental rights. From the standpoint of human rights law, all risks matter irrespective of their magnitude, provided that the deployment of AI has the potential to violate someone’s fundamental rights. Thus, if AI produces 1,000 decisions in ten minutes and 50 people are likely to be subjected to discriminatory treatment, it does not matter that the problematic decisions constitute only 5% of the total. What matters is that 50 individuals experienced discrimination.

4.  Why the AI Act is incompatible with the prohibition of discrimination 

The current version of the draft AI Act does not enable European Union states to comply with their international human rights obligations. The prohibition of discrimination in the Convention on the Rights of Persons with Disabilities (CRPD) obliges states to seek the consent of persons with disabilities before subjecting them to the partial or complete automation of the decision-making process using AI. Persons with disabilities can object to the use of AI without needing to request reasonable accommodation. The same obligation arises under the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW). Martin Scheinin showed that this requirement to obtain consent from the subjects of the decision-making before employing AI extends to other human rights treaties, including the International Convention on the Elimination of All Forms of Racial Discrimination (CERD) and the International Covenant on Civil and Political Rights (ICCPR). The draft AI Act does not meet this requirement because it lacks any provisions requiring the entities deploying high-risk AI systems to obtain informed consent from the subjects of the decision-making in order to be able to use these systems.

The General Data Protection Regulation (GDPR) does not fully address this problem. For instance, Article 22 of the GDPR only gives people the right not to be subjected to “a decision based solely on automated processing.” Article 22 does not cover a situation in which a human being oversees the operation of AI and reviews the automated decision before applying it. The Court of Justice of the European Union held in OQ v SCHUFA Holding (C-634/21) that there is an automated decision if the decision-maker “draws strongly” (par. 40) on the automated output (paras. 61-62). However, there will be many cases where individuals are inadequately protected because the decision-maker does not “draw strongly” on the AI-generated output, even though that output still shapes the decision. Human beings lack the capacity to oversee the work of complex systems, including AI. Therefore, the protections in the GDPR are not a substitute for prohibiting organisations from using an AI-generated output to inform the decision-making process without obtaining the consent of the subject of the decision-making.

A further problem with the draft is that it assumes technical capabilities that do not, in fact, exist. For example, Article 20(1) requires providers of high-risk AI systems to take steps to withdraw a system if it becomes apparent that the system is not in conformity with the AI Act. Article 19(1) obliges providers to keep the logs that the high-risk AI system generates for at least six months. Article 14(1) requires that high-risk AI systems be designed so that natural persons can exercise “effective” oversight over them, and explains that the purpose of such oversight is to “prevent” or “minimise” the risks of violations of fundamental rights. Meanwhile, Article 14(4)(d) stipulates that operators can decide not to use the high-risk AI system or to disregard its outputs. Although these provisions appear to provide protection, they are likely to have limited impact in practice. There is ample research demonstrating that human beings lack the capacity to oversee the work of complex systems, including AI. Moreover, Article 14(4) limits the scope of this obligation by only requiring that the operator be assigned the task of overseeing AI “as appropriate and proportionate.”

5. Lessons for the European Union regarding the revision of the AI Act

Rather than treating various applications of AI in education as posing a high risk, the AI Act should prohibit some applications of AI. Such prohibitions in the education context should extend to grading text, determining who should have access to an educational opportunity, and allocating students to different schools. The AI Act should treat fully and partially automated decision-making processes using AI as posing identical challenges to the protection of fundamental rights. Finally, the AI Act should prohibit the use of artificial intelligence as a component of evaluating and making decisions about an individual without that individual’s prior informed consent. In order to give informed consent to the use of AI, individuals should have a high degree of knowledge about how AI operates. They should know how the technical features of AI can give rise to social harm and to violations of fundamental rights. They should also understand the challenges of effectively overseeing the operation of AI and of challenging algorithmic decisions.

Tanya Krupiy
Lecturer in Law at Newcastle University

Tetyana (Tanya) Krupiy is a lecturer in law at Newcastle University. She researches how society can govern new technologies in a manner which advances social justice. Tanya received funding from the Social Sciences and Humanities Research Council of Canada to carry out a postdoctoral fellowship at McGill University in Canada. She has published with various publishers including Oxford University Press, University of Toronto, University of Melbourne, European University Institute in Florence, Elsevier and Brill.
