
The Adequacy Of The AI Act’s Fundamental Rights Impact Assessment In Cases Of High-Risk AI Systems Used By Government Agencies


The European Union’s (“EU”) draft Artificial Intelligence Act (“AI Act”) states its purpose, through its recitals, as the promotion of human-centric and trustworthy artificial intelligence (“AI”) and the assurance of a high level of protection of health, safety, fundamental rights, democracy, and the rule of law from the harmful effects of AI systems (European Union Commission, 2023). The AI Act segregates the use of AI systems into two buckets: (a) prohibited uses (such as social scoring and the use of AI-based facial recognition software in real time) (Hupont et al., 2022) and (b) permitted uses. Within the category of permitted uses, the AI Act makes a further, more nuanced distinction by classifying a series of AI systems as high-risk AI systems (“HRAI”) (Hupont et al., 2023). The deciding metrics for classifying an AI system as an HRAI are twofold: (1) the domain in which the AI system is to be deployed and (2) the tasks it will carry out within that domain. For example, the use of an AI system by an organisation to group files is considered a low-risk task (as observed in document management solutions, which are widely used by both public and private organisations) (Porter et al., 2023). However, the use of an AI system to predict the rate of recidivism of persons due for parole (as observed in the case of Equivant’s (formerly Northpointe) COMPAS software, which was in use in the courts of the United States) (Angwin et al., 2016) is a high-risk activity, since any bias or discriminatory undercurrents within the AI system can cause an adverse impact on the life and liberty of individuals. The AI Act’s third annex enumerates activities across domains which may be considered high-risk, and any AI system performing such activities falls into the category of an HRAI. These include biometric and biometrics-based systems, the management and operation of critical infrastructure, education and vocational training, employment, workers’ management and access to self-employment, access to and enjoyment of essential private services and public services and benefits, law enforcement, migration, asylum and border control management, and lastly the administration of justice and democratic processes.

The entirety of the AI-based governance ecosystem is understood through a cumulative reading of the provisions of the AI Act’s third annex focused on the administration of justice and democratic processes, access to and enjoyment of essential public services and benefits, decision-making and enforcement in matters pertaining to migration, asylum and border control management, and biometric and biometrics-based systems (Medaglia et al., 2021).

For the purposes of this blog post, we will focus on the functioning of this AI-based high-risk governance ecosystem (“Government HRAI”). To adequately acknowledge the risks and challenges associated with Government HRAI, the AI Act demarcates the developers of Government HRAI from its deployers: the software firms creating the software component of the AI system are the developers, and the government bodies using such AI systems through their offices are the deployers. A wide gamut of responsibilities rests on the shoulders of the deployers, including ensuring that adequate risk management measures are undertaken and that adequate impact assessments are carried out. In the context of the Government HRAI, such deployers are the government departments deploying the HRAI (Curtis et al., 2022). The AI Act lays down the crucial requirement of a risk management system (Article 9) to be implemented by the providers of the AI system for the entire lifecycle of an HRAI (Kaminski et al., 2023). The principle behind introducing a risk management system for HRAI is to ensure that all facets of the HRAI, whether at the development or the deployment stage, are analysed and the risks associated with them appropriately enumerated, especially since any unforeseen risk may cause a direct adverse impact on the life, liberty and well-being of the persons who are subjected to the HRAI (the “Impact Population”).

Further, to supplement the efforts of AI providers in protecting and upholding fundamental rights, the AI Act, in keeping with its purposes, requires the deployer of an HRAI to conduct a fundamental rights impact assessment (“FRIA”) (Article 29a) prior to putting such HRAI into use. The minimum elements which must be integrated in the FRIA are the following: (1) a clear outline of the intended purpose for which the HRAI is being deployed; (2) the intended geographic and temporal scope of the HRAI; (3) the categories of natural persons and groups likely to be affected by the HRAI (including specific indications where vulnerable groups are impacted by its use); (4) a verification that the use of the HRAI is compliant with relevant Union and national law on fundamental rights; (5) the reasonably foreseeable impact of the use of the HRAI on the fundamental rights of the natural persons and groups involved; (6) specific risks of harm likely to impact marginalised persons or vulnerable groups; (7) the reasonably foreseeable adverse impact of the use of the system on the environment; (8) a detailed risk and harm mitigation plan which will be put into force on the deployment of the HRAI; and lastly (9) details of the governance system which the deployer will put in place in order to maintain a continued impact assessment, including but not limited to human oversight, complaint handling and redressal (Stahl et al., 2023).
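For readers who think in structured terms, these minimum elements can be pictured as a single record that a deployer would have to fill in before deployment. The following minimal sketch in Python is purely illustrative: the class name, field names and types are hypothetical assumptions about how Article 29a’s elements might be recorded, not an official schema prescribed by the AI Act.

```python
# Hypothetical sketch of the minimum FRIA elements under Article 29a.
# Field names are illustrative assumptions, not an official schema.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class FRIA:
    intended_purpose: str                    # (1) why the HRAI is being deployed
    geographic_scope: str                    # (2) where it will operate
    temporal_scope: tuple[date, date]        # (2) intended deployment window
    affected_groups: list[str]               # (3) natural persons and groups, incl. vulnerable groups
    legal_compliance_verified: bool          # (4) compliance with Union and national fundamental rights law
    foreseeable_rights_impacts: list[str]    # (5) foreseeable impacts on fundamental rights
    risks_to_vulnerable_groups: list[str]    # (6) specific risks to marginalised or vulnerable groups
    environmental_impacts: list[str]         # (7) foreseeable adverse environmental impact
    mitigation_plan: str                     # (8) risk and harm mitigation plan
    governance_measures: list[str] = field(default_factory=list)  # (9) oversight, complaint handling, redressal
```

Even in this toy form, the sketch makes the Act’s open texture visible: elements (5) and (6) reduce to free-text lists, with no mandated taxonomy of rights or severity scale, which is precisely the open-endedness the next paragraph critiques.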

It is evident that the requirement to assess the reasonably foreseeable impact of deploying an HRAI on fundamental rights is a fairly open-ended one. The provisions of the AI Act neither delve into the hierarchy of fundamental rights which must mandatorily be upheld by the deployer of the HRAI, nor do they detail the mechanism of assessment which may be appropriate for measuring the impact on the fundamental rights of natural persons and groups. In the context of the deployment of Government HRAI, this article examines the firm requirements of compliance with the fundamental right to good administration as detailed under Article 41 of the EU Charter of Fundamental Rights (“CFR”) (Husman, 2023). The right to good administration takes a three-pronged approach by enumerating a set of sub-rights: (1) the right of every person to be heard before any individual measure which would affect him or her adversely is taken; (2) the right of every person to have access to his or her file, while respecting the legitimate interests of confidentiality and of professional and business secrecy; and (3) the obligation of the administration to give reasons for its decisions. In keeping with these requirements, the deployer of a Government HRAI must ensure compliance with a minimum standard. First, specific provisions must inform the subjects within the Impact Population that they are subjected to an HRAI. Secondly, in line with the provisions of the General Data Protection Regulation, the subjects of the Impact Population must be given an option to opt out of the use of the Government HRAI. Thirdly, the subjects of the Impact Population must, in some discernible format, be given direct access to the Government HRAI in order to observe the metrics used to reach a decision, and must in turn be allowed to correct information and records within the datasets used for processing by the Government HRAI. Finally, the obligation of the government deployer to provide a reasoned explanation to the subjects of the Impact Population is of principal importance. This obligation can be fulfilled not only through a degree of transparency within the Government HRAI but also through the requirement that the explanation issued to the Impact Population be discernible by a person not skilled in the art of how an AI system functions (Fanni et al., 2023).

In addition to these requirements, the Government HRAI must also comply with the principles of natural justice (“PNJ”). Derived from the expression jus naturale of Roman law, PNJs are not necessarily codified under substantive law but have long been understood as the cornerstone of quasi-judicial functions. Over the years, PNJs have been codified as part of the procedural aspects of administrative law across jurisdictions. The primary PNJs which have trickled into the procedural laws binding administrative procedure are as follows: (1) the adjudicating authority must not be biased, whether in favour of or against the persons seeking legal recourse; (2) the adjudicating authority must pronounce a reasoned order; (3) there must be no inordinate delay in adjudication; (4) a person must be able to make legal representation before the adjudicating authority; and (5) adequate notice must be provided to a person to prepare for the legal proceedings initiated against them. There is significant overlap between Article 41 of the CFR (the right to good administration) and the PNJs. Article 41 of the CFR categorically states that every person has the right to have their affairs handled impartially, fairly and within a reasonable time by an administrative body established within the EU. Further, it states that the right to be heard and the right to a reasoned explanation of the administrative body’s decision are core aspects of the right to good administration. A combined reading of Article 41 and the PNJs not only ensures compliance with Article 41 but also facilitates compliance with other fundamental rights, such as the right against discrimination and the right to equality before the law (Gaur, 2022).

Additionally, to cement the robustness of the government deployer’s FRIA, this article argues that the source code of the Government HRAI must be made a matter of public record. This has been observed in a case at the Trelleborg municipality, where a journalist, Frederik Ramel, after several attempts to gain access to the source code of the Government HRAI system used by the municipality, filed an appeal before the Administrative Court of Appeal arguing that the source code of the software used within the Trelleborg municipality’s predictive justice system should be made publicly available, as it falls under the Swedish principle of public access to official records. The Court allowed his appeal and upheld his request for access to the source code (Kaun, 2020).

The need of the hour when it comes to Government HRAI is to ensure that the deployers, i.e., the government departments responsible for employing such Government HRAI, engage in a deep analysis of their HRAI systems and recognise the magnitude of damage which may be caused if they operationalise and deploy a biased, inadequately tested or discriminatory HRAI system. The Impact Population bears the brunt of such failures, as observed in the case of the Dutch government’s discriminatory software, Systeem Risico Indicatie (SyRI), which targeted and discriminated against communities living in identified low-income neighbourhoods (Newman et al., 2023), and in the case of the COMPAS recidivism prediction software, which was deployed by courts across the United States and was found to be heavily biased against Black people (Angwin et al., 2016).

This article maintains that the use of HRAI by government bodies must be held to the highest level of scrutiny and must be centred around a multi-stakeholder approach wherein the Impact Population, as well as associated groups, is allowed to investigate, interact with and, where necessary, complain against Government HRAI systems before such systems can adversely and materially impact the fundamental rights, life, liberty and well-being of the Impact Population, especially since government deployers are tasked with protecting and upholding the fundamental rights and freedoms of their citizens.

Mitisha Gaur
Early-stage researcher with the Legality Attentive Data Scientists Project

Mitisha Gaur has been an early-stage researcher with the Legality Attentive Data Scientists (LeADS) Project, funded under the EU’s Horizon 2020 Marie Skłodowska-Curie Innovative Training Networks (Grant Agreement ID: 956562), since October 2021. She is currently based at the Lider Lab at Scuola Superiore Sant’Anna, Pisa (Italy). Within the LeADS Project, she researches the use of AI by governments and judicial bodies to augment and perform judicial and quasi-judicial functions. Her work revolves around mapping regulatory requirements against the realities of the technology.
