Much ink has been spilt over algorithmic opacity and the lack of algorithmic accountability (see, for instance, Pasquale, Burrell, Diakopoulos). The potential of machine learning (ML) systems to generate biased, incorrect and unintelligible outputs that influence individual public sector decisions with significant consequences has been documented in numerous instances (COMPAS, the Dutch childcare benefits scandal, the UK visa streaming tool, etc.). In this context, while artificial intelligence (AI) systems are increasingly being used for highly consequential decision-making in a (for the time being) regulatory vacuum, important efforts have been made to re-configure existing legal (and other) safeguards in the algorithmic context. And although a significant portion of these efforts in Europe has (reasonably) revolved around data protection law (the GDPR and, less frequently, the LED) as a framework regulating all personal data processing, including by AI algorithms, much less attention has been paid to long-standing principles of procedural fairness that form the very bedrock upon which our societies are built. Contributions in this area have only recently started to emerge (see, for instance, Demkova, Fink and Finck).
Against this background, this blog post examines how two general principles of EU law pertaining to procedural fairness, the administration’s duty to give reasons and the principle of equality of arms during judicial review, can be re-interpreted in the digital era based on CJEU case law. In this respect, this contribution revisits older and more recent case law, including the Ligue des droits humains judgment, in an attempt to derive legal standards for the transparent and accountable use of AI in individual public sector decision-making.
The duty to give reasons in the digital era
The CJEU has paid significant attention to the duty to give reasons as a cornerstone of transparency and accountability of administrative decision-making. Jurisprudentially derived from the general principle of effective judicial protection (Craig), it is currently codified as an essential element of the fundamental right to good administration under Article 41 of the EU Charter of Fundamental Rights (EUCFR). It ensures transparency by obliging the administration to justify its decisions on the basis of both law and facts, and guarantees accountability by facilitating the exercise of judicial review through an assessment of the legality of the decision’s grounds. According to the Court’s case law, the duty to give reasons requires a statement of reasons that must disclose the legal and factual grounds on which the decision was based (see, in this respect, Craig, pp. 1139-1141). Its content “must be appropriate to the act at issue” and present “in a clear and unequivocal fashion the reasoning” of the competent authority (Sytraval, para. 63). In the context of individual decision-making, the reasons must be “sufficiently specific and concrete to allow the person concerned to understand the grounds of the individual measure adversely affecting him” (PI, para. 57). In R.N.N.S. and K.A., a case concerning the rejection of two visa applications on the ground of risk to public policy, internal security, or public health, the Court held that summary justifications based on pre-determined lists of grounds did not meet this threshold of specificity. On the contrary, the statement of reasons must indicate not only the applicable ground but also the essence of the reasons for the refusal.
In Ligue des droits humains, the Court further specified the content of this obligation in the context of using AI for automated processing of flight passengers’ PNR data (including via profiling). Although this judgment primarily concerns the field of criminal law enforcement, the interpretation provided by the Court may have wider implications for public sector decision-making, especially in the Area of Freedom, Security and Justice, in so far as it concretizes the application of this principle in the context of automated decision-making (ADM). Of course, sectoral specificities in other areas of administrative decision-making should be taken into account when applying the findings presented here.
In this judgment, the Court took the duty to give reasons one step further by requiring the decision-making authorities to ensure that the person affected by ADM is able to understand how the system and the assessment criteria work, without necessarily disclosing to him/her what these are during the administrative procedure, so that he/she can “decide with full knowledge of the relevant facts whether or not to exercise his or her right to […] judicial redress” (para. 210). It is worth noting, in this respect, that in many cases individuals might be unaware of the use of ADM by public authorities for examining their case (Palmiotto), for several reasons: because of human involvement in the decision-making process, because of other limitations on the applicability of Articles 22, 13(2)(f) and 14(2)(g) GDPR (see the WP29 guidelines), and/or because of additional limitations on administrative transparency for reasons of public order, public safety or national security. Bearing that in mind, the Court’s aforementioned interpretation necessarily imposes an obligation on the administration to inform the person concerned, along with the statement of reasons, that their personal data have been subjected to automated processing (where exceptions to the provision of such information under the LED are not applicable). Furthermore, when read in light of the above jurisprudential standards and, particularly, of the specificity criterion, the Court’s interpretation also seems to imply not only the provision of meaningful information about the logic of the algorithmic operation in general, but, most importantly, an explanation of the algorithmic decision in the case at hand, including the types of personal data automatically processed and the rationale of the automated decision in relation to the individual circumstances of the affected person (on different types of explanation, see Wachter, p. 78, and on different methods of explainability, see Liga).
In fact, general information about the logic of the algorithmic operation, no matter how meaningful, may not allow the person concerned to understand why their individual circumstances were assessed as they were, so as to formulate a meaningful appeal. Finally, considering that an algorithmic explanation may not suffice as a legal justification (on the difference, see Hildebrandt), in cases where there is human involvement in the decision-making process, the duty to give reasons, as analysed above, should also necessitate the disclosure of the reasoning of the human officer manually reviewing the automated output, based on both law and facts, as this would be the most essential element of a legal justification.
The principle of equality of arms
While the duty to give reasons is an essential requirement to enable a person to challenge a (semi-)automated decision, in the context of judicial review the principle of equality of arms is equally essential to allow them to prove their claims. This principle aims to ensure a fair balance between the litigants by requiring procedural equality, that is, “a reasonable opportunity to present his case – including his evidence – under conditions that do not place him at a substantial disadvantage vis-à-vis his opponent” (Otis, para. 71; similarly Dombo Beheer v Netherlands). The principle of equality of arms, a corollary of the very concept of a fair hearing, is rooted in the adversarial principle, which is guaranteed as part of the rights of the defence under Article 47 EUCFR. According to the latter, “the parties to a case must have the right to examine all the documents or observations submitted to the court for the purpose of influencing its decision, and to comment on them” (Varec, para. 45).
In situations where access to evidence may be restricted due to competing public or private interests, such as national security, public order, trade secrecy or intellectual property (IP) rights, compliance with the above principles may require disclosure of otherwise confidential information. In such cases, the reviewing court must strike a fair balance between the competing interests by weighing the value of non-disclosure against the materiality of the evidence at issue for the affected person and the potential interference with his/her procedural rights. In any case, non-disclosure must not affect the essence of such procedural rights and must be limited to what is strictly necessary and proportionate to the aims pursued (ZZ, para. 69 in conjunction with Art. 52(1) EUCFR).
In Ligue des droits humains, the CJEU clarified that, in the context of ADM, the reviewing court and, except in case of threats to State security, the affected person themselves must have the opportunity to examine all the grounds and evidence on the basis of which the decision was adopted, including the pre-determined assessment criteria and the operation of the program in question (para. 211). This interpretation fleshes out the above principles in the digital era and seems to establish a flexible, yet robust, transparency requirement in the context of judicial adjudication. Read in light of the Court’s aforementioned case law, this interpretation would signify that any person adversely affected by (semi-)automated decision-making in the public sector should in principle be able to access material evidence to prove his/her claims, including the pre-determined assessment criteria, the algorithmic output in the case at hand and any manual re-examination (if not already disclosed), as well as the technical and organisational measures adopted by the competent authority to mitigate the risks of ADM (e.g. relevant standard operating procedures (SOPs) and administrative guidelines). Regarding the system’s technical specifications, the affected person should be allowed to access non-confidential information, and the ‘essential content’ of information protected under trade secrecy or IP rights, in a way that reconciles the developer’s commercial interests with the affected person’s right to effective judicial protection (mutatis mutandis, Antea Polska, paras. 66-67).
In cases of threats to state security, the Court’s interpretation in Ligue des droits humains seems to imply that access to confidential information may be restricted for the affected person but not for the reviewing court. As mentioned earlier, even in such cases, restrictions on information disclosure must not affect the essence of the rights to effective judicial protection and to a fair hearing, and must be limited to what is strictly necessary and proportionate to the aims pursued (ZZ, para. 69 in conjunction with Art. 52(1) EUCFR). Furthermore, the CJEU has established specific procedural safeguards to guarantee these fundamental rights. Firstly, the person concerned must be informed “in any event, of the essence of those grounds (on which the decision is based) in a manner which takes due account of the necessary confidentiality of the evidence” (ZZ, para. 69). Secondly, the competent court must be able to assess the existence and validity of the state security reasons invoked by the national authority refusing information disclosure, as well as the legality of the decision in question under the applicable legal framework (ZZ, para. 58). Finally, the reviewing court shall apply “techniques and rules of procedural law which accommodate, on the one hand, legitimate State security considerations […] and, on the other hand, the need to ensure sufficient compliance with the person’s procedural rights” (ZZ, para. 57; Kadi and Al Barakaat, para. 344). Considering the principle of national procedural autonomy, these techniques and rules will mainly be for national courts to determine on the basis of national procedural law and may involve, for example, cross-examination of the confidential material evidence by a security-cleared investigator representing the claimant in closed hearings (see, for example, ECtHR, Chahal v UK, para. 131).
Overall, the CJEU’s case law on the duty to give reasons and the principle of equality of arms includes important safeguards for ensuring procedural fairness in the digital age, even if there is still much room for elaboration. In Ligue des droits humains, the Court seems to have favoured a gradual two-tier transparency scheme for public sector ADM in the Area of Freedom, Security and Justice, which requires explainability of (semi-)automated decisions at the stage of administrative decision-making, and a flexible yet robust, in my opinion, level of transparency at the stage of judicial review. It remains to be seen how this line of reasoning will be elaborated in future cases, especially after the entry into force of the AI Act and of other instruments of secondary EU law regulating public sector ADM. Until then, there are, in my view, several reasons for optimism about the protection of procedural rights in the era of AI despite the inherent opacity and complexity of such systems.
Alexandra Karaiskou is a Ph.D. researcher at the European University Institute and a lawyer practising in Greece. Her PhD project investigates the human rights implications of AI applications used by the EU for border and immigration control with a focus on the right to non-discrimination. Her research interests focus on the interactions between technology and law, especially from a human rights and public law perspective. Before starting her PhD, she practised law in Greece, specialising in European and international human rights law. She holds an LL.M. in Comparative, European and International Laws from the EUI, an LL.M. in Human Rights Litigation from the University of Grenoble Alpes (IDEX Grenoble Scholar) and an LL.B. in Law from the Aristotle University of Thessaloniki (Greece).