Reasons-giving in AI-assisted decision-making: what can we (not) learn from current EU Administrative Law?


Introduction

Traditional (national) administrative law doctrine and European Union (EU) law scholarship have always tended to pull in different directions. Traditionally, administrative law scholarship has suffered from an old methodological legacy of conceptualism. Whenever faced with a new development, its instinct is to ask how that development fits into the discipline’s preexisting doctrinal framework, rather than to ask whether the development fits at all, or indeed whether the framework needs to be reformed. EU law scholarship has often leaned in the opposite direction. Whenever faced with a new development, say, the integration of bank supervision or of digital markets, the instinct of Union law scholarship is to focus on what is unprecedented rather than on how the development can best be understood if its aspects of novelty are considered alongside its aspects of continuity with preexisting law.

The growing discussion on artificial intelligence (AI) assisted decision-making in EU administrative law should learn from these past habits of administrative law and EU law scholarship. Scholars should neither underestimate how profoundly the rise of AI will change administrative decision-making, nor hastily assume that the kinds of challenges brought by AI are entirely alien to EU administrative law as it stands.

One approach could bring more nuance and analytical clarity to the debate. The approach is to ask whether the distinctive aspects of AI-assisted decision-making bear any analogy with aspects of administrative decision-making that we already know to raise specific issues through the lens of good administration requirements under EU law. Put differently: current debates could benefit from asking not whether AI-assisted decision-making itself represents an entirely new reality in (EU) administrative law – it obviously does – but whether the types of problems raised by AI-assisted decision-making are entirely new problems. They are not.

A novel form of administrative decision-making; not-so-novel problems of administrative law

The potential of artificial intelligence in the public sector is immense. Soon, from predictive policing to the streamlining of applications for benefits, AI will inevitably become a standard instrument of administrative authorities across the EU and beyond. Yet some of its distinctive features make legal scholarship (and not only legal scholarship) uneasy. The purpose here is to examine whether such features raise problems for the observance of the duty to give reasons and other associated good administration requirements and, if so, whether and how such problems can be remedied, at least in part, by analogy with other problems already known to EU administrative law. It is therefore in order to briefly recall the key features of AI-assisted decision-making, on the one hand (I would emphasise three), and the key requirements of the administration’s duty to give reasons, on the other (I would emphasise two).

First, the use of AI carries the risk of different biases. One is algorithmic bias: the risk that implicit biases affecting the dataset used to train the algorithm will lead to biased output. Another is automation bias: the risk that decision-makers uncritically trust AI outputs by – falsely – assuming that the technologies used are objective, neutral and immune to human error.

Second, there is the black-box nature of algorithms, which generates opacity and unpredictability. The design of AI tools privileges predictive accuracy in their inferences over the intelligibility of the tool’s internal functioning to human users. It may not always be possible – not even for expert designers and engineers – to reconstruct how AI tools reached a given conclusion from the data on which they were trained.

Third, precisely because the internal logic of AI-assisted information processing is not always intelligible or accessible, and because of the risks of bias, there must be a “human in the loop”. As the accountability of public administration is undermined if it can simply delegate power away to fully automated decision-making, there must be an opportunity for the intervention of an official who is able to judge the merits of an AI-proposed decision and take charge of the final decision. A human being, put differently, who can make the call as to whether an algorithmic inference should be translated into a final decision that will shape the lives of individuals.

As for the duty to give reasons, two aspects should be stressed. First, the main function of the duty, at least under EU law, is to facilitate judicial review and individual judicial protection (see here, at para 21). This is particularly important in cases where authorities exercise discretion, as respect for essential procedural requirements is one of the few aspects of discretionary administrative decision-making that courts tend to be willing to review. After all, statements of reasons lay bare whether the decision-making process was tainted by bias, errors of fact, manifest errors of appreciation, misuse of power, or simply disregard for (other) procedural rights and requirements. Second, to ensure that the function of reasons-giving as a judicial review tool is fulfilled, CJEU case law sets out several specific requirements. Paraphrasing that case law (see for example here, at para 53), a statement of reasons must disclose, clearly and unequivocally, the relevant grounds of fact and law that formed the basis for the final decision. Perhaps tautologically: a statement of reasons must expose the administration’s reasoning – the arguments that justify why, rather than the causes (such as technological assistance) that explain how, a decision was taken.

A novel form of decision-making; old tools in reasons-giving

To my mind, there are four useful analogies between reasons-giving in AI-assisted decision-making and more familiar aspects of decision-making in EU administrative law.

First, as it is generated to support the adoption of a final administrative decision, an algorithmic inference resembles a preparatory act. To this extent, AI-assisted decisions can be reasoned similarly to decisions taken pursuant to multi-step administrative procedures, where a variety of measures, such as reports, opinions or draft decisions, are taken to pave the way for a final decision. Given that a “human in the loop” must be able to decide differently from an algorithm’s output, the latter can be likened to a non-binding preparatory act – i.e., to a preparatory act with which the final decision is not required to conform. If an authority chooses not to decide in accordance with a non-binding preparatory act, then the latter does not form part of the final decision’s factual and legal basis, and therefore does not need, in principle, to be disclosed in the statement of reasons. If, in contrast, the authority does choose to follow the preparatory act in its final decision, then its agreement with that act, and the reasons for the final decision contained in it, must be disclosed.

In my view, the same distinction must hold, by analogy, when choosing whether and to what extent a statement of reasons must disclose the use of AI and what decisional outcome the AI tool had proposed. However, the analogy has its limits. Administrative authorities are usually allowed to reason their decisions by adopting referential statements of reasons – i.e., statements which merely note that the decision is taken for reasons that can be found in the preparatory acts. This practice should not be available in AI-assisted decision-making: given the black-box functioning of AI systems, the “reasons” for the AI-generated draft decision are not accessible or intelligible to the decision’s addressee – and therefore cannot meet the CJEU’s requirement that reasons be disclosed in a “clear and unequivocal” fashion.

A second analogy concerns the need for the administration to take ownership of its decisions. The use of AI concerns how, not why, a decision was taken. The requirement that AI-assisted decision-making must allow for human intervention means that the final decision cannot be reasoned by simply stating that it was AI-generated. The statement of reasons must reveal that the decision-maker seriously considered whether the algorithmic output was relevant in the concrete case. One useful precedent, I believe, exists in administrative procedures where an authority decides based on a draft decision prepared by another authority, but must still make it unequivocal that it takes sole responsibility for the final decision. In such cases, the final decision-maker must adopt a statement of reasons which shows that the draft decision was examined carefully and critically rather than simply rubberstamped. This is an important lesson from recent CJEU case law on the duty of the Commission to reason decisions that are proposed to it by the Single Resolution Board (see here). As EU Agencies are constitutionally forbidden by the Meroni doctrine from exercising discretion to make economic policy choices, the statement of reasons must demonstrate that such choices are, in reality, made by the Commission.

A third analogy concerns the judgment of authorities when they exercise discretion in a concrete case. That judgment is conditioned by several good administration requirements, one of which is the duty of care. The duty of care is the “duty of the competent institution to examine carefully (…) all the relevant aspects of the individual case”. Put differently, the administration must diligently investigate all the relevant facts of the case, and critically and meticulously consider the facts of which it becomes aware. The implications of this duty in AI-assisted decision-making become clearer if one considers two of its distinctive features – namely, the need to ensure there is a “human in the loop” and the need to prevent the risk of officials’ automation bias.

A decision cannot be adopted simply because it was proposed by an algorithm. Instead, it can only be adopted if the decision-making authority deemed the proposed decision appropriate after gathering and analysing all the relevant facts of the case. The decision-maker will have to reflect compliance with the duty of care in these terms, since such compliance constitutes a necessary condition for the validity of the final decision. The statement of reasons will need to show that the decision was taken, not because an automated statistical inference from the training dataset could be made, but because the decision-maker, having collected and assessed all the information relevant to the concrete case, takes the view that the inference is pertinent.

A fourth analogy concerns self-imposed limitations on administrative discretion. Public administration often adopts general rules that set out how it intends to exercise discretionary powers in future cases. This practice promotes legal certainty, as addressees of the administration’s powers will be better able to anticipate how their cases will be handled.

The adoption of discretion-steering guidelines has a number of similarities with AI-assisted decision-making. First, both are practical organisational measures adopted internally, within the administration, that help render the exercise of its discretion more predictable and consistent. Second, neither obliges the administration to always decide in a certain way – it remains free to follow or deviate from self-restriction guidelines or from the algorithm’s output. Were this not the case, there would be no room for an autonomous judgment by a “human in the loop” as to whether the AI-generated draft decision is appropriate for the concrete case. Third, neither can produce externally binding legal effects as such. For self-restriction guidelines or algorithms to have binding legal effects would mean that a decision taken by the administration that differs from the guidelines, or from the algorithm’s inferences, would be illegal. To this extent, I partly disagree with the view some have taken in legal scholarship – namely, Boix Palop – that algorithms are administrative rules: to account for the need for human intervention, they cannot be considered to constitute externally effective administrative rules.

In light of these similarities, one may ask whether the CJEU’s case law on guidelines may offer a useful starting point when determining how the duty to give reasons must be complied with when the administration decides in accordance with, or differently from, an AI-generated draft decision.

In cases where the administration decides in conformity with a guideline, it is under a lower burden of reasons-giving. The administration is allowed to reason a decision by stating that the reasons for it can be found in the applicable guideline (see here, at paras. 83 to 86). This, in my view, cannot apply in AI-assisted decision-making. Due to the “black box” unintelligibility and unpredictability of the internal functioning of algorithms, a statement of reasons that simply points to the algorithm that generated a decision would lead its addressee to decision-making criteria that s/he will not be able to access, let alone understand.

In contrast, where the administration decides differently from the criteria set out in guidelines – which aim at promoting equal treatment, legitimate expectations and legal certainty – it is under a higher burden of reasons-giving. The administration will need to justify why it departed from the guideline’s criteria in a concrete case (see here, at para. 84). Whether this solution can also apply in AI-assisted decision-making is debatable. On the one hand, the rationale for additional justification of why an algorithm’s outputs are not followed could potentially apply in some scenarios – namely, where the algorithm is introduced as official policy to promote objective and consistent decisions, and this generates a generalised legitimate expectation that the algorithm will actually be used. On the other hand, given the inherent risk of algorithmic bias, which may lead to discriminatory treatment, deciding differently from the algorithmic output may sometimes protect, rather than undermine, equal treatment.

Filipe Brito Bastos
Assistant Professor, NOVA School of Law and researcher at CEDIS

Filipe Brito Bastos is assistant professor at NOVA School of Law in Lisbon, where he teaches constitutional law, administrative law and European administrative law. He previously held a position as post-doctoral researcher at the Amsterdam Centre for European Law and Governance, at the University of Amsterdam. He was awarded the degree of Doctor of Laws by the European University Institute in 2018, after defending a dissertation concerning the development of legal principles in the CJEU’s case law that are specific to composite administrative decision-making. Filipe has published his research in journals such as CMLRev, EuConst, REALaw, GLJ, EPL and EJRR. His research interests include Portuguese, comparative and European public law broadly speaking (both administrative and constitutional law), particularly in the context of regulatory administrative law and multilevel administration.

