The problem of automated uncertainty in administrative decisions

Reading Time: 7 minutes

This post is based on the article “Automated Uncertainty: A Research Agenda for Artificial Intelligence in Administrative Decisions”, forthcoming in the Review of European Administrative Law.

1. Introduction

Administrative decision-making is a complicated affair. Every day, frontline administrators make many decisions that affect various aspects of individual lives. Some of these decisions primarily affect a single individual (such as the decision to grant or deny entry to a country), while others (such as the allocation of police officers) have broader effects. In both cases, however, administrators must operate within an increasingly complex social world in which information from various online and offline sources might be relevant to the decision. All this complexity not only creates additional work for administrators but also generates uncertainty about whether all relevant factors were taken into account in an administrative decision.

To address the perceived lack of clarity about the relevant facts for decision-making, administrative bodies in the European Union (EU) and its Member States are turning to artificial intelligence (AI). By quickly processing large volumes of data, the mathematical models powering AI systems promise to reduce complex problems to ‘closed worlds’ in which all relevant factors can be known and acted upon. In this post, however, I argue that using AI systems can also introduce uncertainty in decisional processes.

This automated uncertainty stems from the techno-scientific uncertainty surrounding the inner workings of AI systems, and it affects both the information supplied to administrators and the application of the law in administrative work. After presenting the sources of automated uncertainty in the public sector, I discuss how administrative decision-makers can respond to it in their decision-making processes. But, as I show, these responses are inherently limited, and so automated uncertainty becomes another factor that the public administration must consider when evaluating whether the use of AI is compatible with the right to good administration.

2. Certainty and uncertainty in public sector AI

Various factors make the use of AI tempting for public administration. Proponents of public sector AI often point out that the capabilities of modern AI systems can speed up administrative work, make up for the lack of personnel, or even lead to a more effective discharge of duties. AI systems would do so by processing the large amounts of data available to public sector entities and quickly extracting relevant information that would otherwise be missed or take much time to retrieve. If and when that promise holds, AI gives administrative bodies a clearer picture of the facts, which in turn allows the administration to make better decisions.

Increased certainty is undoubtedly a boost for administrative decision-makers carrying out their work. Indeed, AI systems seem to deliver on that promise in many cases, as shown by the proliferation of unproblematic uses of AI in European administrations. However, trust in the quality of AI outputs is not always warranted. Because AI systems are complex technical objects, administrators and other external observers can seldom understand their inner workings. And, sometimes, this opacity is used to mask the lack of scientific soundness of the outputs an AI system produces. For example, many applications of computer vision purport to use images to make inferences about an individual’s personality traits, capabilities, or behaviour, thus giving a veneer of rationality to long-debunked pseudoscientific claims. Under such circumstances, the impossibility of scrutinizing how an AI system operates creates scientific uncertainty about whether algorithmic outputs provide a valid portrait of the world.

Algorithmic opacity also introduces a second form of uncertainty, this time concerning the rules applied by an AI system. In some applications, AI systems are expected not only to process data but also to apply legal rules. As an example, Amanda Musco Eklund has shown how the ETIAS system applies criteria and screening rules, authorizing travel to the EU in the absence of a hit and triggering a two-step manual review otherwise. But, if the inner workings of an AI system are not visible to external observers, it becomes difficult to establish which rules the system actually applied. This issue is particularly salient in machine learning systems, which are designed to learn correlations from their training data rather than to apply pre-defined rules. The use of AI systems in functions that require the application of legal rules may thus introduce uncertainty not only about the facts under consideration but also about the content of the rules being applied in the first place.
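To make the kind of hit-based routing described above concrete, here is a minimal illustrative sketch in Python. The names (`Application`, `screening_hits`, `route_application`), the rule structure, and the outcomes are my own assumptions for the purpose of illustration, not a description of how ETIAS or any other system is actually implemented:

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    """Hypothetical travel-authorisation application record."""
    applicant_id: str
    screening_hits: list[str] = field(default_factory=list)  # screening rules that matched

def route_application(app: Application) -> str:
    """Illustrative routing: no hit -> automatic authorisation;
    any hit -> referral to a two-step manual review by human officials."""
    return "authorised" if not app.screening_hits else "manual review"

# A single matched screening rule is enough to take the case out of the
# fully automated path and hand it to human reviewers.
print(route_application(Application("A-001")))                    # authorised
print(route_application(Application("A-002", ["risk_rule_17"])))  # manual review
```

Even in such a simple sketch, the substance of the decision depends entirely on what the screening rules contain; if those rules are opaque, so is the decision.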

Further uncertainty emerges from the organizational context in which AI is used. Public-sector AI efforts tend to be large in scale, so as to leverage the data available to the administration, but large AI systems are even more susceptible to the factors identified above. In addition, public-sector AI is often used in high-stakes contexts (such as border control decisions, automated cartel screening, or fraud detection), which means that any errors stemming from uncertainty are likely to cause considerable harm to natural and legal persons. Finally, the internal rules and procedures of administrative bodies may create obstacles to understanding the precise role of an AI system in the decision-making chain. For example, it might be difficult to identify whether an AI system’s outputs are intended as decisions that produce effects on natural and legal persons or merely as sources of information for decisions made by humans. The technical black box of AI systems is thus compounded by the various forms of opacity that already exist within administrative bodies.

The actual impact of these sources of uncertainty will depend on the specific conditions in which an AI system is developed and used. Some techniques might lead to less opaque systems than others, while some organizations are especially prone to the non-technical sources of uncertainty mentioned above. By referring to these multiple forms of uncertainty under the single term automated uncertainty, I want to emphasize two things. First, these forms have a shared origin in the scientific uncertainty about whether and how AI systems can deliver the expected results. Second, an uncritical use of AI may introduce uncertainty and, in doing so, lead to worse decisions or to obstacles to accountability in administrative decision-making. AI is not an unmitigated good when it comes to certainty, and any assessment of AI in the public sector must consider whether these potential sources of uncertainty dilute or even eliminate the expected gains from automation.

3. The role of administrators in managing automated uncertainty

Automated uncertainty can be problematic for many actors. Citizens affected by decisions involving AI systems might find that opacity prevents them from effectively contesting outcomes. Courts might find that automated uncertainty affects their assessment of how the administration handled the facts of the matter or even the identification of who is responsible for a decision in the first place. Discussing all these perspectives here would not be feasible, so the remainder of the post focuses on a group that is sometimes overlooked: the administrators who must operate or otherwise use AI systems.

The introduction of AI systems into administrative workflows can take two forms. In some cases, AI is used to support human administrators in their work. An AI system may carry out tasks such as providing risk scores that aggregate information present in a large data set or suggesting potential courses of action, while leaving the final decision in human hands. Alternatively, an AI system might be used to replace human decision-making entirely, for example, by assessing whether fraud occurred in a particular case. Even in the latter case, however, administrators are not fully removed from the scene, as provisions such as Article 22(3) GDPR often impose human oversight requirements on automated decision-making. Consequently, administrative decision-makers are likely to need to evaluate the operation of AI systems, even if automated uncertainty creates obstacles to doing so.

Automated uncertainty is a potentially formidable challenge for administrators attempting to keep AI in check. Suppose an administrator cannot be certain of the factual accuracy of AI outputs, or of the role played by AI in a decision, and they cannot directly inspect the system. How can they be expected to ensure the quality of the ensuing decisions? Because of these difficulties, some authors have gone so far as to argue that the use of AI amounts to a shift in the exercise of discretion: the content of decisions is no longer determined by administrators, who lack the means to alter a system, but by the software developers who define how an AI system will operate. Such a view provides the important warning that AI governance must attend to the design of AI systems, but I believe it overstates the helplessness of administrators.

As they interact with AI systems, administrators remain bound to observe the various duties entailed by the right to good administration. Whenever AI systems are used in the application or enforcement of EU law, be it by EU bodies or by the Member States, these duties stem from Article 41 of the Charter of Fundamental Rights. In strictly national contexts, the Charter itself is not applicable, but the constitutional traditions of the Member States recognize various facets of the right to good administration, such as the administrators’ duty to give reasons for the decisions they make. The performance of these duties is unavoidably transformed by automated uncertainty, which blocks the scrutiny of an AI system’s decisional procedures. But, if suitably interpreted, these duties can help keep automated uncertainty in check.

For example, it is often argued that the duty to give reasons becomes impossible to discharge if one cannot explain the inner workings of an AI system. But, if obliged by law to use an AI system they cannot explain, administrators can still provide reasons at a different level of abstraction. They can explain whether and how they accounted for uncertainty as they used the outputs of the AI system, and they can explain the role that these outputs play in the final decision under their responsibility. None of these reasons dissipates automated uncertainty, but they allow for the ex post assessment of whether the measures adopted by the administrator ensure the reliability and sufficiency of AI outputs for the final decision. Therefore, they enable the controlling functions that justify the existence of a legal duty to give reasons.
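As a purely illustrative sketch of what reasons at this different level of abstraction might capture, one could imagine a structured decision record along the following lines. The field names and structure are my own assumptions, not a prescribed or existing format:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """Hypothetical record of how an AI output was used in a final decision."""
    ai_output: str               # what the system produced, e.g. "risk score: 0.82"
    role_in_decision: str        # e.g. "one factor among several" or "primary basis"
    uncertainty_handling: str    # how the administrator accounted for known uncertainty
    administrators_reasons: str  # the administrator's own grounds for the final decision
```

Even such a simple record would allow a reviewing body to assess, after the fact, whether the administrator treated the AI output with the caution its uncertainty warranted.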

Similarly, administrators might be able to evaluate the quality of system outputs even when faced with automated uncertainty. Consider an algorithm used by police authorities to assess individual risk of criminal behaviour. If such a system ascribes a high risk to a toddler, it seems highly unlikely that the system has detected a precocious criminal mastermind; instead, the result is more likely to be an error or a sign of identity theft. Not all errors by an AI system will be that egregious, of course, and detecting subtler failures will likely require a concerted effort with technical experts. But an attentive administrator familiar with their work domain will be able to spot manifestly wrong outputs and thus discharge some of their duty of care.
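As a minimal sketch of the kind of plausibility check an attentive administrator (or a simple supporting tool) might apply to such outputs, consider the following. The age cut-off, the threshold, and the function name are hypothetical choices made for illustration only:

```python
def flag_implausible_risk_score(age: int, risk_score: float,
                                min_plausible_age: int = 12,
                                high_risk_threshold: float = 0.8) -> bool:
    """Illustrative plausibility check: a very high risk score assigned to a
    very young person is more likely an error or a sign of identity theft
    than a genuine signal, so it is flagged for closer scrutiny."""
    return age < min_plausible_age and risk_score >= high_risk_threshold

# A toddler with a high score should be flagged, not treated as a suspect.
assert flag_implausible_risk_score(age=3, risk_score=0.95)
```

Checks of this kind do not open the black box, but they let domain knowledge catch outputs that are manifestly at odds with reality.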

This is not to say that frontline administrators can entirely solve automated uncertainty. Even a diligent bureaucrat cannot do more than treat the AI system as a black box. And, ultimately, policymakers are the ones who have the power to determine how much automated uncertainty is acceptable in a particular application and to take systemic measures that address uncertainty at its root causes. Nonetheless, procedural duties can still guide administrators as they seek to address the effects of whatever uncertainty could not be removed through systemic measures. Automated uncertainty is not the death of decision-level controls, but an invitation to rethink them as part of a broader support network for good administration.


AI regulation PhD researcher @EUI, working on the relationships between the law and software architectures. Resident mustelid enthusiast.

