

Easy to Learn, Hard to Master: The Challenge of Intelligible AI in French Administration


In the pre-digital era of French administration, the obligation of the administration to give reasons for its decisions under Article 41 of the Charter of Fundamental Rights of the European Union was primarily associated with the legal and factual justification of individual administrative decisions made by human bureaucrats. With the adoption of the Law for a Digital Republic in 2016, however, the French legislator sought to promote the automation of administrative decision-making, employing artificial intelligence as a support tool or even a substitute for human intervention. This ambition introduced a new dimension to the right to good administration: among other reasons, the administration must now also account for AI’s contribution to the final decision.

Indeed, a connection between the use of AI algorithms and their implications for fundamental rights can already be observed in the French Data Protection Act of 1978. Its Article 1, for instance, declares that information technology must not infringe upon human identity, human rights, privacy, or individual or public freedoms. The Act was amended in 2018 to specify the application of the GDPR and to use the margin of manoeuvre it provided. On this occasion, the new Article 47 of the DPA allowed individual administrative decisions based solely on the automated processing of personal data. However, the data controller must ensure his “mastery” of the algorithmic processing and its evolution, enabling him to provide the data subject with a detailed explanation in an intelligible form. The ability to provide an intelligible explanation thus became a condition for the lawfulness of fully automated administrative decisions.

In this sense, it is noteworthy how the French legislator emphasized the dual nature of the obligation to give reasons for administrative decisions: to provide reasons, the administration must be capable of understanding and articulating them. Consequently, Article 47 of the DPA differentiates between the public official and the person to whom the decision is addressed. The former must comprehend the underlying logic of the AI because the latter has a right to an intelligible explanation. In practice, their degrees of comprehension are never equal: the bureaucrat is usually better informed, and the individual is often kept in the dark. It is therefore worth investigating how this principle works on both sides.

Right to explanation depending on AI’s contribution to the individual decision

While in the GDPR’s case the very existence of a right to explanation of automated decision-making is a matter of discussion (Wachter et al. 2017), French administrative law explicitly enshrines the right to a personalized explanation of algorithmic outcomes (Rochfeld 2018; Edwards & Veale 2018; Veale & Brass 2019). More specifically, the French Code of Relations between the Public and the Administration, in its Article L. 311-3-1, stipulates that the individual concerned must be informed about the use of algorithmic processing when an individual decision is made on the basis of such processing. Upon request, he must also be provided with the rules defining the processing and the main features of its implementation. The explanation should encompass the degree and mode of the algorithmic processing’s contribution to the decision-making process, the processed data and their sources, the processing parameters and their weighting as applied to the person’s situation, and the operations carried out by the processing. All this information must be conveyed in an intelligible form and must not infringe upon legally protected secrets.
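Purely as an illustration of the statute’s structure, and not of any official format, the elements that Article L. 311-3-1 requires the administration to communicate upon request can be pictured as a simple record; every identifier below is a hypothetical choice made for this sketch.

```python
from dataclasses import dataclass


@dataclass
class AlgorithmicExplanation:
    """Hypothetical record of the elements Article L. 311-3-1 requires
    the administration to communicate upon request (illustrative only)."""
    contribution_degree_and_mode: str           # how much, and in what way, the processing shaped the decision
    processed_data_and_sources: list[str]       # which data were used and where they came from
    parameters_and_weighting: dict[str, float]  # parameters as applied to this person's situation
    operations_carried_out: list[str]           # the processing steps performed

    def intelligible_summary(self) -> str:
        """Render the record as plain language for the person concerned."""
        lines = [
            f"Contribution of the algorithm: {self.contribution_degree_and_mode}",
            "Data used: " + ", ".join(self.processed_data_and_sources),
            "Parameters and weights applied to your situation: "
            + ", ".join(f"{name} (weight {weight})"
                        for name, weight in self.parameters_and_weighting.items()),
            "Operations carried out: " + "; ".join(self.operations_carried_out),
        ]
        return "\n".join(lines)
```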

This article covers not only fully automated individual administrative decision-making using personal data, like Article 47 of the Data Protection Act, but every individual administrative decision involving AI. One might assume that these guarantees are utopian: they could either obstruct the use of more or less sophisticated AI by the administration or simply be ignored. Indeed, the French Constitutional Council ruled that “algorithms that can revise the rules they apply themselves, without the control and validation of the data controller, cannot be used as the sole basis for an individual administrative decision.” It thus virtually banned machine learning techniques for such use.

Hence, the French administration has opted for a more flexible approach. In broad strokes, the right to access the algorithmic logic must be guaranteed as far as possible, but its scope varies and depends on the algorithm’s contribution to the individual decision (De Minico 2021).

As we have observed, fully automated administrative decisions are subject to the most severe limitations: not only is the use of machine learning, or of any other algorithm whose operating principles may not be disclosed due to legally protected secrets, excluded, but the decision must also comply with Article L. 311-3-1 on pain of nullity. Unsurprisingly, there is, to date, no example of such a fully automated decision-making system.

At the same time, we can refer to a French algorithm involved in semi-automated decision-making, namely Parcoursup. Designed to collect and manage the enrolment preferences of prospective students in French higher education, Parcoursup relies on an algorithm, parts of which contribute to evaluating applicants’ files. In order to protect the confidentiality of educational teams’ deliberations, the Code of Education excluded the application of the general rules provided by Article L. 311-3-1. Instead, it replaced them with the possibility of obtaining, upon request, information regarding the criteria and procedures for examining applications, as well as the educational justifications behind the decision. Consequently, the decisive part of the algorithm is not truly disclosed. The Constitutional Council deemed this measure consistent with the Constitution on the condition that rejected applicants can “be informed of the ranking and weighting of the various general criteria established by the institution, as well as the details and additions made to these criteria for evaluating enrolment preferences.” For the Council, limiting the right to explanation is justified by the secrecy of deliberations and the prominent human role in decision-making.

Another notable example is the recent authorisation of algorithmic processing of images collected through video-protection systems and drones in and around venues hosting the 2024 Summer Olympics. According to Article 10 of the law relating to the Olympic Games, the sole purpose of this processing is to detect, in real time, predetermined events that could indicate or reveal a risk of terrorism and promptly report them to the appropriate authorities. The lawmaker intentionally dissociated this processing from any form of individual decision-making, positioning it more as a predictive policing tool aimed at focusing the attention of public servants.

The planned machine learning system will not make any individual legal decision, nor will it support or serve as evidence for future decisions. Furthermore, no facial recognition technique may be employed, so its outcomes remain impersonal. These limitations on the algorithm’s contribution allowed the lawmaker to restrict the right to explanation of the AI’s logic to the point of making it nearly non-existent. The only guarantee provided by the law is that the public should be informed in advance of the use of such algorithmic processing, unless circumstances dictate otherwise.

Ultimately, many individuals may never be informed that they were analysed by an AI, even after being subject to an individual decision initially guided by this algorithm. At first glance, this might appear to be a failure to meet the administration’s obligation to provide reasons for its decision, because a significant part of those reasons remains hidden. In fact, the French administration attempted to shift the focus from the explanation of the AI’s functioning to reasoned human decision-making. Thus, ground-level public servants become responsible for their “mastery” of the algorithm they use and, consequently, for the final decision.

Obligation to “master” the algorithm when used by the administration

A crucial subtlety within Article 47 of the DPA lies in the fact that it mandates the administration to ensure its « maîtrise » of the algorithmic processing. The meaning of this word is challenging to translate into English. The closest equivalent is “mastery”, because the French lawmaker meant a simultaneous obligation to understand the functioning of an AI system and to control it; to possess the necessary skills and discretion to use it and, where appropriate, to challenge its outcomes. Despite criticism of the phrasing (Duclercq 2019), the primary message is that human decision-makers should be trained in and familiarized with the inherent logic of each algorithm they employ. The goal is not to make them understand the entirety of the encoded instructions, but to acquaint them with the algorithm’s workings to the extent necessary for comprehending the implemented policy and fulfilling their role in the process. Depending on the situation, a public official should either explain the algorithmic decision in an intelligible manner to its recipient, critically interpret an algorithmic outcome before making his own decision, or monitor the system’s functioning to identify anomalies and abort erroneous actions.

Although Article 47’s provisions concern fully automated administrative individual decision-making using personal data (which is quite limited), the principle of “mastery” concerns every case of AI use in public administration, including semi-automated decision-making. To a certain extent, the obligation of “mastery” complements the obligation of the administration to give reasons for its decisions. If the public official understands how an AI system works, he must be capable of explaining the final decision; if he controls the algorithmic outcomes, he is responsible for the reasons underlying the decision.

To illustrate this point, let’s revisit the example of the Olympics surveillance algorithm examined in the previous section. Even in this scenario, “mastery” over the system is imposed by the legislator and highlighted by the Constitutional Council’s related decision. For instance, the algorithm is subject to scrutiny at every stage of its lifecycle. More specifically, the Council stipulates that “…throughout their operation and particularly when they are based on learning, the employed algorithmic processing must allow verification of the objectivity of used criteria and the nature of processed data, as well as incorporate human control measures and a risk management system designed to prevent and correct the occurrence of any bias or misuse. (…) [T]he development, implementation, and potential evolution of algorithmic processing remain permanently under the control and mastery of human beings.”

“Mastery” is imposed not only in terms of control but also as an obligation to understand the AI’s functioning. As a reminder, the system is designed to detect and report predetermined events that could indicate or reveal the risk of terrorism using CCTV and drones. On the one hand, the algorithms’ capacity to extract and analyse information from these sources far exceeds human limits. Hence, one justification provided in the bill’s impact study is that “live viewing of all images captured by video protection cameras is materially impossible.”

On the other hand, the legislator directs the government to exhaustively list these predetermined events in the decree. The Constitutional Council specifies that the regulatory authority must ensure that the events can be detected without relying on AI techniques or data, and that the processing operations must not automatically link with other personal data related to the physical, physiological, or behavioural traits of an individual. Broadly speaking, the event should be identifiable by a human eye on the ground. In the end, the decree listed trespassing, crowd movement, excessive density of people, presence of a weapon, an abandoned object, an individual on the ground and, finally, a fire outbreak. Admittedly, all these events could be detected without algorithmic assistance.
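To make this closed-list constraint concrete, here is a minimal sketch, assuming a hypothetical detection pipeline rather than the actual system: detections are forwarded to a human operator only if they match one of the enumerated, human-verifiable events, and nothing outside that list (and no biometric attribute) is ever reported. The event labels mirror the decree’s list, but every identifier and parameter here is invented for illustration.

```python
from enum import Enum
from typing import Optional


class PredeterminedEvent(Enum):
    """Closed list of events fixed by the decree (labels are illustrative)."""
    TRESPASSING = "trespassing"
    CROWD_MOVEMENT = "crowd movement"
    EXCESSIVE_DENSITY = "excessive density of people"
    WEAPON_PRESENCE = "presence of a weapon"
    ABANDONED_OBJECT = "abandoned object"
    PERSON_ON_GROUND = "individual on the ground"
    FIRE_OUTBREAK = "fire outbreak"


def report_detection(label: str, confidence: float,
                     threshold: float = 0.8) -> Optional[PredeterminedEvent]:
    """Forward a detection to a human operator only if it matches one of the
    predetermined events; anything outside the closed list is discarded, and
    no facial or biometric attribute is ever part of the report."""
    try:
        event = PredeterminedEvent(label)
    except ValueError:
        return None  # the system may not invent new event categories
    if confidence < threshold:
        return None  # low-confidence inferences are not escalated
    return event  # a human official still decides whether to intervene
```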

The rationale behind this requirement is that the official has to determine whether the system’s notification is justified and, if it is, call for an intervention. To do so, he must see a connection between the observed phenomenon, a labelled event, and the risk of terrorism associated with that event. In other words, for an algorithmic inference to be accepted, a human decision-maker must at least suspect a mechanism linking the input to the conclusion (Sileno et al. 2018). Technically, a machine learning system could analyse thousands of subtle details such as facial expressions, weather conditions, lighting, etc., and link them to an abstract “risk of terrorism.” However, in that case, achieving the sought-after “mastery” and eliminating potential biases would be impossible. Thus, the official’s comprehension of the AI is considered a prerequisite for reasoned decision-making.

Alexandre Stepanov
PhD candidate in public law and a Teaching and Research Fellow (ATER) at the University of Lorraine, France.

Alexandre Stepanov is a PhD candidate in public law and a Teaching and Research Fellow (ATER) at the University of Lorraine, France. His current research focuses on the “Algorithmic Administrative Act”, a comprehensive model of automated administrative decision-making within the French legal framework.

