Summary: In EU law, the fundamental right to good administration is a general principle of EU law and is enshrined in Article 41 of the EU Charter of Fundamental Rights. The challenging question arising from the use of AI systems in public administrations is how the requirements imposed by Article 41 of the Charter fit the case of machine learning algorithms, particularly with regard to the duty of care imposed as a matter of EU law.
In the EU, decision-making in individual cases by public administrations is framed by EU law, EU general principles and administrative procedures that are essential for administrative justice. More particularly, the principle of good administration and the principles stemming from it shield natural and legal persons from the arbitrary exercise of public power, but they also help to ensure the proper conduct and effectiveness of proceedings by regulating “administrative discretion and the design of administrative procedures”.
The recent wave of digitisation and datafication in the public sector has created momentum for the use of algorithms that take advantage of vast public datasets to improve governance and, as problematised here, for the use of “algorithmic decision systems” (‘ADS’). One of the challenging questions arising from the use of AI systems in public administrations is how the requirements imposed by Article 41 of the EU Charter of Fundamental Rights fit the case of machine learning (‘ML’) algorithms, particularly with regard to the duty of care imposed as a matter of EU law.
The Duty of Care in EU Law
The ECJ developed the duty of care in its case law as a subprinciple “inherent” in the principle of good administration. It is a general principle of EU law that applies to all stages of administrative procedures. Its relevance in administrative procedures is such that, in the very words of the ECJ, “nothing justifies” the administration failing to comply with the duty of care.
More specifically, the duty of care requires the administration “to conduct a diligent and impartial examination of the contested measures, so that it has at its disposal, when adopting the final decision, the most complete and reliable information possible” and to take into account all relevant information prior to arriving at an administrative decision.
The Duty of Care in Algorithmic Contexts: The Challenges Ahead
The challenges algorithms pose to public administrations are as significant as the benefits they could bring. More specifically, the deployment of ADS using ML is promising for enhancing administrative decision-making. Predictions generated through ML techniques offer public servants new efficiencies that tap the power of their administrative data. By analysing amounts of data far beyond the capacity of a single public servant, ML algorithms could help public actors make more accurate decisions and prioritise policy or enforcement action where it is most needed. Yet, while administrative decisions can be automated by ML algorithms or improved by recommender systems, governments are constrained by constitutional and administrative principles. The deployment of AI systems within public administrations may clash with some of the very principles upon which liberal democracies are based. Without addressing the objections to ADS exhaustively, it is critical to highlight some major concerns challenging the duty of care in EU law.
Compliance of ADS with the duty of care should be assessed against two important requirements of the duty: first, the requirement for the administration to take into account all the information capable of substantiating the administrative decision taken (1); second, the administration’s compliance with impartiality and fairness requirements (2). How these requirements fit with ADS is not straightforward. Although assessing ADS’ compliance with the duty of care is highly contextual, some general considerations can be outlined. Finally, a distinction is drawn between the challenges raised by fully automated systems and those raised by recommender systems (3).
1. Evidence that is factually accurate, reliable and consistent
Finding reliable data to feed ML algorithms poses many challenges, since the administration must ensure that its decisions rest on factually accurate, reliable and consistent information.
Even assuming that an ADS could technically integrate the relevant provisions of EU law it applies, the case law relating to those provisions and the factual information of a specific case, legal issues remain and must be tackled in order to comply with the duty of care. Some scholars warn that the translation of facts into machine-readable data sometimes transforms facts “into a simplistic format”, and that ML algorithms are unable to grasp legal issues relating to risk assessment or to deal with abstract legal concepts (e.g. the precautionary principle). With this in mind, it should be highlighted that ML algorithms cannot always take into account the full picture of a case. While ML algorithms are certainly capable of processing more data than human beings, the rigidity and formalised language of code might not be sufficient to meet the legal standards of good administration imposed by the duty of care. The duty of care requires a wide range of information to be taken into account in decision-making, and some of that information cannot be quantified or encoded.
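To make the point about simplification concrete, the following is a minimal, purely illustrative sketch; the case fields and the feature encoding are invented for this example. It shows how an administrative case might be flattened into the numeric format an ML model expects: any fact or legal nuance without a corresponding field simply disappears from the algorithm’s view.

```python
# Hypothetical illustration: encoding an administrative case as a fixed
# feature vector. Any nuance that has no corresponding field is lost.
from dataclasses import dataclass

@dataclass
class CaseRecord:
    # Quantifiable facts survive the translation into machine-readable data...
    applicant_age: int
    years_of_residence: float
    prior_infractions: int
    # ...but open-textured legal concepts must be collapsed into crude proxies.
    # A boolean cannot capture a contextual assessment such as whether the
    # precautionary principle applies to the facts at hand.
    precautionary_principle_engaged: bool

def to_feature_vector(case: CaseRecord) -> list[float]:
    """Flatten the case into the numeric format an ML model expects."""
    return [
        float(case.applicant_age),
        case.years_of_residence,
        float(case.prior_infractions),
        1.0 if case.precautionary_principle_engaged else 0.0,
    ]

# Everything the model "sees" is this vector; any fact that was not encoded
# is, from the algorithm's perspective, not part of the case at all.
print(to_feature_vector(CaseRecord(42, 7.5, 0, False)))
```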
Beyond the general issue of finding reliable training data, an additional challenge emerges in the context of network structures, which have mushroomed over the years in the European administrative space. Take the example of a composite procedure in the EU, that is to say, a procedure within the scope of EU law where one leg of the procedure is at the EU level and the other leg is at the Member State level. Suppose incorrect data could undermine an ADS’ accuracy and lead to incorrect outputs: how can public servants using ADS in a composite procedure access data owned by another administration and verify its accuracy? If public servants cannot review the accuracy of data not owned by their administration, it is questionable how they can conduct a diligent examination in algorithmic contexts. While it remains to be seen on a case-by-case basis whether the duty of care has been respected, the use of ADS in network structures certainly adds a layer of complexity for administrations that have to rely on “the most complete and reliable information possible”.
Overall, the difficulty of establishing accurate ground truth, combined with the limited reviewability of data accuracy across administrative networks, raises serious questions about ADS’ compliance with the duty of care.
2. Impartiality and fairness
Another issue stemming from the use of ADS is that they might reduce fairness within administrative procedures. Already in non-algorithmic situations, Rusch notes that “[p]ublic authorities are a party in the procedure and also the judge in the procedure insofar as the procedure is aimed at producing an administrative act, i.e. a decision of the administration”. Consequently, “the principle of impartiality is structurally weakened in administrative procedures”. The risk inherent in the use of ADS is that they might further increase the imbalance of power between the administration and administrative subjects.
While sufficient guarantees to exclude legitimate doubts about the partiality of the administration must be offered, it is unclear whether current guarantees are sufficient. One example from the technical solutions developed for the explainability of AI systems can illustrate the issues raised by ADS. It has been shown that “[d]ifferent explanation algorithms lead to different explanations” and that “even [when] a single explanation algorithm [is used], there can be many different parameter choices that all lead to different explanations”. The ability of administrations to choose among several explanation algorithms is problematic under the duty of care. Having that choice might lead to unfair results if the administration picks the algorithm that best suits its interests. Although the administration might present the explanation algorithm it used as an “objective explanation” of its ADS, two questions arise: first, how can administrative subjects be sure that the administration’s choice is not biased? Second, are the outputs given by the ADS fair?
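A short sketch can illustrate this divergence with two off-the-shelf explanation techniques applied to the same toy model; the data and model here are invented stand-ins, not a real administrative system. Nothing guarantees that the feature rankings the two techniques produce will agree, and the second technique’s output additionally depends on parameter choices such as the number of repeats and the random seed.

```python
# Hypothetical illustration: two common explanation techniques applied to the
# same model can rank the same features differently.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # three anonymous case features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explanation 1: ranking features by the magnitude of the model's coefficients.
coef_ranking = np.argsort(-np.abs(model.coef_[0]))

# Explanation 2: permutation importance, whose result varies with parameter
# choices such as n_repeats and random_state.
perm = permutation_importance(model, X, y, n_repeats=10, random_state=1)
perm_ranking = np.argsort(-perm.importances_mean)

print("coefficient-based ranking: ", coef_ranking)
print("permutation-based ranking:", perm_ranking)
# Nothing guarantees the two rankings agree; an administration free to pick
# whichever "explanation" it prefers could pick the more flattering one.
```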
3. Are Recommender Systems More Compliant With the Duty of Care Than Fully Automated Systems?
If an EU administrative procedure is to be fully automated, the duty of care strongly requires that fully automated systems be capable of processing all the information needed to substantiate the administrative decision. It must be kept in mind that human judgments, even those of experts, are riddled with cognitive biases. Algorithms, by contrast, are noiseless; that is to say, their outputs will not vary depending on factors such as stress or fatigue. Some scholars therefore argue that algorithms could reduce human error, because mechanical judgments generally outperform clinical judgments. Since automated decision-making has the potential advantage of being less noisy than human judgment and of processing vast amounts of data, it could be argued that, theoretically, these systems not only comply with the duty of care requirements but comply with them better than human beings do. However, while processing vast amounts of data is undoubtedly important for taking better administrative decisions, most administrative procedures require a case-by-case analysis based on contextual factors. If these factors cannot be encoded, it is doubtful these systems would meet the duty of care’s standards.
Recommender systems might, however, help public authorities to better comply with the duty of care if ADS’ shortcomings are compensated for by human judgement. There are several advantages to assisting public servants in their decision-making with recommender systems. They could theoretically increase the consistency of public administrations’ decisions. Even though recommender systems are not able to grasp all the factors required to assess the final outcome of an administrative procedure, public servants remain in the loop of the decision-making process and could therefore compensate for the inability of ML algorithms to take certain factors into account, especially subjective and political ones. Theoretically, human intelligence combined with algorithms could produce administrative decisions in which more information is taken into account than without the use of recommender systems; a simple sketch of such a human-in-the-loop arrangement is given below. However, whether this holds remains to be seen in practice. The use of recommender systems is not without pitfalls regarding the duty of care, especially once the cognitive biases people exhibit towards technological automation are taken into account. The aura of infallibility surrounding technology can breed complacency, so that a decision taken with the support of a recommender system could, in some cases, be de facto automatic. In such a scenario, the duty of care standard is unlikely to be met.
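The following is a minimal, purely illustrative sketch of such a human-in-the-loop arrangement; the names, fields and safeguard shown are invented for this example, not drawn from any real system. The design point is that the final decision, and the case-specific reasons for it, must remain traceably the official’s rather than the system’s.

```python
# Hypothetical illustration: a human-in-the-loop wrapper around a recommender
# system. All names and structure are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Recommendation:
    outcome: str        # e.g. "grant" / "refuse"
    confidence: float   # model confidence, not a measure of legal correctness

def decide(recommendation: Recommendation, official_reasons: str,
           official_outcome: str) -> dict:
    """Record a decision that is traceably the official's, not the system's."""
    if not official_reasons.strip():
        # Requiring a substantive, case-specific statement of reasons is one
        # (imperfect) safeguard against rubber-stamping the recommendation.
        raise ValueError("A case-specific statement of reasons is required.")
    return {
        "system_recommendation": recommendation.outcome,
        "final_outcome": official_outcome,
        "reasons": official_reasons,
        "overridden": official_outcome != recommendation.outcome,
    }

# The official departs from the recommendation on the basis of information
# the model could not take into account.
decision = decide(
    Recommendation(outcome="refuse", confidence=0.93),
    official_reasons="Applicant's new evidence of residence was not in the "
                     "training data and outweighs the model's risk score.",
    official_outcome="grant",
)
print(decision)
```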
Conclusion
While transposing the existing case law on the EU right to good administration to algorithmic contexts raises many challenges, the duty of care should be duly taken into account to protect individuals from the unlawful use of ADS, especially in situations falling outside the scope of EU secondary law. In those situations, the EU right to good administration will be the only protection against the unlawful use of ADS. The current power asymmetries between citizens and an AI-wielding public administration could lead to unchecked abuses of power, a matter not to be taken lightly, because wrongful automated decisions have the potential to erode citizens’ trust in their administration and increase overall political distrust.

Benjamin Jan
Benjamin is a PhD candidate in EU law at the University of Liège (EU Legal Studies). Previously, he worked there as a teaching assistant in EU Law and Technological Innovation. He interned at the European Court of Justice (under AG Hogan), the European Parliament (ALDE) and the European Commission (DG GROW). He holds an LLM in International and European law from the University of Amsterdam, a master’s degree in business law from the University of Liège, and a bachelor’s degree in law from the University of Liège.