Who told you so? Automation of information by public authorities and EU administrative law: Automated information-providing (“AIP”) by public administrations


In August 2023 the Italian legislator introduced a new rule precluding small taxpayers from requesting formal clarifications on a specific case from the national tax authority when they can access written answers given by rapid query services, implemented also "through the use of digital technologies and artificial intelligence technologies". The government will decide if and how to implement this provision, which seems to be the first of its kind, at least for Italy. Nonetheless, the idea of an automated reply on the interpretation of the law is an interesting example of the use of AI by public authorities.

Public administrations all over the world already use AI-based tools to interact with citizens. Instruments such as chatbots and virtual assistants can be employed to help users find the correct information on a website, or to channel queries from the public to the competent office. However, the most recent technologies make it possible to automatically provide even more specific and elaborate information. Generative, conversational chatbots may be able to offer clarifications on specific cases and explain rules, adapting their responses to different types of users. This is a task usually carried out by public officers and employees, who answer queries, assist in the preparation of forms, and check whether documents are complete, both formally and informally. The automation of this "information providing" activity (which we can refer to with the general term "automated information-providing" – AIP) can significantly help public administrations carry out their functions more efficiently. However, AIP also poses significant questions regarding the quality and reliability of its output, the rules that need to be followed to employ it, and the possible remedies for the users and citizens involved.

This post briefly discusses (i) how automated information-providing can be characterized from a legal point of view, on the basis of administrative law and data protection law, and, in particular, whether the rules on automated decision-making contained in the GDPR can be considered applicable in this area; (ii) which principles of EU administrative law can help protect the citizens and users involved; and (iii) some possible conclusions that can be drawn, also in light of the proposed EU Artificial Intelligence Act.


One preliminary question regards the legal nature of "AIP" by public authorities: what is the nature of a reply by a chatbot, or of an instruction automatically provided by a virtual assistant? The answer may of course vary according to the specific characteristics of the AIP: there may be different kinds of addressees (natural or legal persons, individuals or groups), different degrees of formality (text, speech, formal document), different stages of a public procedure (outside, before, or within an administrative procedure), and different goals (general instructions or guidelines for the public, replies to frequently asked questions, or clarifications in response to specific queries). For the sake of this brief analysis, we can identify some general characteristics of AIP: it concerns, in principle, non-binding information, addressed to one or more individuals to offer instructions and clarifications on the interpretation of the law or the practice of an authority.

From an administrative law point of view, therefore, the digital outputs of AIP are probably not to be considered rulemaking or (general or individual) administrative acts. AIP lacks, in fact, the specific structure, legal effects, and law-based procedures that characterize administrative acts.

It is more complex to determine whether AIP can instead be considered an example of "automated decision-making" (ADM), a well-known notion in EU law, and in particular in data protection law. The General Data Protection Regulation provides for rights of information on ADM (Articles 13(2)(f) and 15(1)(h) GDPR) and the right to contest a decision based solely on automated processing (Article 22). The scope and content of these provisions, also in a public context, have been widely discussed in the legal literature (see, for example, Demková, Edwards and Veale, Hofmann, Olsen et al., Oswald). However, EU case law on the meaning of (public) ADM under the GDPR has not yet been formed.

Some directions are contained in the Guidelines of the Article 29 Working Party on ADM and profiling under the GDPR. The Guidelines include, for example, targeted advertisement as a possible relevant form of "decision-making" for the purposes of Article 22 of the GDPR. One can argue that adverts – even if targeting specific (groups of) persons based on profiling – are not really "decisions" (Veale and Edwards). On the other hand, if adverts can be considered as such, then perhaps the information issued by a public authority through a chatbot, or other automated channels, can also be a form of ADM subject to the provisions of the GDPR. The main elements of Article 22 of the GDPR were again analyzed in the very recent opinion of CJEU Advocate General Pikamäe on the pending SCHUFA Holding case (C‑634/21). According to the Advocate General, the etymology of the word "decision" implies the idea of an "opinion" or "position" on a certain situation, but also a binding character, which distinguishes it from a mere recommendation without legal or factual consequences. Limiting the scope of ADM to binding acts would probably exclude automated information, which – as noted above – is in principle not binding. However, in another part of the opinion the Advocate General seems to take a broader approach, observing that the absence of a definition of the term "decision" in EU law can justify a wide scope encompassing several types of acts capable of affecting data subjects in many ways. On this approach, AIP able to affect the behavior of the data subject, even in the absence of a legal constraint, may be considered ADM.

Of course, even if we include AIP within the scope of ADM, the rules of the GDPR, and specifically Article 22, apply only when all other relevant conditions are met. First, processing of personal data needs to be involved: this is not always the case if we consider information not related to natural persons or not concerning an individual situation (such as, for example, general clarifications by public authorities on the interpretation of a rule). Second, the decision must be based "solely" on automated processing and must produce legal effects or similarly significantly affect the data subject. These two requirements can both be interpreted in broad terms, and this is the approach that both the Article 29 Working Party Guidelines and Advocate General Pikamäe seem to follow. However, identifying a decision that is solely automated and has relevant legal or significant effects can still be complicated (see on this Binns and Veale) and probably needs to be assessed on a case-by-case basis, analyzing once again the specific characteristics of the AIP involved and of the data subject. Also, possible exceptions to the application of Article 22 of the GDPR, namely the authorization of the decision by Union or Member State law (Article 22(2)(b)), may be in place.

AIP and the principles of EU administrative law

Even where the rules of the EU data protection system are not applicable, the position of citizens and users affected by AIP can be protected under the principles of EU administrative law.

Since, as established above, AIP does not have the nature of a binding administrative act, one of the principles that can be discussed is the protection of legitimate expectations, in cases where the representation issued through AIP is favorable to the addressee but not confirmed by the subsequent decisions of the competent authority. The CJEU has recognized legitimate expectations deriving from representations made through different means and in different forms. The representation needs, however, to be sufficiently precise and specific (case C-414/08 – Sviluppo Italia Basilicata SpA) and not in contrast with unambiguous provisions of EU law (case C‑516/16 – Erzeugerorganisation Tiefkühlgemüse). Nonetheless, when this last requirement is not met, CJEU case law does not preclude the protection of legitimate expectations by national law, by means of compensatory damages, even in the case of representations in contrast with unambiguous EU law (case C-36/21 – Sense Visuele Communicatie).

If no specific requirement as to the form of the representation applies, even informal digital instruments can in principle generate a legitimate expectation. The Italian administrative judge, for example, has recognized a source of legitimate expectation in the replies to "frequently asked questions" (FAQs) published on a website. In this context, representations made by public authorities through automated information-providing could also be considered capable of generating a legitimate expectation.

However, AIP needs to be attributed directly to the authority competent for the relevant administrative procedure and the final decision. The competent authority needs to be held accountable for the information provided and for any legitimate expectation it generates. The accountability and responsibility of the authority are directly linked to a second principle particularly relevant for AIP: the principle of transparency. Transparency is considered an important rule for ADM in general, in both public and private contexts. With AIP, too, the authority needs to ensure that the cases in which automated information is issued, its technological features, and the responsible offices and officials are registered, traced, and made publicly available. Transparency is, in fact, instrumental for the authority to recognize and answer for the automated information provided (Hofmann).

This can increase the accountability and reliability of public administrations: if all AIP is duly traced according to transparency criteria, it is possible to rely on the information provided by the competent authorities and hold them accountable for it, thus fostering the consistency of the instructions provided by administrations even through informal means.

A final guiding principle that can play a central role for AIP is administrative empathy, which conveys the idea of understanding and adapting to the individual situation of each citizen and user. Empathy is not specifically grounded in positive (EU) administrative law, but is considered a component of the principle of good administration (Ranchordás). In the context of AIP, especially where it substitutes informal communication, empathy would be needed to adapt the message to the characteristics of the addressee (as in Di Porto), but also to identify the cases in which AIP needs to be supplemented or perhaps replaced by human intervention.

AIP, AI Act and risk assessments

Automated information-providing by public administrations in the context of EU law can therefore be regulated by data protection law and by the principles of administrative law. The rights and principles analyzed above can be seen as expressions of the idea of fairness, which is a core element both of the GDPR and of the principle of good administration. Some of these principles are also recalled in the draft EU Artificial Intelligence Act: transparency obligations, for example, are expressly included in the draft for (high- or low-risk) systems that interact with natural persons, as in the AIP discussed here. To some extent, AIP systems may even be considered high-risk AI systems, if they are seen as a form of evaluation of eligibility for public assistance, benefits, and services (Annex III of the draft regulation). Even when this is not the case, it may be advisable for public authorities to assess the risks involved in the use of automated information-providing, considering the specific rules and principles applicable to them and their relationship with the public.

Marco Fontana
PhD researcher in legal sciences at the University of Pisa

Marco Fontana is a PhD researcher in legal sciences at the University of Pisa. His fields of research are administrative law, economic public law, and legal technology.
