Introduction
The use of Artificial Intelligence (AI) for automated decision-making (ADM) raises several legal issues well discussed in the literature (see here and here). One of those problems concerns the difficulty of providing an explanation of decisions made by an AI system, due to the characteristics of these technologies (consider the “black box problem”; the correlation-based logic; the difficulty of programming rules into algorithms that ensure the correctness of the logical-juridical process leading to a decision; biases deriving from datasets; etc.). Without delving into the problems related to the concept of explanation and explainability of AI, this paper will analyse the explanation of automated decisions from two perspectives: on the one hand, the requirement to provide an explanation under the duty to give reasons in EU administrative law; on the other hand, the requirement to provide an explanation under the so-called “right to explanation” (RTE). Indeed, the latest version of the AI Act (AIA) presented by the European Parliament seems about to introduce – barring surprises – the widely debated RTE for cases where AI is employed for ADM.
The introduction of such a right will have a significant impact not only in the public sector but also in the private law domain. Indeed, starting from the assumption that, except in cases provided for by law (e.g., under the duty to give reasons), a decision made by a human being does not entitle the decision-subject to an explanation, with the RTE decision-subjects will have – under specific conditions – a right to a meaningful explanation when an AI system is used to make the decision.
The right to a reasoned decision
The duty to give reasons serves two functions: giving the courts the possibility of exercising their power to review the legality of the decision, and giving individuals the opportunity to protect their rights (by receiving enough information to determine whether the decision is well-founded). In this regard, the CJEU often invokes Article 47 CFR to support the requirement of reason-giving. As explained by Demková and Hofmann, in conjunction with Article 47 CFR sufficient reasoning is both a procedural and a substantive guarantee which allows plaintiffs to assess the potential success of their claim and thus to prepare their defence effectively. Accordingly, reasoning must enable a person “to defend his or her rights in the best possible conditions and to decide, with full knowledge of the relevant facts, whether there is any point in applying to the court with jurisdiction” (see here).
In providing the reasoning, the authority is required to explain the facts and legal considerations of decisive importance in the context of its decision. However, the reasoning does not have to include all points of fact and law, since it must be assessed in light of its context and “all the legal rules governing the matter in question”.
According to the CJEU, the duty to state reasons varies according to the type of act, the nature and content of the decision, the interests of the individuals affected by the decision, the specific context, and the legal rules governing the matter in question.
For instance, in the case of acts of general application, the reasoning has to provide the legal justification, an explanation of the situation that led to the adoption of the act, and the objectives which the act is intended to achieve. A more stringent duty to state reasons applies to acts addressed to individuals. Indeed, the addressee must be able to assess the lawfulness of the act affecting him or her and, where appropriate, challenge it.
Another element affecting the statement of reasons is the discretion of the body adopting the act. In the case of discretionary acts, the reasoning should be more rigorous, so as to facilitate both the understanding of the reasons and the exercise of judicial review, and it may not be limited to a mere indication of the factors analysed. According to the case law, in the case of discretionary acts it is necessary to indicate the objective and predetermined criteria on the basis of which such acts can be adopted.
Also, according to the CJEU, when the act is consistent with established case law, measures may be reasoned in a summary manner (as long as judicial review remains possible). Conversely, if the body states a new principle or applies an existing one in a different way (i.e., in the case of exceptional measures), the reasoning should be more rigorous.
The “right to explanation”
The proposal of the AIA introduces, at Article 68c, a provision called “right to explanation of individual decision-making”. This provision will impose a new duty on the AI deployer, i.e., “any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity” (therefore, EU bodies will also have to comply with it). More precisely, “[a]ny affected person subject to a decision taken by the deployer on the basis of the output from an high-risk AI system which produces legal effects or similarly significantly affects him or her in a way that they consider to adversely impact their health, safety, fundamental rights, socio-economic well-being or any other of the rights deriving from the obligations laid down in [the AIA], shall have the right to request from the deployer clear and meaningful explanation pursuant to Article 13(1)”. Such an explanation will have to include: (i) the role of the AI system in the decision-making procedure, (ii) the main parameters of the decision taken, and (iii) the related input data.
With this provision, the AIA – like Regulation (EU) 2016/679 (GDPR) – seems to translate constitutional values (such as due process) into the context of automated decision-making, extending the principle of the rule of law to the private sphere. In a modern society, where non-state actors have emerged as new dominant actors alongside the State, individuals are vulnerable and deserve further protection. Accordingly, the AIA would require explaining the logic of deployers’ actions and decisions as a guarantee to (re)balance powers.
In this scenario, the AIA seems to take a position in the debate that arose around the existence of the RTE in the GDPR, which was arguably inspired by the right to a reasoned decision. Indeed, both these rights aim to provide decision-subjects with the reasons behind the decision, so that they can potentially bring legal proceedings, assess the legality of the decision, defend their rights in the best possible conditions, and decide how to behave with full knowledge of the relevant facts. Similarly, both rights try to rebalance the informational and power asymmetry between decision-makers and decision-subjects. Hence, the RTE can arguably be considered a “private law” variant of the right to a reasoned decision.
However, there is a fundamental difference between the right to a reasoned decision and the RTE: whereas the reasoning must show that the measure was adopted secundum legem, the explanation is sufficient when it shows that the decision was not taken contra legem. EU law already provides for situations in which the reasoning aims to justify the legitimacy of the decision. By contrast, private companies can – under the freedom of contract – take a decision which, provided that it does not violate, e.g., the right to non-discrimination, is legitimate and justified by that very freedom. For example, a bank may legitimately decide not to grant a loan to a person because of a given behaviour, such as a gambling addiction: this decision is lawful regardless of whether the person has a right to obtain information about it, provided that it is not contra legem (e.g., that it does not violate the right to non-discrimination).
In summary, whereas the right to a reasoned decision must provide information concerning the subsumption of algorithmic parameters under rules of positive law or principles of law, the explanation does not have to provide this information, because the RTE is released from the obligation to follow the parameters of administrative law and procedures.
The relationship between the right to a reasoned decision and the RTE. Conclusions
When dealing with two such similar rights, and considering what has been discussed above, one problem is understanding whether the right to a reasoned decision “absorbs” the RTE. In this regard, considering the content requirements of these rights, it is arguable that the right to a reasoned decision does not fully absorb the RTE. Indeed, whereas the right to a reasoned decision has stringent requirements (provided for by case law) and must provide information concerning the subsumption of algorithmic parameters under rules of positive law or principles of law, the explanation does not have to provide this information, because the RTE is “released” from such obligation (as seen above). Moreover, the AIA adds some further content requirements (the role of the AI system in the decision-making procedure, the parameters of the decision, and the related input data): this approach seems to tackle the problem of identifying the elements deemed essential when delegating a decision to an AI system and, at the same time, to provide information which enables the decision-subject to understand the decision and, if necessary, challenge it. In other words, the AIA seems to identify information which may help public authorities comply with their reason-giving duties when deferring to a recommendation provided by an AI system – even if the reason-giving requirements are more demanding (for a thorough and interesting analysis, see here). In this regard, it is arguable that the content requirements identified (to date) in the AIA are not sufficient. Without dwelling here on what should be required in an explanation, arguably some elements are fundamental to enable one’s understanding of a decision, such as (i) the input data having decisional impact or (ii) the data features.
A further point should be stressed. The RTE in the AIA is applicable only when decisions have a significant impact, and therefore it has a more limited applicability than the right to a reasoned decision (also because the latter is a general principle of EU law, whose applicability is warranted in all contexts of individual decision-making in the EU). Not having to explain decisions that lack a significant impact is, however, a reasonable approach taken by the EU, since a balance must be struck between the need for explanation and the costs associated with the difficulties of explaining automated decisions: the AIA, in this sense, formalizes which cases warrant an explanation. Indeed, the utility of explanations must be balanced against the cost of generating them. Accordingly, not every decision should be explained, but only those which significantly impact the decision-subjects. In particular, what renders a decision problematic, and thus worthy of an explanation, is the impact such a decision may have when the decision-subject does not know how and why it was taken.
In light of these brief considerations, the introduction by the AIA of an explanation duty addressed to both private and public actors can be problematic. As noted by Hofmann, “[m]ixing public and private obligations is problematic since each have different legal obligations as to their procedures. Arguably, the use of AI in public decision-making should better be integrated into a general EU administrative procedures act and address specific effects of ADM on decision-making and rule-making procedures”. In this regard, the application of the RTE should take inspiration from the right to a reasoned decision. Indeed, ADM processes are governed by rules and transparency policies specific to the sector in which they are used: the explanation’s content should take into account such rules, as well as the context and the goal for which the RTE is exercised. A “general” RTE applicable to all areas of law risks being ineffective, because it would ignore sector-specific features and the protection of other competing rights. Therefore, the transparency requirements and the information to be provided should differ depending on the type of right the explanation must safeguard and the purpose for which that right has been exercised.

Jacopo Dirutigliano
Jacopo Dirutigliano received his PhD in "Law, Science and Technology" (Joint International Doctorate) from Alma Mater Studiorum – Università di Bologna and the University of Luxembourg. He is a lawyer registered with the Turin Bar Association. He is “cultore della materia” in Legal Informatics at Alma Mater Studiorum – Università di Bologna, and a member of the “Law and Technology Group” of the University of Turin. His main research topics are the right to explanation, explainable AI, and information technology law (especially data protection law).