
The cooperative legal construction to promote the explainability and justification of high-risk automated decision-making (ADM) systems


In this post, I ask whether the regulation of the right to explainability of high-risk ADM systems, which arises from the General Data Protection Regulation (GDPR), is adequate for the protection of users’ fundamental rights.

Introduction

The fourth industrial revolution is based on AI systems built on algorithms that recognize patterns in datasets through correlations and then make decisions (data-driven), unlike expert systems, which rely on hand-written code to produce outputs and are therefore deterministic (code-driven). This revolution is transforming culture. In turn, law regulates intersubjective behaviors. Law is an open subsystem of culture as a system; therefore, law regulates social interactions in the digital ecosystem. Likewise, the “law in good shape”,[1] i.e. the rule of law, is one of the means of politics, whose goal is the common good, together with human rights and democracy.
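
To make the distinction concrete, the following is a minimal, illustrative Python sketch contrasting a code-driven rule with a data-driven classifier; the credit-scoring scenario, features, thresholds, and toy data are hypothetical and chosen only for illustration.

```python
# Code-driven vs data-driven decision-making (hypothetical example).
from sklearn.linear_model import LogisticRegression

# Code-driven (expert system): a hand-written rule is deterministic and
# always produces the same output for the same input.
def rule_based_decision(income_k: float, debts_k: float) -> str:
    return "approve" if income_k - debts_k > 20 else "reject"

# Data-driven (ML): the decision boundary is inferred from correlations
# found in past data rather than written down as an explicit rule.
X_train = [[55, 10], [18, 12], [70, 30], [25, 24]]   # income, debts (in thousands)
y_train = [1, 0, 1, 0]                               # 1 = repaid, 0 = defaulted
model = LogisticRegression().fit(X_train, y_train)

applicant = [[40, 15]]
print(rule_based_decision(40, 15))                                # deterministic rule
print(model.predict(applicant), model.predict_proba(applicant))   # inferred probability
```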

Transparency is one of the requirements of the rule of law. The traditional conditions for the configuration of transparency and opacity in constitutional democracies are that government should be transparent about its operations and that its citizens must be shielded from governmental scrutiny.[2] In turn, law protects legal interests within a framework of predictability. Hence, law draws the boundaries of its system by trying to exclude, in principle, chance, i.e., unpredictability, from its contents. Consequently, Holmes said that the object of our study is the prediction of the incidence of the public force through the instrumentality of the courts. Similarly, prediction is a central theme of machine learning (ML) algorithms, which apply mathematics to huge amounts of data in order to infer probabilities. Likewise, the requirement of transparency strengthens the predictability that is key to the law’s effectiveness and to the promotion of trust as a pillar of the cooperative construction of law. In other words, law uses transparency as a tactic within a predictive strategy, as a means to cooperate.

For example, the requirement of transparency demands that judges decide cases fairly by arguing, that is, giving reasons, for their decisions; this is the purpose of the judicial function. Thus, in constitutional states, e.g. those that make up the EU, there is talk of an “argumentative turn” because of the importance of this task in the face of the growth of hard cases, in which gaps are declared and constitutional principles are weighed as a prior step to elaborating the rule for the concrete case. This provides solutions with the “appearance” of justice. Along these lines, it was said that justice must not only be “done” but also that, manifestly and undoubtedly, justice must be seen to be done.[3] Precisely, argumentation constitutes a means of explaining and justifying these responses of justice. This inspires citizens’ confidence in law, thereby reinforcing cooperation and the effectiveness of legal norms.

By contrast, law requires an explanation of ADM systems, often described as algorithms, but does not explicitly demand their justification. “The translation of technical concepts into intelligible and understandable formats is often referred to as ‘explainability’” (Leslie and Kazim). The explanation provided by the system can support contestability (Article 22 GDPR), which is a rule of law requirement, only if it provides the factual grounds of the decision. Likewise, Hildebrandt asserts that knowing how the algorithm came to its conclusion does not imply that the conclusion is in accordance with the law, thereby differentiating the explanation of the ADM from its justification.

Explanation arises, e.g., from Recital 63 GDPR, which gives the data subject the right to know and receive communications about the logic underlying any data processing related to the ADM.[4] Recital 63 GDPR is integrated with Article 22 GDPR, which regulates ADM, as well as Recital 71 GDPR, which lists explanation as one of the safeguards in the case of ADM. This potentially grants the data subject the right to an explanation of the technology.

Sometimes, algorithmic correlations block the transparency intended through explainability, unlike, e.g., judicial decisions, which are based on rules of causality. Hence, Cabitza et al. stress that “providing AI with explainability, that is the capability to properly explain its own output, is more akin to painting the black box of inscrutable algorithms (such as deep learning or ensemble models) white, rather than making them transparent. What we mean with this metaphoric statement is that explainable AI (XAI) explanations do not necessarily explain (as by definition or ontological status) but rather describe the main output of systems aimed at supporting (or making) decisions: this is why we described XAI explanations as a meta output. As such, explanations can fail to make the output (they relate to) more comprehensible, or its reasons explicit; or, even, they can be wrong”.
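
Cabitza et al.’s point that XAI explanations are a “meta output” can be illustrated with one common post-hoc technique, a global surrogate model. The sketch below, using synthetic data and an arbitrary black-box model, is only an assumption-laden illustration: the surrogate’s weights describe the black box’s behaviour, but they do not show that any individual decision is correct or lawful.

```python
# A post-hoc "explanation" as a meta output: a simple surrogate is fitted to
# the black box's predictions and its weights are offered as the explanation.
# Data, features, and models are synthetic/hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                       # three hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)  # non-linear ground truth

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global surrogate: approximate the black box's behaviour with a linear model.
surrogate = LogisticRegression().fit(X, black_box.predict(X))

print("surrogate weights (the 'explanation'):", surrogate.coef_)
print("fidelity to the black box:",
      (surrogate.predict(X) == black_box.predict(X)).mean())
# Even a high-fidelity surrogate only describes the output; it can misstate
# the reasons for individual decisions and says nothing about justification.
```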

On the other hand, requiring transparency of said algorithms could violate the rule of law principle of not requiring the impossible,[5] since our minds are not able to comprehend with clarity the whole process of an algorithmic operation that is based on correlations rather than causalities. Hence, the difficulty of pinpointing why an AI system reached a certain outcome or decision can make ADM processes impenetrable, turning them into black boxes. Also, the one-sided explanation given by those responsible for the algorithms’ operation could increase trust in AI systems by reinforcing our tendency towards automation bias regarding products that can cause harm, e.g., facial recognition systems, military drones, clinical decision support systems, and so on, based on “seemingly” predictable but truly “unpredictable” algorithms; unlike court rulings, where the argument is made to “do justice” and to “appear” to do justice. Consequently, we should not take the perceived and actual utility of XAI, based on the GDPR, for granted.[6]

Algorithmic center intervention to reduce the unpredictability of high-risk ADM systems

Therefore, I propose the creation of an Algorithmic center, public and private, composed of an interdisciplinary team and with citizen participation,[7] for the certification and monitoring of ADM systems that pose a high risk to the health and safety or fundamental rights of individuals[8] throughout the evolution of their operation. In this way, we could build XAI on empirical evidence, strengthening explanations of the algorithms’ usefulness by contrasting them with a dynamic reality and thus reducing their unpredictability.
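
As an assumption-laden sketch of what monitoring “during the evolution of operation” could involve, the snippet below compares a system’s live performance against a baseline fixed at certification time; the metric, threshold, and data are hypothetical and not drawn from any existing certification regime.

```python
# Hypothetical post-deployment monitoring check against a certified baseline.
from sklearn.metrics import accuracy_score

CERTIFIED_ACCURACY = 0.90   # value recorded at certification (hypothetical)
TOLERATED_DROP = 0.05       # drift tolerance before escalation (hypothetical)

def monitoring_report(y_true, y_pred) -> dict:
    """Compare observed live accuracy with the certified baseline."""
    live = accuracy_score(y_true, y_pred)
    return {
        "live_accuracy": live,
        "certified_accuracy": CERTIFIED_ACCURACY,
        "requires_review": live < CERTIFIED_ACCURACY - TOLERATED_DROP,
    }

# Outcomes later observed in production vs. the system's earlier decisions.
print(monitoring_report([1, 0, 1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 0, 0, 1]))
```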

The public-private nature of the Algorithmic center could increase cooperation among the various stakeholders, as it could be an arena for sharing information and finding collaborative solutions. As a consequence, a “multi-sided” explanation would be given instead of the “one-sided” explanation required by the current regulatory framework. Furthermore, I propose that the Algorithmic center evaluate the risk level in the static and dynamic stages, i.e., during the total product lifecycle, by the following means: the creation of a precertification program; periodic reports by firms to the Algorithmic center on implemented updates and performance metrics; requiring developers to provide public information about the data used to validate and test ADM systems, so that end users can better understand their benefits and risks; review of whether a modification falls within the change control plan alone or requires approval of a new version; and the implementation of an algorithm change protocol, which contains the types of modifications anticipated as a result of the variations caused by ML.[9]
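
By way of illustration only, an algorithm change protocol of the kind mentioned above could be represented as a pre-approved list of anticipated modification types against which each proposed update is checked. The categories and fields below are hypothetical, loosely inspired by the FDA action plan cited in the footnote, and not a description of any enacted procedure.

```python
# Hypothetical representation of an algorithm change protocol check.
from dataclasses import dataclass

# Modification types pre-approved in the change control plan (hypothetical).
ANTICIPATED_MODIFICATIONS = {"retraining_on_new_data", "threshold_tuning"}

@dataclass
class ProposedUpdate:
    system_id: str
    modification_type: str
    description: str

def within_change_control_plan(update: ProposedUpdate) -> bool:
    """True if the update stays within the pre-approved protocol;
    otherwise a new version would need approval."""
    return update.modification_type in ANTICIPATED_MODIFICATIONS

update = ProposedUpdate("adm-001", "new_input_feature",
                        "adds postcode as a scoring feature")
print(within_change_control_plan(update))  # False -> new approval required
```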

In turn, the evaluation will not prevent the interested party, at any time during the algorithm’s evolution, from seeking to modify the risk classification of the ADM system by means of a well-founded request. This differs from meta-regulation (“meta because one (macro) regulator oversees another (micro) regulator in their management of risk”) (Zingales) imposed by the AI Act and the Digital Services Act.

For example, the AI Act establishes unilateral ex ante and ex post control of algorithmic information.[10] In a similar sense, Recital 96 of the Digital Services Act (DSA) says that

the Digital Services Coordinator of establishment or the Commission may require access to or reporting of specific data, including data related to algorithms. Such a requirement may include, for example […] functioning and testing of algorithmic systems for content moderation.

The European Centre for Algorithmic Transparency (ECAT) will “act as the Commission’s technical support service for compliance with the DSA”; it will “be a vehicle for prospective knowledge and high-quality research”; and it will foster the creation of a network around XAI.

On the other hand, Tutt proposes the creation of an agency dedicated to algorithmic regulation, modelled on the Food and Drug Administration (FDA). The core of his proposal is to approve complex and dangerous algorithms only when they are shown to be safe and effective for their intended use and when satisfactory measures will be taken to prevent their illegitimate use. Tutt argues for the appropriateness of an FDA-inspired agency model in that “the products the FDA regulates, and particularly the complex pharmaceutical drugs it vets for safety and efficacy, are similar to black-box algorithms.” I do not agree with this analogy because algorithms, unlike foods and drugs, learn dynamically, which makes them complex and renders some of their actions unpredictable.

Moreover, Malgieri and Pasquale propose, as a matter of principle, a ban on high-risk AI systems with a reversal of the burden of proof, so that providers must justify why these systems are not illegitimate. They recommend focusing on algorithmic justification because of the impossibility of causally explaining the algorithmic process that leads to the ADM in some cases, e.g., deep learning algorithms. Instead, I propose suspending the operation of ADM systems in the event of non-compliance with the standards issued by the Algorithmic center. Hence, this post’s aim is not, in principle, to promote prohibition, but rather the creation of an entity of a mixed nature, i.e., the Algorithmic center, which constitutes a means for the construction of efficient and safe algorithms within a strategy of explainability and justification.

This is based on aligning law, as a means of regulating social interactions fairly, across space, time, and scale. To this end, coercion is not sufficient for the achievement of such algorithms; on the contrary, a cooperative strategy fits the speed, complexity, and characteristics, e.g., algorithmic opacity, of the digital era. The Algorithmic center could weigh the risks of AI against its benefits within a framework of human rights protection. To do so, it would first look at the state of development of ADM systems and then decide on the appropriate measure; this could be part of the systematic regulation of AI.

In sum, instead of relying on the unilateral coercion reflected in a prohibition, as a matter of principle and unless proven otherwise, of all risky ADM systems, I offer a scheme in which explainability results from a cooperative process of algorithm development. This would increase the security of algorithms and citizens’ trust in them, since their robustness would be evaluated within a collaborative framework of communication between diverse cultural subsystems; that communication supplies computer science with the necessary legal information, which will support the justification of algorithms and reinforce the protection of citizens by algorithmic design. The justification must observe the entire EU regulatory system, from, for example, human rights to the GDPR principles, i.e., fairness, purpose limitation, storage limitation, accuracy, data minimization, accountability, and integrity and confidentiality. This will serve as a means of preventing damage caused by ADM systems within a regulatory framework that does not stifle innovation or impede competition.

Conclusion

I think that the creation of the Algorithmic center is part of the legal strategy of building law in a cooperative or “rational” way, since human beings use reasons to justify themselves and to convince others, two activities that play a fundamental role in their cooperation.[11] Cooperation constitutes a key tool for the evolution of the human species[12] and can expand the time dimension,[13] thus being a means to reinforce long-term prediction. In turn, cooperation increases trust in “healthier” algorithms and decreases, as I said, the unpredictability that AI systems could present in the long run.[14] Hence, cooperation between firms, citizens, and the State could facilitate [a] algorithmic protection by design, [b] the State’s in-depth knowledge, as the “common good guarantor”, of the criteria embedded in ADM systems, [c] citizens’ empowerment through their participation in the Algorithmic center and through guidelines for their education, and [d] the robustness of XAI based on empirical evidence and on justification under the normative systems of constitutional states, which contain fundamental rights as their axioms, giving the “appearance” of justice to ADM.

Suggested Citation

Matías Mascitti, ‘The cooperative legal construction to promote the explainability and justification of high-risk automated decision-making (ADM) systems’ (The Digital Constitutionalist, 2 February 2023). Available at https://digi-con.org/transparency-symposium-mascitti

Notes

1. John Finnis (1980). Natural Law and Natural Rights (Clarendon Press), at 270.
2. See Mireille Hildebrandt (2016). Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Edward Elgar Publishing).
3. See Rex v Sussex Justices, ex parte McCarthy.
4. From a perspective of the different types of explanation in correlation with the technical and legal factors that affect the feasibility of the explanation and information offered to the person affected by an ADM, see Brkan and Bonnet.
5. See Lon L. Fuller (1969). The Morality of Law (rev. edn., Yale University Press), ch. II.
6. See Cabitza et al. above.
7. See Matías Mascitti (2022). La función preventiva de los daños causados por la robótica y los sistemas autónomos [The preventive function with respect to damage caused by robotics and autonomous systems]. Revista Brasileira de Direitos Fundamentais & Justiça, 16(1), 15–54.
8. Annex III of the AI Act.
9. See the US FDA’s AI/ML-based SaMD Action Plan.
10. See Articles 12 and 13, which I think are inappropriately titled “transparency”, based on the reasons given in this post.
11. See Hugo Mercier and Dan Sperber (2017). The Enigma of Reason (Harvard University Press), pp. 107, 221–222.
12. See Michael Tomasello (2016). A Natural History of Human Morality (Harvard University Press).
13. See Roy Baumeister (2016). Collective Prospection: The Social Construction of the Future. In Seligman, M.E., Railton, P., Baumeister, R.F., Sripada, C., Homo Prospectus (Oxford University Press), p. 137.
14. See Roman Krznaric (2020). The Good Ancestor: A Radical Prescription for Long-Term Thinking (The Experiment, New York).
Matías Mascitti

Lawyer and PhD in Law from the National University of Buenos Aires (UBA) and Visiting professor at the Center for Technology and Society at FGV Rio de Janeiro.
