

The Right to Contest Automated Decisions


Assessing Transparency and Explainability from a Legal Perspective

Automated decision-making (ADM) is at the forefront of technological innovation. Driven by Artificial Intelligence (AI), automation is permeating administrative, judicial and private decision-making. Automation promises to speed up human decision-making, delivering accurate and objective results. Yet, in contrast with this promise of objectivity, neutrality and accuracy, automated decisions are subject to the same flaws as human decision-making, including errors and bias (see Zerilli et al.). Examples that echoed in the media include Amazon’s AI recruiting tool discriminating against women, COMPAS, the risk-assessment software biased against black defendants, and the algorithm ‘Frank’, which penalised food delivery workers exercising their right to strike. It is a cornerstone principle of the rule of law that individuals have the right to contest an adverse decision. Does the same apply to automated decisions?

This post focuses on the role of the duty to give reasons and procedural equality in contesting decisions (1), illustrates the specific challenges automation raises in decision-making (2), and shows how a proper legal interpretation of transparency and explainability can tackle these issues (3).

1. The Role of the Duty to Give Reasons and Procedural Equality in Contesting Decisions

The right to challenge human decisions is a legal mechanism that allows individuals to have an adverse decision reconsidered. Individuals may ask for an internal review – interacting with the decision-maker – or apply for an external review. In the latter case, the parties present their case before an impartial third authority – such as a tribunal or an ombudsman – which undertakes thorough scrutiny of the factual and legal grounds of the decision. In line with the right to a fair trial (Article 6(1) ECHR), the principle of equality of arms requires a fair balance between the opportunities afforded to the parties involved in civil or criminal cases.

Before the review process, an essential element of contestability is the statement of reasons. According to the Court of Justice of the European Union (CJEU), this obligation makes it possible to determine whether the decision is well-founded, whether an error vitiates it, and whether its validity should be contested (CJEU, Case T-181/08, paras 93-96). The statement of reasons is, therefore, instrumental in challenging the decision.

European and national law sets rules on review and on the duty to give reasons for human decisions (e.g. Articles 41 and 47 of the Charter of Fundamental Rights). Even between private parties, laws, contractual clauses or internal rules set standards and requirements for the decision-making process. When bankers decide whether to grant a loan, they follow internal and external rules to determine the applicant’s eligibility. Landlords or landladies cannot evict tenants arbitrarily but must follow the contractual clauses on termination and the laws safeguarding tenants.

ADM, instead, is predominantly regulated by data protection law, especially Article 22 of the General Data Protection Regulation (GDPR). This provision requires that, in the cases where fully automated decision-making is allowed (Article 22(2)(a) GDPR), the data controller implement suitable measures to safeguard the data subject’s rights, at least the right to obtain human intervention and the right to contest the decision. Article 22 GDPR simply confirms a cornerstone principle of the rule of law: a decision – whether human or automated – impacting individuals’ lives should be contestable. Therefore, we should demand respect for the same standards – no lower, no higher – used for human decision-making in that specific sector (see similarly, for administrative law, Olsen HP and others). When considering ADM, we should always ask: what rules would apply if a human took that decision?

2. Specific Challenges Raised by ADM

However, compared to human decision-making, ADM raises specific issues that affect the right to a reasoned decision and the procedural equality between the parties during review.

Firstly, automated systems do not provide a reasoned decision, as humans are able and required to do. This is problematic when the law imposes a duty to give reasons for that specific type of decision. Take, for example, the phenomenon of ‘concealed dismissal’ of platform workers. Instead of being notified of the termination of their contract, food delivery riders have had their accounts silently deactivated (see a judgment by an Italian labour court declaring this dismissal invalid). A second example comes from the use of ADM in public administration. In 2019, the Italian Consiglio di Stato declared that the use of ADM to allocate teachers violated administrative law because it was impossible to understand how the algorithm had allocated the candidates.

Secondly, there is the well-known problem of algorithmic opacity, stemming from the technical complexity of ADM systems and the secrecy of the relevant information (Burrell and Pasquale). Several examples show the detrimental effect of information asymmetries in civil or criminal litigation. In a famous US criminal case (State v. Loomis) involving a risk-assessment system, the defendant was denied access to information on the software’s design that was crucial to challenging the accuracy and validity of the risk assessment (see also the analysis by Quattrocolo). Similar difficulties arise in other areas of law, such as competition law (Patterson) or administrative law (see the judgment by the Italian administrative court declaring that ‘the algorithm must be subject ex-post to full knowledge and scrutiny’).

Parties need access to information on the design of the software in order to demonstrate a violation of laws or rights perpetrated by the system. Design transparency encompasses access to a detailed description of the software, such as the algorithms, the training data and the methods used. The recent Proposal for an AI Act (AIA) supports this type of transparency through ex ante requirements. More specifically, the Proposal obliges software providers to keep technical documentation up to date (Article 11 AIA). This documentation, whose content is listed in Annex IV, comprises information crucial for the review of ADM, including the design specifications, the system architecture, the training data, and the methods and techniques used. However, this information is typically kept secret to protect business interests. Although software providers must draw up the documentation, they are not obliged to disclose it (except, under Article 23 AIA, upon request by the competent authority, to demonstrate conformity with Chapter II of the Proposal).

Furthermore, several provisions of the GDPR grant information on the design of the software when ADM is involved (Articles 22, 13(2)(f) and 15(1)(h) GDPR). In these cases, the data subject should be provided with meaningful information about the logic involved and about the significance and envisaged consequences of such processing. However, such information is not sufficient to allow the opposing party to challenge the lawfulness, validity and accuracy of the automated process that led to the contested decision. Algorithmic opacity, therefore, undermines the equality of arms principle when parties do not have access to information on the design of the software.

In recent years, the literature has proposed transparency and explainability as solutions to these challenges. AI researchers are familiar with these notions, even though shared definitions of the terms remain elusive. In its common sense, transparency entails ‘the possibility to have a complete view on a system, i.e. all aspects are visible and can be scrutinised for analysis’ (Hamon et al.). However, transparency has been criticised because it necessarily requires disclosing information that would infringe business secrecy and may even have adverse effects, such as the risk of ‘gaming the system’ (see Lepri et al.). Against this backdrop, an idea that has gained prominence in the debate is to provide ‘explanation’ rather than transparency.

Explanation has been defined as a ‘human-interpretable description of the process by which a decision-maker took a particular set of inputs and reached a particular conclusion’ (Doshi-Velez et al.). In the context of AI, explainability requires that ‘the decision made by an AI system can be understood and traced by human beings’ (Ethics Guidelines for Trustworthy AI). Among others, the concept of counterfactual explanation has attracted significant attention (Wachter et al.). A counterfactual explanation is a statement indicating which facts would have to be different to arrive at the desired outcome. It takes the following form: ‘You were denied a loan because your annual income was £30,000. If your income had been £45,000, you would have been offered a loan’. As proposed by Wachter et al., it should be provided regardless of the nature of the decision or its effects (‘unconditional counterfactual’). While appealing, explainability needs to be assessed from a legal perspective. Can an explanation fulfil the duty to give reasons? Is an explanation enough to ensure procedural equality?
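To make the mechanics concrete, the following is a minimal sketch, under purely hypothetical assumptions (a single-feature loan rule with an illustrative £45,000 threshold and a simple step-wise search), of how such a counterfactual statement could be generated. It is not Wachter et al.’s method or any lender’s actual system; it merely illustrates the core idea of finding a change to the input that would flip the outcome.

```python
# Hypothetical illustration of a counterfactual explanation for a loan decision.
# The decision rule, threshold and search step are assumptions, not a real system.

def loan_decision(income: float, threshold: float = 45_000) -> bool:
    """Toy decision rule: approve the loan if annual income meets the threshold."""
    return income >= threshold

def counterfactual(income: float, threshold: float = 45_000, step: float = 1_000) -> str:
    """Find the smallest income increase (in steps) that flips a refusal into an approval."""
    if loan_decision(income, threshold):
        return f"Loan approved with an annual income of £{income:,.0f}."
    candidate = income
    while not loan_decision(candidate, threshold):
        candidate += step  # increase the feature until the decision flips
    return (f"You were denied a loan because your annual income was £{income:,.0f}. "
            f"If your income had been £{candidate:,.0f}, you would have been offered a loan.")

print(counterfactual(30_000))
# You were denied a loan because your annual income was £30,000.
# If your income had been £45,000, you would have been offered a loan.
```

Even in this toy form, the output states only which facts would have changed the outcome; as the next paragraph argues, that is not the same as a legal justification of the decision.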

In 2018, Hildebrandt clarified that, although important, an explanation is not a justification. Explaining how the system reached a conclusion is quite different from showing whether the conditions set by law for that decision are fulfilled. Take as an example the ranking algorithms for food delivery riders. In Italy, the termination of an employment contract is regulated by law and requires the employer to notify the dismissal in writing and specify its reasons. Termination can only occur for ‘just cause’ or for a ‘justified objective or subjective reason’. Even if the system can explain why the rider’s ranking is low, this is not enough to justify a dismissal based on their performance. What is missing is a statement of the legal reasons why these facts (low ranking/low performance) amount to a dismissal for a justified subjective reason.

If an explanation is not a justification, then what is it? In my view, the explanation provided by the system can support contestability only if it provides the factual grounds of the decision. Article 22 GDPR could serve as a ground for demanding this type of explanation ex post (in light of Recital 71 GDPR). As to the duty to give reasons, if an explanation provides the facts, their legal assessment remains a distinct activity that a human should perform. Consequently, human oversight should also be understood as providing the legal justification on the basis of the factual grounds offered by the machine. Likewise, in the review phase, an explanation can only serve as evidence of the factual grounds underlying the decision. By itself, it cannot prove the existence of systematic discrimination, bias or flaws in the ADM system.

In the review of automated decisions, information on the software’s design constitutes evidence (see also Article 16 of the recent Proposal for a Directive on platform work, recognising ‘confidential information, such as relevant data on algorithms’ as evidence). To ensure procedural equality between the parties, we need a deep revaluation of design transparency.

Two extreme situations should be avoided: on the one hand, overprotecting trade secrets, thereby creating a sort of evidentiary privilege; on the other, advocating full ex ante transparency, which would excessively undermine business interests. Fortunately, transparency is gradable.

Design transparency should be conceived as an ex post protected disclosure that ensures a fair balance between the different interests at stake. Article 9 of the Directive on Trade Secrets offers a model for disclosing secrets in judicial proceedings (see also Maggiolino and the European Commission Guide on confidentiality claims in antitrust procedures). Software providers should grant access to the information only to the parties and their attorneys, under non-disclosure agreements. In this way, both interests can be safeguarded.

To conclude, contesting ADM must resemble contesting human decision-making. The rules governing the duty to give reasons and the review of automated decisions should follow the law governing the sector and the type of decision. This blog post argues for a revaluation of transparency and explainability from a legal perspective. On the one hand, explainability can support the duty to give reasons by providing the factual grounds; on the other, design transparency as protected disclosure allows for scrutiny and ensures procedural equality between the parties without undermining business interests.

Acknowledgements

I would like to thank the Working Group ‘The Digital Public Sphere’ of the European University Institute, the interdisciplinary Research Group on Law, Science, Technology & Society (LSTS) of the Vrije Universiteit Brussel and the Artificial Intelligence and Legal Disruption Research Group (AI LeD) of the University of Copenhagen for the thought-provoking discussions that helped to shape the ideas of this post.
