This post is part of the DigiCon symposium Transparency in Artificial Intelligence Systems? Posts from this symposium will be published on the coming Thursdays. If you are working on topics related to AI and transparency, follow these posts and take a look at our call for blog posts.
As we explained in Part One, meaningful transparency requires that people lacking specific technical expertise (i.e., the vast majority of us) are able not only to have oversight of an algorithmic tool but also to understand and test how it works. In Part Two, we argue that one way of achieving meaningful transparency is through the disclosure of an executable version.
The translation of technical concepts into intelligible and understandable formats is often referred to as ‘explainability’. A study by the Berkman Klein Center into the growing consensus on key thematic principles of AI found that 78% of the documents in its dataset referred to the principle of explainability. While transparency remains a key underlying principle, explainability is vital to due process because it goes beyond merely opening up algorithmic tools: it makes them understandable to the general public and allows for “testable explanations of what the system is doing and why”.
Explainability is another lens through which to view our argument for the disclosure of executable versions. As we will show in what follows, disclosure of an executable version may add value to an explanation by providing a means by which it can be tested.
What is an executable version?
The term ‘executable version’ (EV) was used in the judgment of the Court of Appeal in R (Eisai) v National Institute for Health and Clinical Excellence [2008] EWCA Civ 438. Eisai is a pharmaceutical company whose Alzheimer’s drug was up for approval by the National Institute for Health and Clinical Excellence (NICE). As part of the approval process, NICE used an automated model, in the form of an Excel spreadsheet, for assessing the cost-effectiveness of drugs. Eisai sought disclosure of an EV of this model so that it could run its own checks on the model’s accuracy and sensitivity. Running those checks would, in turn, allow Eisai to make informed representations on NICE’s decision.
Drawing on that judgment, an EV of a model can be defined as one that allows someone with access to it to: (1) change the inputs or assumptions of the model; (2) run the model; and (3) see the outputs. For the purposes of this discussion, we define a ‘model’ as the software programme, as opposed to the mathematical model implemented by the AI system on the one hand, or the instantiation of the software programme on a particular computer system on the other. By ‘version’ we mean a running copy of the software programme. In our view, the salience of an EV is that it allows someone to see and use the ‘front-end’ of the decision-making tool. It does not offer access to the ‘back-end’, and it is only a copy – it does not allow a third party to make changes to the system actually used by the decision-maker.
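To make this definition concrete, here is a minimal sketch in Python, with hypothetical names and inputs (none drawn from Eisai or any real system), of what an EV might expose to a third party: the three capabilities above and nothing more. The back end stays opaque, and the object is only a copy of the decision-making tool.

```python
from dataclasses import dataclass


@dataclass
class VisaApplication:
    """Hypothetical inputs a third party is free to change (capability 1)."""
    income: float
    years_employed: int
    has_sponsor: bool


class ExecutableVersion:
    """A running copy of the front end of a decision-making tool.

    It can be run on any inputs (capability 2) and its outputs inspected
    (capability 3), but it reveals nothing about the back end and cannot
    alter the system actually used by the decision-maker.
    """

    def __init__(self, model_copy):
        # The copy of the model is held privately; users of the EV only
        # ever see inputs going in and decisions coming out.
        self._model_copy = model_copy

    def run(self, application: VisaApplication) -> str:
        """Return the decision ('accepted' or 'refused') for the given inputs."""
        return self._model_copy.predict(application)
```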
Disclosure of an EV will be especially important if the algorithmic tool uses machine learning and is a black box. In these cases, it will be impossible to disclose the rules or criteria applied by the algorithm. However, an EV would allow people to understand how changes to the inputs affect the outputs and thus generate their own ‘counterfactual explanations’. For example, an applicant would be able to say, “If I earned more, my application would have been successful”. Even where the algorithm is simple and its rules or criteria are disclosed, an EV remains important because it allows the model’s accuracy and reliability to be tested. Indeed, this was the basis on which the Court of Appeal found in Eisai’s favour (see paragraph 49).
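Continuing the sketch above (again with hypothetical names and values), an affected person could generate such a counterfactual explanation themselves by re-running the EV with one input varied and noting where the decision changes:

```python
def income_counterfactual(ev: ExecutableVersion, application: VisaApplication,
                          step: float = 1_000.0, ceiling: float = 200_000.0) -> str:
    """Increase income until the EV's decision changes, if it ever does."""
    original = ev.run(application)
    income = application.income
    while income <= ceiling:
        income += step
        probe = VisaApplication(income=income,
                                years_employed=application.years_employed,
                                has_sponsor=application.has_sponsor)
        new_decision = ev.run(probe)
        if new_decision != original:
            # e.g. "If I earned 32,000, my application would have been accepted."
            return (f"If I earned {income:,.0f}, my application "
                    f"would have been {new_decision}.")
    return "Changing income alone does not change the decision (within the range tested)."
```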
The argument for disclosure of an EV is buttressed by a comparison with written policies. A policy is a set of rules for a human decision-maker to follow in the exercise of their discretionary power. We expect—and the law requires—policies to be made public. In the case of Lumba v Secretary of State for the Home Department [2011] UKSC 12, Lord Dyson endorsed the statement that “it is in general inconsistent with the constitutional imperative that statute law be made known for the government to withhold information about its policy relating to the exercise of a power conferred by statute” (paragraph 36). We should not expect anything less when an algorithm is used. When rules are encoded in an algorithm, rather than written in a policy, transparency is equally, if not more, important.
The next question is: what information about an algorithm is equivalent to a written policy? Imagine a scenario where visa applications are determined by a human decision-maker in accordance with a written policy. The policy is published and states that the human decision-maker is to apply criteria X, Y and Z. If all the criteria are met, the application is accepted. If any one of the criteria is not met, the application is refused. Because the policy is published, a third party can put themselves in the seat of the decision-maker and, for any given set of inputs, apply the criteria and work out for themselves what the output would be.
Now imagine a second scenario where, instead, visa applications are determined by a machine learning algorithm, trained on historical data. Information about new applications is fed into the algorithm, and the algorithm determines whether the application is to be accepted or rejected. If an EV is published then, just like in the first scenario, a third party can change the inputs and see what the output would be. Unlike in the first scenario, a third party cannot see what criteria (X, Y, Z) are applied by the algorithm (indeed, it is wrong to think of a machine learning algorithm as applying a static set of criteria that could be understood by a human being). The third party can, however, run the EV on a range of different inputs and generate their own counterfactual explanations, which is, arguably, the next best thing.
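The contrast between the two scenarios can also be put in code. In the first, the published policy can simply be re-implemented by anyone; in the second, the criteria cannot be read off, but a disclosed EV can still be probed across a range of inputs. The criteria and thresholds below are hypothetical, carried over from the sketch above:

```python
# Scenario 1: the written policy is published, so anyone can apply it themselves.
def published_policy(application: VisaApplication) -> str:
    meets_x = application.income >= 25_000       # hypothetical criterion X
    meets_y = application.years_employed >= 2    # hypothetical criterion Y
    meets_z = application.has_sponsor            # hypothetical criterion Z
    return "accepted" if (meets_x and meets_y and meets_z) else "refused"


# Scenario 2: the criteria are buried in a trained model, but an EV can be probed:
# run it over a grid of inputs and observe how the decision changes.
def probe_ev(ev: ExecutableVersion) -> dict:
    results = {}
    for income in range(10_000, 60_000, 5_000):
        for sponsor in (True, False):
            app = VisaApplication(income=float(income), years_employed=2,
                                  has_sponsor=sponsor)
            results[(income, sponsor)] = ev.run(app)
    return results  # a behavioural map of the tool, built without seeing its code
```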
Arguably, then, an EV is—for transparency purposes—the closest equivalent to a written policy because it allows an ordinary person to understand and test explanations of how discretionary power is to be exercised in a given case.
An EV has certain advantages over other popular proposals for enhancing transparency. Many researchers suggest using ‘post-hoc explanations’: after-the-fact explanations of each individual decision of a ‘black box’ model. However, Sebastian Bordt and others have argued that post-hoc explanations are inadequate because “most situations where explanations are requested are adversarial, meaning that the explanation provider and receiver have opposing interests and incentives, so that the provider might manipulate the explanation for her own ends.” By contrast, an EV allows someone with access to it to generate their own counterfactual explanations, rather than having to rely on the explanations of the provider. If an explanation is given in addition to the EV, the EV could be used to test it and check its faithfulness to the actual software programme. While it would be possible to provide a deliberately altered or inaccurate copy of the software programme, this would require a much higher level of dishonesty and is, therefore, less likely.
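As a sketch of how this testing might work in practice (the claimed rule and test cases here are invented for illustration, not taken from any real system), a provider's explanation can be expressed as a function and compared against the EV's actual outputs:

```python
def faithfulness_check(ev: ExecutableVersion, claimed_rule, test_cases) -> float:
    """Measure how often a provider-supplied explanation disagrees with the EV.

    `claimed_rule` is the provider's explanation expressed as a function of the
    inputs; `test_cases` is any list of VisaApplication inputs the third party
    chooses. A high disagreement rate suggests the explanation is not faithful.
    """
    disagreements = sum(1 for case in test_cases if claimed_rule(case) != ev.run(case))
    return disagreements / len(test_cases)


# Example: test the provider's claim "refused only when income is below 20,000".
def claimed_rule_example(app: VisaApplication) -> str:
    return "refused" if app.income < 20_000 else "accepted"
```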
A technical challenge in implementing this approach would be ensuring that the EV remains up to date, especially if the EV is a copy of a continuously evolving machine learning system. One option would be to develop human controls to check whether the EV is an adequate representation of the system in use or whether it needs updating. However, to avoid the manipulation problem described above, these controls would probably need to be operated by an independent regulator: the provider themselves may be tempted to ‘green light’ a version that has become outdated and is no longer representative. Another, perhaps better, option would be to publish a copy that is linked to the system in use and that automatically updates in real time.
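If the published copy is not automatically linked to the live system, an independent regulator would need some way of checking that the two still agree. One crude possibility (a sketch only, assuming the regulator can query both the published EV and the deployed system) is to compare their outputs on a fixed probe set:

```python
def still_representative(published_ev: ExecutableVersion, live_system,
                         probe_set, tolerance: float = 0.0) -> bool:
    """Check whether the published EV still matches the deployed system.

    `live_system` is assumed to expose the same run() interface. Both are run
    on the same probe inputs; any disagreement above `tolerance` suggests the
    published EV is outdated and needs to be refreshed.
    """
    mismatches = sum(1 for case in probe_set
                     if published_ev.run(case) != live_system.run(case))
    return (mismatches / len(probe_set)) <= tolerance
```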
Addressing the counter-arguments
In the case of Eisai, NICE resisted disclosing the EV for two reasons: (1) confidentiality; and (2) concerns about the delays this would cause in the decision-making process, given that the disclosure of the EV could mean that it would take longer to make and respond to representations.
The second argument does not necessarily apply outside of the procurement context. In any case, the court in Eisai did not find it persuasive.
Regarding the first argument, it is important to recognise that disclosure of an EV (on our definition) provides insight into the front end of an algorithmic tool. It does not reveal the source code or underlying programming. Admittedly, it may be possible for someone with technical expertise to make inferences about the back end, potentially giving rise to intellectual property issues. However, a recent case in the Italian courts suggests that the importance of meaningful transparency outweighs such concerns. The Lazio Regional Administrative Court held that algorithms used in public administration amount to “digital administrative acts” and that citizens therefore have the right to access them. The Council of State, Italy’s highest administrative court, went further and concluded that confidentiality cannot lawfully be relied on to oppose access requests filed by third parties where those requests are grounded in the need to seek a remedy for harm allegedly caused by the algorithm. More specifically, the court found that intellectual property arguments should not apply where algorithmic tools are used by public authorities to carry out tasks that fall within their duties. Ultimately, the ruling stated that where an algorithm becomes part of an administrative decision-making process, transparency requirements automatically apply and there is a presumption of full knowability (or explainability) of the algorithm used and the criteria applied.
There could be occasions where public bodies would be justified in refusing disclosure of the EV if it would, for example, pose a defined threat to national security. However, in our view, the default position should be disclosure, subject to any necessary and proportionate exemptions modelled on legislation like the Freedom of Information Act 2000. Moreover, the public interest in relying on any such exemption should always be balanced against the public interest in disclosure. Such a position would mirror exemptions to disclosure of the source code under the Canadian regime. To ensure that it is lawful and legitimate, reliance on any exemptions should be subject to scrutiny by an independent regulator.
Conclusion
So far, no compulsory transparency regime explicitly requires the disclosure of an EV. Though the move towards compulsory transparency is a very welcome development, it will not have the desired effect unless the information provided allows people affected by algorithmic decision-making to test how an algorithmic tool works and to understand the reasons behind the decisions they receive. In the Netherlands, the investigative newsroom Lighthouse Reports were able to create an EV by reconstructing the algorithm previously used by Dutch municipalities in an attempt to stop welfare fraud. The algorithm profiles citizens receiving social assistance benefits and categorises them into risk groups, with particular target groups flagged as potential fraudsters. Lighthouse Reports’ EV allows people to generate their own risk score. Meaningful transparency requires disclosure of EVs as the default. If governments are obligated to disclose EVs, we will have the opportunity for democratic consensus-building as to the appropriate use of new technologies, and we will be able to hold governments to account when things go wrong.
Suggested citation
Mia Leslie and Tatiana Kazim, ‘Executable versions: an argument for compulsory disclosure (part 2)’ (The Digital Constitutionalist, 03 November 2022). Available at https://digi-con.org/executable-versions-part-two/