Algorithmic transparency in the UK
Increasingly, governments use algorithmic tools to make life-altering decisions. In the UK, such tools are used in the context of benefits fraud investigations, sham marriage investigations, visa applications, and more. Yet, in Public Law Project’s experience, it is very difficult to find out much about them. Detailed information is rarely published, and requests under the Freedom of Information Act 2000 (FOIA) are often refused or only partially answered. This undermines public trust and makes it virtually impossible to assess whether these tools are being used lawfully and fairly.
Knowing what algorithms are being used—and even being able to see the code—would only solve half the problem. Most of us would be none the wiser for looking at the ‘back end’ of an algorithmic system. That is why meaningful transparency requires the public to have access to executable versions of algorithms that affect them.
For now, the UK’s Cabinet Office (the government department responsible for supporting the prime minister and the Cabinet) is piloting an ‘Algorithmic Transparency Standard’ (ATS). The ATS asks public sector organisations across the UK to provide information about their algorithmic tools. It divides the information to be provided into tier 1 and tier 2. Tier 1 asks for high-level information about how and why the algorithm is being used. Tier 2 information is more technical and detailed. It asks for information about who owns and has responsibility for the algorithmic tool, including information about any external developers; what the tool is for and a description of its technical specifications (for example ‘deep neural network’); how the tool affects decision making; lists and descriptions of the datasets used to train the model and the datasets the model is or will be deployed on; any impact assessments completed; and risks and mitigations.
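To make the shape of these disclosures more concrete, the sketch below models a hypothetical ATS record as a simple data structure. The field names are our own illustrative labels drawn from the tier descriptions above; they are not the Standard’s official schema, and the sketch is not an implementation of it.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, simplified model of an ATS disclosure record.
# The field names are illustrative labels based on the tier descriptions
# in the text, not the official schema published by the Cabinet Office.

@dataclass
class Tier1Summary:
    tool_name: str
    how_it_is_used: str   # high-level description of how the tool is used
    why_it_is_used: str   # high-level rationale for using the tool

@dataclass
class Tier2Detail:
    owner: str                        # who owns and is responsible for the tool
    external_developers: List[str]    # any external developers involved
    technical_specification: str      # e.g. "deep neural network"
    effect_on_decision_making: str    # how the tool affects decision making
    training_datasets: List[str]      # datasets used to train the model
    deployment_datasets: List[str]    # datasets the model is or will be deployed on
    impact_assessments: List[str]     # any impact assessments completed
    risks_and_mitigations: List[str]

@dataclass
class ATSRecord:
    tier1: Tier1Summary
    tier2: Tier2Detail
```

Even a complete record of this kind describes the tool from the outside; it does not let an affected person see how the tool would treat their own case.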
Even measured against a minimum viable standard of transparency, the ATS does not go far enough. At present, it is not compulsory for public sector organisations to engage with the ATS, and this does not appear likely to change in the near future. In its response to the consultation ‘Data: a new direction’, the government stated that it “does not intend to take forward legislative change at this time”, despite widespread support for compulsory transparency reporting, and, indeed, the recently published Data Protection and Digital Information Bill does not include any such requirements. Moreover, even if it were placed on a statutory footing, the ATS does not ask for sufficient operational detail for individuals properly to understand the decision-making process to which they are subjected.
Algorithmic transparency in other jurisdictions
Compulsory transparency regimes have already been introduced in several other jurisdictions.
Agencies of New York City are required, under Executive Order 50, to report and make publicly available information about their ‘high-priority’ algorithmic tools. ‘High-priority’ tools are those that use ‘complex data analysis approaches’, support agency decision-making, and have a ‘material public effect’. Publicly available information about such tools includes the name of the agency using the tool, the name of the tool and date of use, and a narrative description of the tool’s purpose and how it aids agency decision-making.
The Canadian Directive on Automated Decision-Making (the Directive) applies to a wider, though still limited, pool of authorities. It requires federal institutions to disclose information about automated decision-making (ADM) systems used to recommend or make administrative decisions about an individual. There are three disclosure requirements that apply to all ADM systems. Operators are required to i) disclose the components of the ADM system, ii) disclose the source code—subject to certain exemptions—and iii) document decisions made by the ADM system for monitoring and reporting. Under the Canadian regime, the level of transparency depends on the level of expected impact on individuals, communities and ecosystems, assessed using an Algorithmic Impact Assessment tool. When a higher impact system is used, notice must be given and a meaningful explanation provided to affected individuals as to how and why the decision was made in this way. Systems that fall higher on the impact scale require more detailed disclosure, including information about how the components work; how the algorithm supports the administrative decision; results of any reviews or audits; and a description of the training data, or a link to the anonymised training data if this data is publicly available.
In France, the Loi pour une république numérique (Law for a Digital Republic) mandates transparency of government-used algorithms where the algorithmic processing is “the basis of individual decisions that impact citizens’ life”. Like the Canadian regime, the French regime requires those implementing ADM systems to provide notice that a decision is made or supported by an algorithm. But the regime goes further and includes a requirement to publish the “rules defining” the “algorithmic processing”, the “main characteristics of its implementation”, and the purpose of such processing. Further still, if requested by the person concerned, the implementing authority must also disclose the extent to which the algorithm contributed to the decision-making process, the data processed by the system (including its sources), and the processing criteria and their weighting.
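To illustrate what publication of the “rules defining” the processing and the “processing criteria and their weighting” might look like in practice, the sketch below sets out a toy scoring rule of the kind an authority could be asked to describe. The criteria, weights and threshold are invented for this example and do not correspond to any real system.

```python
# Purely illustrative: a toy risk-scoring rule. The criteria, weights and
# threshold are invented for this example.

CRITERIA_WEIGHTS = {
    "declared_income_below_threshold": 0.5,
    "address_changed_recently": 0.3,
    "previous_late_declaration": 0.2,
}

FLAG_THRESHOLD = 0.6  # cases scoring at or above this are flagged for review


def risk_score(case: dict) -> float:
    """Weighted sum of the criteria that are present in a case."""
    return sum(weight for criterion, weight in CRITERIA_WEIGHTS.items()
               if case.get(criterion, False))


def is_flagged(case: dict) -> bool:
    """Whether a case would be flagged under the published rules."""
    return risk_score(case) >= FLAG_THRESHOLD


example_case = {
    "declared_income_below_threshold": True,
    "address_changed_recently": True,
    "previous_late_declaration": False,
}
print(risk_score(example_case))  # 0.8
print(is_flagged(example_case))  # True
```

Publishing criteria and weights in this way tells an affected person what the rules are said to be; it does not by itself let them check that those rules were actually applied in their case.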
The purpose and meaning of algorithmic transparency
To evaluate existing transparency regimes, there needs to be a clear and shared understanding of the meaning and purpose of algorithmic transparency.
When it comes to public decision-making, transparency has intrinsic value—we have a right to know how we are being governed. Transparency has consequential value, too. It facilitates democratic consensus-building about the appropriate use of new technologies, and it is a prerequisite for holding government to account when things go wrong.
The purpose of transparency bears on its meaning in the context of ADM. The Berkman Klein Center conducted an analysis of 36 prominent AI policy documents to identify thematic trends in ethical standards. It found convergence around a requirement for systems to be designed and implemented to allow for human oversight through the “translation of their operation into intelligible outputs”. In other words, transparency requires the ‘translation’ of an operation undertaken by an ADM system into something that the average person can understand. Without this, there can be no democratic consensus-building or accountability.
Another plank of meaningful transparency is, in our view, the ability to test explanations of what an algorithmic tool is doing. One organisation articulated this as a need for “clear, complete, and testable explanations of what the system is doing and why”. Without testability, people affected must simply accept the explanations provided by the government. Because technological literacy in government tends to be (relatively) low and reliance on assurances provided by developers tends to be high, a lack of testability will likely lead to an accountability gap.
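As a purely illustrative sketch of what a ‘testable’ explanation could mean in practice: if an affected person could run even a simple, rules-based tool on their own facts, they could check whether the outcome, and the explanation they were given for it, actually follow from the published rules. The eligibility rule and figures below are invented for this example.

```python
# Purely illustrative: a toy, executable eligibility rule. The rule and the
# figures are invented; the point is that anyone can run it on their own
# facts and compare the result with the explanation they were given.

def eligible_for_benefit(monthly_income: float, savings: float,
                         dependants: int) -> bool:
    """Hypothetical rule: income and savings caps, with the income cap
    relaxed slightly for each dependant."""
    income_cap = 1200 + 150 * dependants
    savings_cap = 6000
    return monthly_income <= income_cap and savings <= savings_cap


# An explanation such as "your income exceeded the limit" becomes checkable
# against the person's own figures.
print(eligible_for_benefit(monthly_income=1400, savings=2000, dependants=1))  # False
print(eligible_for_benefit(monthly_income=1300, savings=2000, dependants=1))  # True
```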
In summary, meaningful transparency requires that people lacking specific technical expertise—i.e., the vast majority of us—are able to understand and test how an algorithmic tool works.
If this is what transparency requires, then existing transparency regimes fall short—even those with more extensive requirements like the French regime. For example, the Open Government Partnership noted that under the standards created by the Loi pour une république numérique, “agencies are struggling to fulfil [the] requirement[s], partly because there is a lack of guidance about how to inventory algorithms”. It further noted that “even when agencies are following this law, it doesn’t mean that the information is immediately of use as the average person may lack the technical knowledge to understand or respond to it”.
Similarly, neither tier 1 nor tier 2 of the UK ATS requires sufficient operational details for individuals properly to understand the decision-making process to which they are subjected. At tier 1, organisations are asked to explain ‘how the tool works’, but nowhere is there a reference to any criteria or rules used by simpler algorithmic tools. At tier 2, a ‘technical specification’ is requested, but this appears to mean nothing more than a brief descriptor of the type of system used, e.g. ‘deep neural network’.
In Part Two, we will argue that meaningful transparency requires disclosure of an executable version of the algorithm.
Suggested citation
Mia Leslie and Tatiana Kazim, ‘Executable versions: an argument for compulsory disclosure (part 1)’ (The Digital Constitutionalist, 03 August 2022). Available at https://digi-con.org/executable-versions-part-one/