Why the UK Artificial Intelligence whitepaper could erode existing legal protections

In March 2023, the UK government released a whitepaper, “A pro-innovation approach to AI regulation”, which lays out a blueprint for regulating Artificial Intelligence (AI). If the UK government proceeds to implement the whitepaper, there is a real risk that the adopted framework will undermine existing legal protections. This issue has not yet received sufficient attention. Michael Birtwistle of the Ada Lovelace Institute has noted that the proposed approach to regulation leaves “significant gaps” given the scale and urgency of the risks. In this post I argue that more attention needs to be paid to how the proposed framework could undermine existing legal protections.

1. What is the overarching approach of the whitepaper?

The whitepaper acknowledges that it does not fully address all the societal and global challenges which AI poses. Its main aspiration is to create a business environment that is attractive to technology companies, while also enabling regulators to address the societal risks of AI. The whitepaper treats addressing such risks as crucial for supporting innovation: the general public will be more willing to use AI if it trusts the technology.

The regulatory framework embodies a principles-based approach. Regulators will have a statutory duty to have due regard to the principles when issuing guidance to organisations. The guidance will inform organisations about best practices they can adopt to implement the principles in their organisational practices and product design. The principles are: 1) safety, security and robustness, 2) appropriate transparency and explainability, 3) fairness, 4) accountability and governance, and 5) contestability and redress. Regulators can flesh out the principles in their guidance to organisations and develop further principles. The framework is designed to achieve a proportionate approach to regulation: regulators will apply the principles alongside a requirement of proportionality in order to address the risks posed by AI. Regulators can also notify the government that there is a need for legislation.

The whitepaper defines the principle of fairness very narrowly, along the lines of one strand of the rule of law. One element of the rule of law is that everyone must abide by the law. In other words, implementing the principle of fairness means complying with existing legislation. This definition contrasts with the general understanding of fairness as a broad and substantive concept concerned with achieving equitable outcomes. The whitepaper’s treatment of proportionality, by contrast, is unclear and contradictory. It does not define the principle of proportionality; instead, it contains several different explanations of how regulators will apply it.

In one instance, the whitepaper explains that regulators will need to balance the risks and opportunities of using AI. Under this approach, an intervention is needed if the societal risks posed by AI outweigh the benefits of innovation; regulators will need to respond to risk in an effective and proportionate manner. In another section, the whitepaper defines proportionality as “avoiding unnecessary or disproportionate burdens for businesses and regulators” (Whitepaper, 21). Under this definition, regulators should not intervene if applying the principles would create a disproportionate harm to innovation. The whitepaper does not specify what benefit is to be balanced against the harm to innovation. Interpreting that benefit as the avoidance of societal harm makes it possible to create a degree of coherence between the two definitions of proportionality.

The two definitions of proportionality contradict one another. While the first requires regulators to prioritise avoiding the harm to society that AI may pose, the second requires regulators to put innovation first. In its definitions of proportionality, the whitepaper equates innovation with the interests of businesses. The use of contradictory definitions makes it difficult to evaluate the impact of the whitepaper. One solution is to examine the impact that would follow if regulators applied each particular definition of terms such as proportionality.

2. Why will the whitepaper undermine the protection of legal rights?

There are at least two reasons why the framework in the whitepaper could undermine the protection of existing legal rights. On the one hand, the principle of fairness does not take into account that AI makes it difficult for some existing legislation to protect the rights of individuals adequately. The experts advising the Council of Europe concluded in the Feasibility Study that, due to the advent of AI, existing domestic legislation and international treaties cannot fully secure the protection of fundamental rights. That said, scholars such as Sandra Wachter have put forward proposals for rethinking the existing conception of harm and for interpreting the legal provisions prohibiting discrimination in light of that conception so as to close the gap in protection. Because the framework the whitepaper advances does not envisage the government mapping the full range of risks which AI poses and evaluating what gaps in legal protection arise from the deployment of AI, it leaves existing gaps in legal protection intact.

On the other hand, the joint application of the principles of fairness and proportionality exacerbates the erosion of existing legal protections. The framework requires regulators to consider whether compliance with existing legislation is likely to have a disproportionate negative impact on businesses. If the answer is yes, then under the whitepaper’s approach regulators will have to evaluate how to balance the two competing requirements using their expertise and judgment. They may need to prioritise one requirement over the other.

Regulators will then face a number of choices. First, they can prioritise legal compliance over innovation. They can do so if they treat the principle of proportionality on a par with the other principles rather than as an overarching approach underpinning the regulatory framework; in doing so, they will need to emphasise some parts of the whitepaper over others. Alternatively, regulators can choose innovation over legal compliance. Finally, regulators can apply the legislation partially or in a minimal fashion in the name of fostering innovation. None of these approaches is satisfactory. The whitepaper puts regulators in the position of having to interpret an incoherent and contradictory framework, and none of the three approaches to interpreting it addresses the issue of AI rendering existing legal protections ineffective. As a result, the whitepaper undermines the legal protection of rights.

Conclusion

The framework in the whitepaper is problematic because it opens the door to restrictive interpretation of the law, partial compliance with legislation, and even disregard for the need to comply with legislation. The negative impact on the protection of legal rights is exacerbated by the fact that some domestic legislation no longer confers full legal protection due to the deployment of AI. The government needs to give this issue urgent attention.

Tanya Krupiy
Lecturer in law at Newcastle University

Tetyana (Tanya) Krupiy is a lecturer in law at Newcastle University. She researches how society can govern new technologies in a manner which advances social justice. Tanya received funding from the Social Sciences and Humanities Research Council of Canada to carry out a postdoctoral fellowship at McGill University in Canada. She has published with various publishers including Oxford University Press, University of Toronto, University of Melbourne, European University Institute in Florence, Elsevier and Brill.
