
Failing where it matters most?


The central claim of this intervention is straightforward: while the EU AI Act entails a promise of enhanced accountability, oversight and transparency for AI systems developed or deployed in the EU (and particularly those used in ‘high risk’ contexts), these standards are significantly diluted in the sphere of law enforcement, border control, migration and asylum – precisely where they matter most. 

Experiments with critical consequences

EU authorities and member states perceive new digital technologies as playing a pivotal role in the enhanced enforcement, securitization and militarization of borders. In the last few years, EU DG Home commissioned a strategic report on the Opportunities and Challenges for the Use of Artificial Intelligence in Border Control, Migration and Security (with Deloitte); Frontex commissioned a study on How AI can support the European Border and Coast Guard (with RAND Europe); and the EU Innovation Hub for Internal Security was created (hosted by EUROPOL and striving for innovative security technologies, including the use of AI in practices of risk assessment at the border). The emergent use cases being tested and deployed, scholars and activists have observed, have transformed the border into a technological testing ground: a space where often obscure and unregulated digital technologies extract data from and target refugees, migrants, stateless persons and others without appropriate legal protections or avenues of contestation; a space of experiments without protocols that allows for significant private power and profit; a space of expanding feature spaces and new modes of classification and racialization.

Encouraged by institutions of global security governance, from the UN Security Council to the GCTF, the EU and several EU member states have also created, developed and coordinated watchlists that use artificial intelligence to identify not only known but also unknown terrorists. The EU itself, together with several member states, has participated in the elaboration of the GCTF Counterterrorism Watchlisting Toolkit. By doing so without making sure that the toolkit contains basic human rights checks and principles (such as the principles of necessity and proportionality), the EU has agreed to normalize and stabilize problematic pre-emptive, exclusionary, and highly discriminatory security practices. Besides, although the extent to which data collected in watchlists are used in criminal proceedings remains unclear, several criminal cases at the domestic level reveal that individuals are sometimes prosecuted and sentenced on the sole basis of their suspicious cyber activities and digital communications.

Who is affected, harmed and oppressed by high-risk AI systems used for security purposes? As Tendayi Achiume, Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, highlights in her report on Racial discrimination and emerging digital technologies, such systems ‘potentially lead to discrimination against or exclusion of certain populations, notably minorities along identities of race, ethnicity, religion and gender’. Because they rely on predictive models that incorporate historical data, these systems perpetuate the marginalisation of racialized communities; they are promises that the world does not change.

The EU AI Act (and how it has been diluted)

It is in relation to these problematic developments that we can understand a range of important regulatory initiatives and judicial interventions, of which the AI Act is one (we can also think here of the important recent CJEU Judgment in Ligue des Droits Humains – discussed also on digi-con). The EU AI Act is the first cross-sector law on AI and machine learning by a major regulator. It will enter into force once the Council, representing the 27 EU Member States, and the European Parliament agree on a common version of the text. On 3 November 2022, the Czech presidency of the EU Council shared with the other EU countries a new (final?) compromise text of the AI Act. It is on this basis that we conduct our analysis.

It is important to note, first of all, that the AI Act recognizes that the ‘use of AI in the context of migration, asylum and border control management affect people who are often in particularly vulnerable position’ (Recital 39), and therefore explicitly qualifies these use cases as ‘high risk’. The ‘accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts’, the Recital states, ‘are therefore particularly important’. Specific reference is made to the use of emotion AI (such as the systems deployed in the notorious iBorderCtrl pilot project) and to AI-based risk assessment in practices of border control or in decision-making on visa and asylum applications.

Yet, while the qualification of these practices as ‘high-risk’ raises particular regulatory expectations (aimed, for example, at risk management, record-keeping, transparency and human oversight), these standards – as others have also argued – fall far short of containing the legal and political problems associated with the use of AI in this domain. In this post, we focus specifically on three important exemptions for law enforcement and border control agencies, through which important safeguards for transparency and human oversight cease to apply. The fact that two of these three exemptions were not part of the Commission’s initial proposal for an AI Act shows how successful states have been in pushing back against the regulation of security technologies.

The first, and perhaps most important, carve-out relates to the obligation for providers and users of ‘high-risk’ AI systems to register in the newly created EU database. The new compromise text indicates (in Recital 69) that in order to ‘increase the transparency towards the public, providers of high-risk AI systems … should be required to register themselves and information about their high-risk AI system in a EU database’ (see Articles 51 and 60). Additionally, ‘[b]efore using a high-risk AI system … public authorities … shall register themselves in the EU database … and select the system that they envisage to use’. Crucially, the ‘[i]nformation contained in the EU database … shall be accessible to the public’. In other words, public information should be available both on the ‘high-risk’ systems themselves and on the public authorities deploying them.

Yet, in the new compromise text, a crucial exception is introduced: the need for ‘transparency towards the public’ does not apply in ‘areas of law enforcement, migration, asylum and border control management’, where the registration obligation ceases to apply both for providers of ‘high-risk’ AI systems and for the public authorities deploying them (revised Article 51). In the sphere where transparency is most needed – where technologically mediated state violence is wielded – the public will be kept in the dark. This critical carve-out was not part of the initial AI Act proposal by the Commission.

A second key question is the impact of the AI Act on third country partners in the security space. In principle, the AI Act applies to all providers and users of AI systems – including those established in a third country – whenever the ‘output produced by the system is used in the EU’. This is essential to ‘ensure an effective protection of natural persons located in the Union’ (Recital 11), and it is a key element of the aspired-to ‘Brussels Effect’. However, when third country public authorities act ‘in the framework of international agreements … for law enforcement’, the AI Act does not apply. Third country security partners sharing the output of AI systems (which the AI Act might deem illegal) thus remain outside its scope. Acknowledging the troubling nature of this situation – in which the ‘effective protection of natural persons’ is jeopardized by the actionable output of AI systems deployed by third country public authorities – the compromise text states that ‘[w]hen those international agreements are revised or new ones are concluded … the contracting parties should undertake the utmost effort to align those agreements with the requirements of this Regulation’.

Finally, when AI is used for remote biometric identification (facial recognition) – which is allowed by the AI Act under certain conditions set out in Article 5 – no action can be taken unless the identification is ‘separately verified and confirmed by at least two natural persons’. This is the famous ‘four-eyes principle’, deliberately introduced because of the ‘significant consequences for persons in case of incorrect matches’ (see Recital 48). Yet, precisely where these ‘consequences’ are the most dramatic, the principle disappears: ‘the requirement for a separate verification by at least two natural persons shall not apply to high risk AI systems used for the purpose of law enforcement, migration, border control or asylum’ (Article 14, para. 5). We again observe that this important exception was absent from the Commission’s initial proposal and crept into the compromise text as a result of political negotiations.

It is where AI has the most severe effects and targets the most vulnerable that legal safeguards on transparency and oversight are lacking. While exceptions to accountability mechanisms have always been invoked in the security realm, there is no clear, let alone convincing, rationale for excluding these regulatory measures of the EU AI Act from the realm of security. Indeed, none of these measures – the registration obligation, the extension of the scope of the EU AI Act to third country partners when the outputs of AI systems are used in the EU, or the requirement that two persons check for incorrect matches produced by high-risk AI systems – would fundamentally impair the protection of national security. They would merely extend the ‘effective protection of natural persons’ and ‘transparency towards the public’ – to use the language of the AI Act itself – to the sphere where technological experiments are most prolific and generate some of the most troublesome corporeal, social, legal and political effects.

(The Limits of) Transparency as Traceability 

This does not imply that stronger transparency safeguards would suffice to confront the profound problems posed by practices of algorithmic governance in the security space – the exacerbation of racial violence and exclusion, the foreclosure of political futures, the displacement of legal publics and of the capacity to make collective claims to rights. While some argue that the desire for visibility and legibility – for ‘opening the black box’ – is often impossible to satisfy or misguided, others assert that ‘asymmetries of power can also be enacted or even intensified through claims for transparency’. Silvia Rivera Cusicanqui, in this sense, powerfully warns us about the saturation of discursive and policy spaces with easy fixes and answers to oppressive systems. This post is not yet another call for the optimization of biased algorithms, or for ethical, inclusive, or transparent AI. Our intention is rather to recognise that concrete avenues for contestation in the digital world are conditioned upon transparency, understood as traceability.

Transparency as traceability does not need to be defined as full visibility and openness. Rather, it can involve the mere possibility of accessing and mobilising fragmentary, imperfect, or incomplete information. In Amoore’s terms, ‘to follow a thread or a trace is thus not comparable to an opening of scientific black boxes, for it does not seek a moment of revelation and exposure’. The ability to follow a trace or a thread, for instance, can materialise as the capacity to identify the providers and users of ‘high-risk’ AI systems, trace their strategies and practices, and render them intelligible.

The elements of the AI Act that we mentioned above (and that do not apply to the security realm in the current version) could provide possibilities for such traceability. As Ruha Benjamin argued, transparency mechanisms are essential to contestation and collective agency, and hold the potential to disrupt power relations in the age of governance by data. D’Ignazio and Klein also imagine modalities for challenging algorithmic modes of governance, focusing on the audit of opaque algorithms to hold institutions accountable and ‘push back against existing and unequal power structures’. Such audits are severely limited if the providers and users of ‘high-risk’ AI systems in the fields of ‘law enforcement, migration, asylum and border control management’ are exempt from even the most basic standards of registration and human oversight.

While calls for algorithmic and data audits or impact assessments burgeon, there is in any case little to expect from such accountability tools if ‘high-risk’ AI systems used for security purposes operate in an ‘unregistered’ dark space. Transparency might be of limited support, and yet it is key to concrete modes of contestation, in the form of audits or strategic litigation, for instance. The EU AI Act could have provided an opening to precisely such forms of contestation – an opening that the latest compromise text has closed.

Suggested citation

Dimitri van den Meerssche and Rebecca Mignot-Mahdavi, ‘Failing where it matters most? The EU AI Act and the legalized opacity of security tech’ (The Digital Constitutionalist, 22 December 2022). Available at https://digi-con.org/failing-where-it-matters-most/
