1. Introduction: News outlets and FRT identification/verification
A year and a half ago, Natalia wrote a blog post about the use of Facial Recognition Technology (FRT) in the Russia-Ukraine war. There, she argued that deploying such a powerful technology could exacerbate some of the threats posed by the conflict itself. Unfortunately, history seems to follow a cyclical pattern, and we are now witnessing the disturbing and violent escalation of the ongoing Israel-Hamas conflict, particularly after the attacks carried out by Hamas against the Israeli civilian population. Amid the harrowing flood of information regarding those attacks and the actions that followed, a piece by BBC Verify claimed to have potentially identified one of the attackers involved in the “massacre” at the Supernova Festival using FRT. As the article reads:
‘The BBC has analy[z]ed the footage and ran still images of the gunmen who were visible through a facial recognition tool. It matched one of the faces with images of a man in police uniform which were available on the website of Gaza’s Nuseirat municipality. We compared these through Amazon Rekognition software and got a similarity score of between 94-97% (some campaigners, however, have raised concerns that non-white faces can be falsely identified on facial recognition tools).’
According to Kashmir Hill, a tech reporter at the New York Times and author of a recent book about the US company Clearview AI, a “face search engine” was used first; yet the BBC article omits important information, including which software was used and against which reference database the still images of the gunmen were compared to find a “match”. From this account, it appears that, in a second phase, the results of the initial search were further “verified” using FRT, specifically Amazon Rekognition software. Similarly, on October 29, 2023, following a “viral tweet”, other members of the BBC Verify team claimed to have run ‘AI face recognition tests using several screengrabs from […] two high-res videos’ to “verify” whether the “two Palestinian women” featured in the recordings, portrayed as (1) a Hamas supporter and (2) a civilian victim in the two separate videos, were actually the same person. They concluded that ‘[t]he women in the two videos […] are not the same’.
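The “similarity score” the BBC cites corresponds to the per-match Similarity percentage that Amazon Rekognition’s CompareFaces API returns in its FaceMatches list. A minimal sketch of how such a response might be interpreted follows; the response values and the `is_plausible_match` helper are illustrative assumptions, not the BBC’s actual data or method:

```python
from typing import Optional

# Sketch of interpreting a Rekognition CompareFaces-style response.
# The dict shape mirrors the API's output ("FaceMatches" entries, each
# carrying a "Similarity" percentage); all values here are illustrative
# assumptions, not the BBC's actual data.

def best_similarity(response: dict) -> Optional[float]:
    """Return the highest Similarity score among matched faces, if any."""
    matches = response.get("FaceMatches", [])
    if not matches:
        return None
    return max(m["Similarity"] for m in matches)

def is_plausible_match(response: dict, threshold: float = 90.0) -> bool:
    """Hypothetical decision rule: a 'match' exists only relative to a threshold."""
    score = best_similarity(response)
    return score is not None and score >= threshold

# Illustrative response in the range the BBC reported (94-97%).
sample = {"FaceMatches": [{"Similarity": 95.4}], "UnmatchedFaces": []}
print(best_similarity(sample))     # 95.4
print(is_plausible_match(sample))  # True
```

Crucially, a similarity percentage is not a probability that two images show the same person: what any score means depends on the threshold chosen and on the error rates of the underlying model, which is part of why the BBC’s own parenthetical caveat about the false identification of non-white faces matters.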
These events mark one of the first instances in which a major news outlet has publicly disclosed using FRT for its own investigations into such sensitive events. This does not necessarily imply that other news outlets or actors have not used similar methods in the past; such practices may simply not have been disclosed or known to the public at large.
2. Problem setting: OSINT, freedom of information, and human rights
The use of Open Source Intelligence (OSINT) tools, including FRT, by non-governmental organizations (NGOs) is nothing new, especially in war scenarios or in documenting and investigating human rights abuses. For example, BBC Africa Eye has played a crucial role in investigating atrocities and arbitrary executions in Cameroon, starting from the forensic analysis of video footage circulated online. In that case, these efforts helped identify the perpetrators – a group of Cameroonian soldiers – who, after initial resistance from the local government, were eventually arrested and brought to justice. However, the role and use of FRT in the current case seem quite different and raise several questions.
The primary objective of news providers is to inform, bringing issues of general interest to the attention of the public and thereby also enabling the exercise of one component of the right to freedom of expression, namely the freedom to receive and access information. It was precisely to guarantee ‘rigorous editorial standards’, in light of the potential of AI and digital technologies to ‘turbocharg[e] the impact and consequences of disinformation’ – particularly after the war in Ukraine – that the BBC announced to its readership the launch of its “Verify Unit”. The latter is committed to ‘fact-checking, verifying video, countering disinformation, analy[z]ing data and – crucially – explaining complex stories in the pursuit of truth’. At this stage, however, the potential benefit of merely asserting that one of the attackers involved in the “Supernova Festival attack” may have been identified remains unclear, and the assertion might also set a dangerous precedent from multiple perspectives.
For instance, one might argue that announcing that the identity of one of the “gunmen” involved in this attack has – probably – been traced leaves the actions committed exactly as horrible as they are. It does not in any way ameliorate the suffering those actions caused to the people involved, their families, and a social context already deeply compromised. On the contrary – and not necessarily with reference only to the case at hand – such practices could fuel dangerous mechanisms. One of the fundamental functions of every criminal law system, including the international one, is to limit impunity and ensure justice by preventing further violence and private vengeance. This process also aims to “restore the integrity” of the legal and social orders violated by criminally relevant conduct. We already know that many AI systems, including FRT, can either serve ‘as a crime-solving tool or contribute to [further] victimi[z]ation; like in cases of harassment, stalking, or doxing’ – depending on end users’ practices. In other words, the accessibility and ease of use of many tech tools, absent proper guidelines and standards of use, could endanger the security of many individuals, perhaps erroneously associated with terrorist or criminal acts.
Even when FRT is used by domestic Law Enforcement Agencies (LEAs), in cases far less controversial and less publicized than the present one, it is almost unanimously accepted that the technology presents a wide range of ethical, legal, and fundamental rights issues. Its use should therefore be subject to strict controls and safeguards, and several stakeholders argue for strong regulation. In this context, one might wonder whether the use of FRT by a news outlet for purely informative purposes is in any way desirable. One should also weigh the fundamental rights implications against the basic guarantees that, under international human rights law, should be denied to no one – including alleged terrorists. This principle has recently been reaffirmed by the UN Special Rapporteur on Human Rights and Counter-terrorism, Ben Saul, who has just begun his mandate at such a critical moment.
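The risk of erroneous association is also a matter of simple arithmetic: even a highly accurate system will generate false positives when a face is searched against a large reference database. A back-of-the-envelope illustration, using assumed figures (a 0.1% false-match rate per comparison and a one-million-face database, both hypothetical):

```python
# Base-rate illustration with assumed, hypothetical figures:
# a 1-in-1,000 false-match rate per comparison, searched against
# a reference database of one million faces.
false_match_rate = 0.001
database_size = 1_000_000

# Expected number of incorrect "matches" for a single probe face.
expected_false_matches = false_match_rate * database_size
print(expected_false_matches)  # 1000.0
```

Under these assumed numbers, a single probe image would be expected to “match” a thousand innocent people; whether a reported match is meaningful therefore depends heavily on how many candidates were searched, not just on how accurate the system is per comparison.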
3. Good intentions, questionable results: What about UK data protection law?
The efforts of forensic and investigative journalists and experts – whether working for NGOs or news outlets, or even as amateurs – have proven crucial in countering mis- and disinformation in many instances. Particularly in the AI era, they help public authorities, human rights bodies, and courts at different levels. The cooperation of “third actors” in investigating situations of significant social concern opens possibilities for more ‘inclusive adjudicative process[es]’, and ‘digitalization has a positive, democratizing potential’. Earlier this year, the Office of the Prosecutor of the International Criminal Court (ICC) launched Project Harmony, a digital platform designed to ‘collect, store, preserve, analy[z]e, [and] review increasing quantities of complex evidence’ using AI-powered tools, like FRT.
Yet, when analyzing the case at hand from a data protection perspective, there are at least a few entry points to consider. For instance, for those who believe that the identity of the attackers involved in the “Supernova massacre” – or certain aspects of their identity, such as their political affiliation or ethnic background – is newsworthy, it should be noted that such data, as well as the processing of biometrics itself, is treated as particularly sensitive even under the 2018 Data Protection Act. In principle, any UK-based organization would be required to comply with its provisions when processing such data.
One could attempt to dismiss such claims by pointing out that the images analyzed were obtained from dashcam footage, part of which was subsequently uploaded to social media following the attack. On this basis, it could be argued that the data was “openly accessible”. Moreover, face-comparison services like Amazon Rekognition and analogous tools can be purchased or are publicly available on the Internet.
Nonetheless, it is worth noting that a similar line of reasoning is used by Clearview AI to justify scraping facial images from the web and extracting the corresponding biometric data from them – including that of EU and UK data subjects – without any adequate legal basis. This may well be cause for concern, as – although on different grounds – the First-tier Tribunal General Regulatory Chamber (UK FTT) has only recently granted Clearview AI’s appeal against an order previously issued by the Information Commissioner’s Office (ICO). Furthermore, regarding the role of news outlets as data processors in the course of their activities, in the US case Renderos et al v. Clearview AI et al the FRT company has contended that the whole process of developing and marketing its software – from scraping data to creating faceprints and delivering the app to customers – is protected by the First Amendment. If the Court were to accept this argument, a very similar one could be sustained by news outlets using FRT for information purposes, even in different (legal) contexts.
4. Conclusions: With great power comes great responsibility
These events offer an opportunity to reflect on the urgent need to establish precise rules for FRT use in various circumstances. That European entities essentially engage in practices similar to web scraping and AI-based transboundary violations of fundamental rights originating in third countries – albeit with good intentions and in different contexts – while Data Protection Authorities (DPAs) and regulators at different levels are struggling to protect data subjects from exactly those practices has a faint taste of “Eurocentric double standards”. It also demonstrates, once again, how vulnerable we remain in the face of such dynamics, owing to the insufficient deterrent effect of current regulations, shortcomings in monitoring, and jurisdictional constraints on adjudication and enforcement. OSINT instruments are today within anyone’s reach. They can be a force for good, or they can interfere with the privacy rights of individuals or entire communities. When used by private actors attempting to collaborate in “solving” serious crimes or complex and controversial issues, as in the case at hand, there are risks associated with ‘methodological and technological blind spots’, including the technical immaturity of the systems used and possible interference from racial, gender, or automation biases. Moreover, there are serious risks of supporting forms of “private justice” and encroaching on the fundamental liberal guarantees that should characterize criminal justice systems in democratic settings.
The views expressed within this blog post are solely of the authors and do not necessarily represent those of their employers or the organizations with which they are affiliated, including The Digital Constitutionalist.