Introduction
Facial Recognition Technology (FRT) is frequently in the spotlight because of its widespread use as a mass surveillance instrument. In a nutshell, FRT matches the facial image of an individual against a database of such images, typically for identification (a 1:N search) or identity verification (a 1:1 comparison). From its on-and-off deployment across the US and the heated debate over banning it under the EU's proposed AI Act to its increasing use in Asia, the controversy surrounding FRT is almost as ubiquitous as the technology itself.
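To make that distinction concrete, the minimal sketch below illustrates both modes, assuming (as modern FRT systems typically do) that faces have already been encoded as embedding vectors by a neural network. The function names, threshold value and gallery structure are illustrative only, not any vendor's actual API.

```python
import numpy as np

def cosine_similarity(a, b):
    """Score how alike two face embeddings are (1.0 = same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.75):
    """1:1 verification: does this face match one claimed identity?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.75):
    """1:N identification: return the best match in a whole database,
    or None if no score clears the decision threshold."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

Services like Clearview AI operate in the second, 1:N mode, against a gallery scraped from the web that reportedly numbers in the billions of images; at that scale, even a tiny per-comparison error rate yields a steady stream of wrong matches.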
Against this backdrop, it comes as no surprise that FRT is playing a role in the Russia-Ukraine conflict. One might therefore ask: will this crisis be the opportunity for the technology to redeem itself and be presented as a tool for good?
The use of FRT within the Russia-Ukraine conflict: the Clearview AI saga
On 22nd March 2022, a Wired piece described how European sleuths were using FRT to identify Russian soldiers. According to Wired, FRT was among the techniques used by Open Source Intelligence (OSINT) operatives to track and target Russian troops. In this context, Social Network Services (SNS) proved invaluable as a database against which to compare facial images from the conflict, for OSINT purposes and beyond.
Indeed, SNS are the main source for Clearview AI's facial image database. Clearview AI is a US company that offers facial recognition services. Its main clients are law enforcement agencies: according to a leaked client list disclosed by BuzzFeed News, these include US Immigration and Customs Enforcement, several offices within the US Department of Justice, and credentialed users at the FBI and Interpol. Since the company came to public attention, it has been accused of privacy violations, and those accusations have brought several lawsuits against it in California and Illinois.
Across the EU, Clearview AI's activities have also been the subject of several parliamentary questions, including whether its services were in use by any Member State; whether it holds any data on EU citizens and, if so, how those data have been processed; and whether its practices are consistent with EU data protection regulation and the EU-US bilateral agreements on data protection. The compatibility of Clearview AI's practices with EU data protection legislation has also been addressed in a letter from the European Data Protection Board, which concluded that ‘the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime.’ In the same vein, Clearview AI's services have been deemed illegal by the Hamburg DPA, the Swedish DPA, the Italian DPA and the French DPA.
In this scenario, only four days after the Italian DPA fined the company, Clearview AI was reported to have offered its services to the Ukrainian government. According to a Reuters article, the intended purposes of the technology within the conflict included ‘uncovering Russian assailants, combat misinformation and identify the dead’, ‘vet people of interest at checkpoints’ and ‘help people reunite with their families’. Although Reuters, echoing Clearview AI's CEO, claimed that Ukraine was already using FRT, in an interview on 15th March 2022 Ukraine's Vice Prime Minister and Minister for Digital Transformation, Mykhailo Fedorov, said of his government's partnership with Clearview AI: ‘the project is currently in very early development’. Later, the Washington Post reported that Ukraine's ‘IT Army[…] says it has used those identifications to inform the families of the deaths of 582 Russians, including by sending them photos of the abandoned corpses.’ Clearview's CEO also recalled ‘several “‘oh, wow’ moments” as the Ukrainians witnessed how much data […] they could gather from a single cadaver scan’.
FRT in conflict times: friend or foe?
While there are potentially good uses for FRT in the current conflict, such as finding missing persons and identifying the dead or war criminals, there are also limitations. First, the accuracy of FRT systems depends on the quality of the input data, and in a war scenario images will rarely be captured under anything close to optimal conditions. Even though the addition of artificial intelligence has made FRT considerably more robust, low-quality images, for instance those taken in bad lighting, can still produce false positives or false negatives. On this basis, I pinpoint three issues. First, an issue more ethical than legal: a victim could be incorrectly matched, causing distress to a family who believe their loved one is dead when they may not be. Second, wrongly identified Russian soldiers could eventually be prosecuted, or killed. Third, FRT's accuracy is compromised by alterations to a person's face, as Clearview's own CEO has admitted; injured or deceased people, precisely the main group that Clearview AI's software is intended to identify here, might therefore not be correctly matched.
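The mechanics behind those failure modes can be shown with a toy simulation. The model below is purely illustrative, not Clearview AI's algorithm: it treats poor capture conditions (blur, bad lighting, facial injury) as noise added to a face embedding and measures how often genuine pairs fall below a fixed decision threshold as degradation grows.

```python
import numpy as np

rng = np.random.default_rng(42)
dim, trials, threshold = 128, 5000, 0.75

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Sweep degradation levels: a crude stand-in for image quality.
for noise in (0.2, 0.6, 1.2):
    genuine_scores, impostor_scores = [], []
    for _ in range(trials):
        face = rng.normal(size=dim)                    # enrolled embedding
        degraded = face + rng.normal(0.0, noise, dim)  # same person, poor capture
        stranger = rng.normal(size=dim)                # a different person
        genuine_scores.append(cosine(face, degraded))
        impostor_scores.append(cosine(face, stranger))
    fnr = np.mean(np.array(genuine_scores) < threshold)    # true matches rejected
    fpr = np.mean(np.array(impostor_scores) >= threshold)  # strangers accepted
    print(f"noise={noise}: false negatives {fnr:.1%}, false positives {fpr:.1%}")
```

In this toy model, random embeddings of different people are almost never confused, so the visible effect is a false negative rate that climbs steeply with degradation. Real embeddings are far less cleanly separated, so an operator who loosens the threshold to recover degraded matches also raises the false positive rate: exactly the trade-off underlying the issues above.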
Finally, the use of Clearview AI's services comes at a high cost. If and when Ukraine joins the EU and embraces the GDPR, it remains to be seen what position this will leave Clearview AI's future use in. Beyond the privacy concerns already voiced, FRT systems are sensitive to algorithmic bias, and human decisions taken on the basis of FRT's outputs are themselves prone to automation bias (see the sketch after this paragraph). In a crisis scenario such as the one under discussion, bias can have grave consequences. Having such a capable tool at hand might seem attractive to Ukraine, but one must not forget that those who play with fire may get burned. Along these lines, it should be borne in mind that the International Committee of the Red Cross commentary on Article 13 of the Geneva Convention ‘recommends that images that identify prisoners of war […] are not published unless there is a “compelling public interest” in doing so’. The Red Cross has likewise warned about ‘the appropriate use by Detaining Powers of electronic means to identify prisoners of war – including biometrics –’. The reported practice of informing Russian families of soldiers' deaths, described above, is quite the opposite of such appropriate use, as it inflicts psychological damage on those families.
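What audits of FRT systems actually measure when they speak of algorithmic bias is whether error rates differ across demographic groups. The fragment below is a hypothetical illustration of such a disparity check; the records and group labels are invented, not drawn from any real evaluation of Clearview AI.

```python
from collections import defaultdict

# Hypothetical audit log: (group, same person in reality?, system said "match"?)
records = [
    ("group_a", False, False), ("group_a", False, False), ("group_a", False, True),
    ("group_b", False, True),  ("group_b", False, True),  ("group_b", False, False),
]

impostors = defaultdict(int)
false_matches = defaultdict(int)
for group, same_person, said_match in records:
    if not same_person:               # impostor pair: a "match" here is a false match
        impostors[group] += 1
        false_matches[group] += said_match

for group in impostors:
    rate = false_matches[group] / impostors[group]
    print(f"{group}: false match rate {rate:.0%} ({impostors[group]} impostor pairs)")
```

A system whose false match rate is materially higher for one group than another will, deployed at checkpoints or over casualty photographs, distribute its errors unevenly, and a human reviewer inclined to trust the machine's output is unlikely to correct for that.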
Conclusion
FRT has the potential to become a valuable asset in conflict times, for better and for worse. During wartime, FRT could help to identify victims, missing persons and the dead, and it could be used to identify potential targets. But it could also lead to wrongful arrests, wrongful targeting or the misidentification of a dead body, with irreversible consequences for human dignity and fundamental rights in general. The consequences of such vast and indiscriminate data processing after the crisis can, for now, only be guessed at. Unfortunately, technology in general, and FRT in particular, cannot escape the fate that any armed conflict entails. The use of FRT within the conflict will share the nature of the conflict itself: just as the conflict will not have only good consequences, neither will the use of FRT within it. The Ukrainian administration's acceptance of Clearview AI's offer will come at a great cost, one that will only crystallize when the conflict is over. But one thing can already be stated: there will be neither victor nor vanquished. Our aim, at this stage, should be to protect the victims of the conflict by preventing them from also becoming victims of the very technology that was supposed to help them.
Suggested citation
Natalia Menendez. ‘Does the end justify the means? Clearview AI and the use of Facial Recognition Technology within the Russia-Ukraine conflict’ (The Digital Constitutionalist, 5 May 2022).