Facial Recognition Technology (FRT) compares one or more facial images of a person against a database of facial portraits and returns the likelihood, expressed as a percentage, that they belong to the same individual. The technology has been in the public eye in recent years for its potential to become an instrument of algorithmic biometric mass surveillance. That potential was one of the main reasons why a wide range of stakeholders eagerly awaited the regulation of biometric systems in general, and FRT in particular, in the proposal for an EU AI Act. However, this attention, although necessary, has shifted the focus away from equally important aspects of the subject. This blog post therefore aims to discuss overlooked core notions of biometric technologies that play a relevant role in the regulatory debate.
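To make this mechanism concrete, the sketch below (our own illustration; the function names and the embedding inputs are assumptions, not any real FRT product's API) shows the one-to-many comparison at the core of FRT: a probe image's feature vector is scored against every entry in a database, and the output is a ranked list of likelihoods, not a yes/no identity.

```python
# Minimal sketch of 1:N biometric identification, assuming face "embeddings"
# (numerical feature vectors) have already been extracted from the images.
# Illustrative only; not a real FRT library API.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Rank enrolled identities by similarity to the probe: a likelihood, never a certainty."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```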
The AI Act as far as FRT is concerned
In April 2021, the European Commission published its AI package, a set of documents whose crown jewel was the Proposal for a Regulation laying down harmonised rules on artificial intelligence (hereinafter the AI Act). According to the European Commission, the proposal aimed to promote investment in and development of trustworthy AI technologies within the EU without overlooking fundamental rights and the rule of law. According to its first article, the AI Act regulates the placing on the market, putting into service and use of AI systems within the EU; prohibits certain AI practices; establishes requirements, and corresponding obligations, for high-risk AI systems; harmonises transparency rules for certain AI systems, including biometric categorisation systems; and lays down rules on market monitoring and surveillance. The piece of legislation is addressed to AI providers and users.[1]
One of the main distinctive characteristics of the AI Act is its risk-based approach. Under this approach, the regulation defines four levels of risk for AI systems (unacceptable, high, limited and minimal/none). Depending on the level of risk it poses, an AI system is subject to a different deployment regime, ranging from an outright ban through conformity assessments and certification schemes to unrestricted deployment. All remote biometric identification systems are considered high risk, including the use of remote biometric identification systems for non-law enforcement purposes and their non-remote or 'post' (non-real-time) use by law enforcement authorities. Finally, 'real-time' remote biometric identification systems in publicly accessible spaces are considered an unacceptable risk, with some exceptions for law enforcement purposes.[2]
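As a rough, purely illustrative simplification of this tiered logic (the regime descriptions are our paraphrase, not the Act's wording), the structure can be thought of as a mapping from risk level to deployment regime:

```python
# Illustrative simplification of the AI Act's risk-based approach; the
# regime descriptions paraphrase the proposal and are not legal wording.
DEPLOYMENT_REGIME = {
    "unacceptable": "prohibited, with narrow law-enforcement exceptions (Article 5)",
    "high": "conformity assessment, certification and ongoing obligations",
    "limited": "transparency obligations",
    "minimal": "no additional obligations",
}
```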
Biometric terms within the AI Act: friends or foes?
The AI Act adopts the same definition of "biometric data" as the GDPR[3] and the LED.[4] These data are defined as
personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data
The definition suggests that, to be considered "biometric", personal data need to satisfy a set of criteria. First, biometric data require "specific technical processing". The GDPR does not clarify how this should be interpreted; recital 51 of the same instrument assists through an example, specifying that the "processing of photographs should not systematically be considered to be a processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person". The expression "specific technical means" could be interpreted as the creation of a 'biometric template', i.e. the extraction of information from the raw biometric data source (for instance, extracting facial measurements from a picture). State-of-the-art FRT, through the application of several AI techniques, has made the concept of a "template" practically obsolete: the extraction of information from raw biometric traits now takes place through automated, end-to-end processes. In any case, it remains unclear how the expression should be interpreted and whether the technical processing that is a prerequisite for the comparison, such as mere storage in databases or the transformation of the traits into a digitised representation, could fall under the definition.
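To illustrate what the classical 'biometric template' reading of "specific technical means" would look like in practice, here is a hedged Python sketch (the landmark keys and the choice of measurements are our own illustrative assumptions, not a standard):

```python
import numpy as np

def make_template(landmarks: dict[str, np.ndarray]) -> np.ndarray:
    """Reduce detected facial landmarks to a small vector of measurements.

    The landmark dictionary would come from any face-landmark detector;
    the keys used here are illustrative, not a real detector's output.
    """
    eye_dist = np.linalg.norm(landmarks["left_eye"] - landmarks["right_eye"])
    nose_mouth = np.linalg.norm(landmarks["nose_tip"] - landmarks["mouth_center"])
    face_width = np.linalg.norm(landmarks["left_jaw"] - landmarks["right_jaw"])
    # Ratios make the template partly invariant to image scale.
    return np.array([eye_dist / face_width, nose_mouth / face_width])
```

In modern deep-learning pipelines, measurements of this kind are replaced by an embedding computed end-to-end by a neural network, which is precisely why the classical notion of a template has become blurred.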
Second, the expression "which allow or confirm the unique identification" is more controversial, and the subject is equally obscure within the regulation. From a scientific point of view, biometric data allow us to distinguish human beings and recognise them to a certain degree, depending on the distinctiveness of the biometric characteristic selected, the type and quality of the data used, and the modality considered (e.g. remote, post-remote or real-time). Unlike other identifiers, biometric data do not yield a clear-cut identification: the recognition of individuals through biometric data is always associated with a degree of uncertainty. According to an established view, biometric data should be considered as such "even if patterns used in practice to technically measure them involve a certain degree of probability". The expression could be interpreted as referring to the processing of biometric data for identification or verification purposes, i.e. for biometric recognition. On this reading, data processed for purposes other than recognition (such as categorisation/face analysis) would be excluded from the concept of biometric data.
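The probabilistic nature of biometric recognition can be made explicit in a few lines (again an illustrative sketch; the threshold value and function names are arbitrary assumptions):

```python
import numpy as np

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> tuple[bool, float]:
    """1:1 verification: do these two embeddings belong to the same person?

    The decision is a similarity score compared against a tunable threshold,
    so every "match" carries false-accept and false-reject probabilities
    rather than constituting a clear-cut identification.
    """
    score = float(np.dot(probe, enrolled) /
                  (np.linalg.norm(probe) * np.linalg.norm(enrolled)))
    return score >= threshold, score
```

Raising the threshold trades false acceptances for false rejections; at no setting does the output become certain, which is what the "degree of probability" wording quoted above acknowledges.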
In this regard, the recent amendments to the AI Act by the Council of the European Union proposed to remove the reference to a "unique identification" from the definition.[5] In this way, the outcome of a biometric comparison could be interpreted as a percentage of compatibility between data rather than a "unique identification". Furthermore, the interpretation of biometric data would become broader than the GDPR's definition, encompassing data processed for purposes other than recognition (such as categorisation). It should be noted that the recent amendments by the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs do not consider these aspects.
The draft report introduces the further concept of "biometric-based data", i.e. "data resulting from specific technical processing relating to physical, physiological or behavioural signals of a natural person, such as facial expressions, movements, pulse frequency, voice, key strikes or gait, which may or may not allow or confirm the unique identification of a natural person".[6] These data would be different "signals" of a natural person, some of which retain the ability to uniquely identify a subject. What, then, is the effective difference between "biometric data" and "biometric-based data"? The necessity of introducing such a partially overlapping definition remains unclear and adds to an already puzzling scenario. We (the authors) can only suggest that the regulator is referring to biometric data to be used for categorisation purposes, or to soft biometrics. "Soft biometric traits are physical or behavioral features which can be described by humans. Height, weight, hair color, and ethnicity are common examples of soft traits: they are not unique to the individual but can be aggregated to provide discriminative biometric signatures". Soft biometrics alone are therefore not suitable for uniquely identifying an individual, and because of this they could partially fit within the definition.
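A toy calculation may clarify why soft biometric traits narrow down, but do not pinpoint, an identity (the population size and trait frequencies below are invented for illustration):

```python
# Invented figures, for illustration only: soft traits shrink a candidate
# pool without ever reducing it to a single individual.
POPULATION = 1_000_000
TRAIT_FREQUENCIES = {
    "height_180_185cm": 0.08,
    "hair_red": 0.02,
    "build_slim": 0.30,
}

candidates = POPULATION
for trait, freq in TRAIT_FREQUENCIES.items():
    candidates *= freq  # naively assumes the traits are independent

print(f"Remaining candidates: {candidates:.0f}")  # ~480 people, not one
```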
Additionally, the AI Act introduces the concept of “remote biometric identification”, defined as
an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference data repository, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used.
(Recital 8 AI Act)
In the Slovenian Presidency's compromise text on the AI Act, the Council proposed to delete the adjective "remote" from the expression "biometric identification" because of the lack of clarity of the notion. The provision would thus cover all biometric recognition systems, including those involving direct contact between a capture sensor and the persons being recognised.
Conclusions
The AI Act 'inherits' from the GDPR (and the LED) the specific terms concerning biometric technologies in general and FRT in particular. This has advantages, such as cohesion and uniformity within the acquis communautaire, but also disadvantages: the perpetuation of vague and scientifically imprecise terms might end up compromising the effective applicability of the AI Act. Since the Act is still in the drafting process, we believe such questions should be addressed sooner rather than later.
Further, the notion of biometric data is neither clear nor faithful to the state of the art of FRT, given the introduction of deep learning techniques that have rendered the concept of biometric templates obsolete. Along the same lines, concepts such as 'technical processing' and 'unique identification' need further explanation within the regulation to avoid arbitrary interpretation. The same applies to the notion of biometric-based data: such a vague concept, which partially encompasses biometric data, categorisation traits and soft biometric traits, but none of them fully, does not bode well for a seamless and effective application of the regulatory instrument.
As a consequence of this terminological ambiguity, it is not clear whether categorisation applications that extract certain features of a subject from a facial image are considered to process biometric data. Depending on the interpretation of the term 'unique identification', they can be classified one way or the other. This will determine the regulatory regime applicable to such applications, with a severe impact on the fundamental rights potentially affected by the use of these systems.
Therefore, we consider that biometric scientists and regulators should work hand in hand. A regulation will only be effective if it takes into account the state of the art and the substance of its regulatory object. Technology providers and those who decide who may use the technology, and how, must join forces to make its application as beneficial as possible for everybody without compromising democracy, fundamental rights and the rule of law.
Suggested Citation
Natalia Menendez and Ernestina Sacchetto. ‘Words don’t come easy: Biometric terminology and its relevance within the AI Act’ (The Digital Constitutionalist, 18 May 2022). Available at <https://digi-con.org/words-dont-come-easy/>.
- [1] Article 2 AI Act.
- [2] Article 5 AI Act.
- [3] Article 4, para. 14 GDPR.
- [4] Article 3, para. 13 LED.
- [5] Recital 33 of the Council's text.
- [6] Article 3, para. 1, point 33.