1) The de-materialisation and re-materialisation of the real and its risks
Efficiency is certainly one of the most immediate benefits of digitalisation. We archive vast amounts of data on storage devices as small as a few centimetres, and we retrieve – among thousands – the exact file we need just by typing a few keywords. One rarely reflects on the amount of time, space, and material resources the "transition from atom-based to bit-based" reality saves us.
In this sense, although the tech industry is among the most polluting, the de-materialisation of physical objects into their digital counterparts also holds promise in terms of environmental sustainability and de-carbonisation. This is partly because, the more technology advances, the lighter, more portable, and more multipurpose – or simply "smarter" – devices become.
Technology allows for almost never-ending multitasking; the most trivial example is the smartphone we now hold in our hands. People can check their emails while launching their favourite music video on a TV screen. When someone rings the doorbell, there is no surprise: a push notification has already informed them that the food they ordered through an app is about to arrive. Still, they can check whether it is actually their dinner waiting outside the door thanks to the real-time security camera they control through another dedicated app. Security first: had they not been home, other notifications would have warned them that motion sensors had detected activity around the doorstep.
Digital devices mediate many of our daily actions, and each of these leaves a trail of "digital breadcrumbs" – bits and pieces of our movements, preferences, and more. While the Internet of Things (IoT) allows interoperability between smart objects and facilitates the management of activities pertaining to various spheres of our lives, sensors translate inputs from the physical world into data. Everything becomes measurable. Yet, the datafication of the real could not happen without physical hardware somehow "animated" through software and Artificial Intelligence (AI).
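As a toy illustration of this datafication – a minimal sketch in Python, with invented sensor and field names rather than any particular vendor's API – a single physical event, such as a motion sensor firing, is translated into a timestamped, machine-readable record: one more "digital breadcrumb" of the kind just described.

```python
import json
import time

def datafy_motion_event(sensor_id: str, confidence: float) -> str:
    """Translate a physical-world input into a timestamped data record."""
    event = {
        "sensor_id": sensor_id,    # which doorstep sensor fired (hypothetical ID)
        "event": "motion_detected",
        "confidence": confidence,  # detector's certainty, 0.0-1.0
        "timestamp": time.time(),  # when it happened
    }
    return json.dumps(event)       # serialised, ready to store or forward

# One real-world movement becomes one more measurable record.
print(datafy_motion_event("doorstep-01", 0.93))
```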
Drones and Unmanned Vehicles (UVs) give us a clear picture of the coexistence between the de-materialisation of the real into data and the re-materialisation of immaterial code animating inorganic matter. As recent headlines report, a team composed of a small aerial drone and a Quadrupedal Unmanned Ground Vehicle (Q-UGV) is patrolling the archaeological site of Pompeii to spot 'illicit tunnels used by tomb robbers' and monitor the structural conditions of the site. In a similar fashion, some industry sectors leverage the broad array of sensors UVs can be equipped with to map facilities and translate them into digital blueprints, or to conduct remote inspections in dangerous situations – for instance, when a leak of water, gas, steam, or chemicals is suspected.
As AI transforms common objects into their "smart" versions, the evolution of cities into smart cities will make them the ultimate digital ecosystem. From a slightly different perspective, urban spaces are steadily becoming 'security-scape[s]', the "natural habitat" for the deployment of advanced tech tools in support of security actions. The adoption of Facial Recognition Systems (FRSs) or other biosensing instruments is the first step toward fully-fledged "reality mining". Once our faces and identities are translated into code, the monitoring of our movements, habits, relationships, and more becomes possible, creating potential interferences with the free enjoyment of a broad spectrum of fundamental human rights.
These very days, we are all witnessing examples of digital authoritarianism. In this regard, beyond internet shutdowns and surveillance practices, many of the threats that recent uses of AI-powered instruments pose to democracy, human rights, and the rule of law (RoL) can be inferred from the abuses carried out by authoritarian and illiberal regimes. At the European Union (EU) level, heated debates on the final text of the proposed "Artificial Intelligence Act" (AIA) have seen similar concerns emerge from policymakers and civil society at large.
2) Individuals on the grid as living meta- and content data
Most discussions about the widespread use of biometric identification systems (BISs) generally revolve around some valid and well-identified issues. For instance, data on physical, physiological or behavioural characteristics are intrinsically linked with something very intimate, i.e. our identity. Facial features and expressions have a lot to do both with our innate ability to perceive "others" as human beings and with how we communicate with each other. Of course, if our "physical identity" is almost constantly verifiable through its automated association with attributed identifiers (e.g. name and surname) and biographical identifiers (e.g. address, date, and place of birth), some would instinctively be more cautious in freely expressing certain thoughts, ideas, or aspects of their personality – particularly when these contrast with the opinions more easily accepted by the vast majority of a given polity.

Put simply, the generalised deployment of FRSs or other BISs in public spaces could severely exacerbate traditional forms of "chilling effects", i.e. behavioural restraints or changes due to the consciousness of "being watched" by others. In turn, this might hinder the exercise of a broader set of mutually reinforcing human rights, such as freedom of thought and expression or freedom of peaceful assembly. Considerable interferences would also involve the rights to privacy and data protection. As "qualified yet enabling rights", these play a fundamental role within the system of human rights protection, acting as the first barrier against, inter alia, unwarranted or excessively intrusive attention. Privacy rights facilitate the enjoyment of private spheres, which allow for flourishing, free, and pluralistic societies. This is because the very possibility of excluding others from access to certain information about us facilitates the development and free expression of our personality and the exploration of social and political identities.
When a smartphone with multiple cameras, microphones, an accelerometer, a gyroscope, biometric sensors, and a global positioning system (GPS) traces all our moves and digital interactions, one could wonder what the problem would be with populating our cities with BISs for security purposes. It is true that, as the recent Pegasus scandal demonstrated, even foreign governments may hack our smartphones remotely, accessing the broad range of information they continuously collect. Without going that far, in most cases it is we who voluntarily submit our precise location and other sensitive data to service providers, be it to look for new friends on social media or for the best restaurant in town. The thing is that – theoretically – there might still be room for members of the "privacy cult" to practise acts of resistance by choosing not to access some of these services, or by adopting privacy-enhancing tools like encrypted messaging apps, VPNs, proxy search engines, etc. In the most extreme cases, some could simply leave their smartphone at home, or use it only in certain circumstances while opting for "old school" cell phones from the pre-smartphone era for specific purposes. Faces, however, can hardly be "hidden, changed or encrypted".
Yet, truth be told, even the last-mentioned expedient would not be of much help in mitigating exposure to surveillance practices. Traditional phone calls do not benefit from end-to-end encryption either, leaving them at risk of interception. And both fake and real cell phone towers can provide approximate information on our location, as well as other metadata such as the phone numbers of the calling and receiving users, call duration, and so on. Indeed, for quite some time, the interception of communications has constituted one of the main and most powerful surveillance tools.
It is precisely in this area that the distinction between metadata and content data originated. The former consists of all the data and information concerning a communication except its factual content; the latter, of the substance of that communicative act. To use a metaphor, '[metadata] consists of the information on the outside of an envelope, while content data relates to the information contained within the actual letter'.
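To make the envelope metaphor concrete, here is a minimal sketch in Python – with hypothetical field names, not any actual interception format – of a call record split along that line: every field on the "outside of the envelope" can be collected without ever accessing the conversation itself.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallRecord:
    """A single phone call, split along the envelope metaphor."""
    # Metadata: the "outside of the envelope" - who, when, where, how long.
    caller: str           # calling number
    callee: str           # receiving number
    start: datetime       # when the call began
    duration_s: int       # call duration in seconds
    cell_tower_id: str    # approximate location via the serving tower
    # Content data: the "letter" itself - what was actually said.
    content: bytes = b""  # audio of the conversation (here, never captured)

# A metadata-only record: no audio, yet it already reveals who spoke
# to whom, when, for how long, and roughly from where.
record = CallRecord(
    caller="+390501234567",
    callee="+390507654321",
    start=datetime(2022, 7, 1, 21, 42),
    duration_s=780,
    cell_tower_id="PISA-012",
)
```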
In this sense, imagining a reality where no physical space remains "unmined" by biometric identification and other biosensing systems, one could be tempted to assert that we and our corporeal movements would actually become that envelope. The real-time mining of our facial features would make it possible to monitor our actions, habits, and relationships in the offline world. The recording and analysis of our daily activities, the places we visit, the people we meet, and the frequency and amount of time we spend in a specific context would disclose sensitive information – or at least some hints – about our lifestyle, interests, political and religious beliefs, sexual orientation, and more. In other words, we could become the living metadata of our offline existence. Similarly, progress made in other fields, such as emotion, gait or voice recognition, could reveal more than the mere existence of a certain interaction. By analysing our physical, physiological or behavioural – at times inadvertent – reactions to external stimuli, actions or interactions, such tools would detect something closer to the content data of our "real world communications and interactions" with others and our surroundings. If you will, the materialisation of surveillance and interception instruments renders individuals living sources of meta- and content data.
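A minimal sketch of this "living metadata" idea, assuming a purely hypothetical log of FRS sightings (person, place, hour of day): a few lines of aggregation already yield habits and associations without a single word of anyone's conversations being recorded.

```python
from collections import Counter, defaultdict

# Hypothetical FRS sighting log: (person_id, place, hour_of_day).
sightings = [
    ("p-042", "place_of_worship", 9),
    ("p-042", "clinic", 11),
    ("p-042", "union_hall", 18),
    ("p-042", "union_hall", 18),
    ("p-017", "union_hall", 18),
]

# Habits: how often each person is seen at each place.
habits = defaultdict(Counter)
for person, place, hour in sightings:
    habits[person][place] += 1

# Associations: people repeatedly co-located at the same place and hour.
co_presence = Counter()
for i, (p1, place1, h1) in enumerate(sightings):
    for p2, place2, h2 in sightings[i + 1:]:
        if p1 != p2 and place1 == place2 and h1 == h2:
            co_presence[(p1, p2)] += 1

print(dict(habits["p-042"]))       # lifestyle and beliefs hinted at by places visited
print(co_presence.most_common())   # who regularly meets whom, where, and when
```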
For those who consider such scenarios mere dystopian sci-fi: already in 2021, emotion recognition tools were being tested as advanced forms of polygraph in the detention camps of Xinjiang (China). And if some claim that 'what occurs in China will re-occur in western democracies within five to ten years', it is worth considering that at least thirty Danish fitness centres recently deployed an FRS which allegedly scans 'some 36000 data points for each face' to determine the 'mood, gender, age and ethnicity' of customers and to verify whether they hold a membership and a COVID-19 pass to enter the premises.
On a technical note, none of the privacy rights with which the deployment of such tools interferes is an absolute right; yet, in many democracies, their application would be restricted – at least to uses pursuing a legitimate aim and resting on adequate legal bases that satisfy proportionality and necessity requirements. The question is whether the guarantees granted by democratic systems based on the RoL would be sufficient to protect us from the risks associated with abuses of such intrusive tools.
3) Security exceptionalism in law enforcement and the rule of law in the EU: some thoughts
Regulating the design and use of modern and emerging technologies on the basis of the "trinitarian formula of liberal constitutionalism" may at least mitigate the adverse effects and risks their deployment could pose to our polities. In this sense, human rights, democracy, and the RoL constitute the founding values inspiring both the Council of Europe and the EU. In this direction, the proposed AIA and its explanatory memorandum make many references to the compatibility of certain AI systems with fundamental rights and EU values. For instance, following a risk-based approach, the proposal establishes a list of prohibited AI practices presenting a level of risk deemed 'unacceptable as contravening Union values […]'. Among these, Art. 5(1)(d) AIA prohibits 'the use of "real-time" remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement'. However, such uses are far from outlawed, as the following sub-paragraphs of the same article provide a numerus clausus of exceptions, further accompanied by other elements that ought to be taken into account whenever the use of such instruments is on the horizon.
On this point – as already noted – both the bases of and the indications for such deployments appear rather vague and would leave law enforcement agencies a broad margin of discretion in evaluating the existence or intensity of a number of decisive factors. When it comes to emotion recognition, similar tools and systems are considered merely "high-risk", hence falling within the less strict provisions the regulation currently lays down for that category.
If one considers some of the core elements on which the RoL is based, excessive discretion on the side of public authorities and abuses are precisely what should be prevented. And, already at this stage, even the European Court of Human Rights (ECtHR) has deemed some rather simple uses of facial recognition technology to create risks of arbitrariness, making the exercise of state powers 'obscure' (para. 86). In this regard, one of the basic ideas behind the RoL is that of limiting the discretion of state authorities in their vertical interactions with individuals. This includes the avoidance of legislative "black or grey holes" either allowing for 'unfettered discretion' or creating formal legal protections that lack substantive safeguards, i.e. 'lawful illegality'.
Having made these points, and considering legal certainty, the prevention of abuses, and access to justice, inter alia, basic elements of the RoL, recent events involving the EU's law enforcement agency EUROPOL seem rather disheartening. Earlier this year, the European Data Protection Supervisor (EDPS) issued an order against EUROPOL for the erasure of vast amounts of data retained in breach of the "data subject categorisation" principle. Of particular concern was the creation of a database at odds with the ECtHR's S. and Marper v UK doctrine, under which the indiscriminate retention of sensitive data on different categories of people had been considered to create risks of stigmatisation, friction with the presumption of innocence, and 'a disproportionate interference with […] the right to respect for private life' (para. 125). On a similar basis, considering how EUROPOL was processing large amounts of personal data on individuals with no established link to criminal activity, the EDPS exercised its corrective powers by imposing both a fixed time limit for filtering the personal data EUROPOL obtains and a one-year deadline to comply with the decision.
However, the adoption of Regulation 2022/991 on the processing of personal data by EUROPOL substantially legalised, with retroactive effect, the processing of data shared with EUROPOL before the entry into force of the amended Regulation, thus nullifying the work of the EDPS, which had started investigating the matter in 2019. At this stage, the EDPS has referred the matter to the Court of Justice of the European Union (CJEU), asking for the annulment of Arts. 74a and 74b of the amended Regulation. EU citizens can only wait and see whether security exceptionalism will supersede the EU's founding values, or whether the exercise of judicial power will "close the circle", granting respect for la séparation des pouvoirs and the RoL where the executive and legislative branches are unwilling or unable 'to limit […] discretionary [action]' by public authorities.
The fact remains that disruptive technologies are here to stay, and the challenges they pose to human rights, democracy, and the rule of law require the urgent adoption of effective measures ‘before [a] dystopian future becomes the status quo’.

Francesco Paolo Levantino
Ph.D. Candidate in International and European Human Rights Law | AI, Modern Technologies, and Surveillance | Sant’Anna School of Advanced Studies (Pisa, Italy).