More than meets the eye: the harm of generative AI beyond deepfakes


From celebrities like Taylor Swift to prominent political figures such as Joe Biden, from pornography to politics: the recent deepfake boom seems to spare no one. With the increasing availability of advanced generative AI tools, not least OpenAI’s latest invention, Sora, creating hyper-realistic deepfake content has become relatively effortless. While the discussion surrounding deepfakes is familiar at surface level, typically concerned with the harm inflicted upon the depicted individual, a more nuanced issue tends to be overlooked. Beyond manipulating images of existing people, generative AI is also capable of creating perfectly realistic but entirely fictional personas. Current legislative efforts have largely neglected the risks this poses. Yet this kind of content, too, may produce societal harms that demand urgent (legal) attention.

Harm, beyond the individual

It is understandable that the primary focus currently lies on deepfakes involving existing people, given the clear(er) damages at stake, which relate to principles of harm prevention. Deepfake pornography, for instance, raises significant concerns about privacy violations and the exploitation of individuals, primarily women, whose likenesses are used without consent. Similarly, political deepfakes exploit the authority of public figures to spread disinformation, exacerbating issues of privacy invasion and manipulation of public perception. However, even when AI-generated content depicts non-existent individuals, profound societal impacts may follow.

From porn … 

The first concern stems from an argument of cultural harm. Generative AI has the potential to exacerbate issues already present in (‘regular’) pornography. The tendency of generative AI to objectify and sexualize women (as illustrated, for example, here) contributes to pornography’s portrayal of women as passive objects of lust. Bad enough as this is in itself, many (from legal philosopher Joel Feinberg to Kathleen Richardson, founder of the Campaign Against Sex Robots) have suggested that this negative stereotyping may, in turn, increase the likelihood of real-world violence through the propagation and normalization of these attitudes. Moreover, the advent of generative AI renders pornography essentially limitless. Beyond these concerns (and as an extension of them) lies a deeper risk: the devaluation and erosion of human(e) relationships. Generative AI plays a lead role in the loss of interest in (and need for) engaging sexually and/or romantically with other individuals. Especially when combined with technologies such as Virtual Reality (VR) and haptics, generative AI may seriously hamper individuals’ incentives and ability to form meaningful relationships with one another. Finally, there seems to be something inherently ‘dehumanizing’ about consuming content devoid of actual human interaction, let alone entering into ‘relationships’ with virtual entities. While these concerns may seem far-fetched and dystopian, recent studies have evidenced an emerging ‘sex recession’, not to mention the many heartbreaks already caused by chatbots such as Replika (see also r/Replika).

… to politics

As the use of deepfakes extends well beyond the realm of pornography, so do its societal impacts. Political deepfakes typically depict politicians disseminating false information, resulting in reputational damage and the inadvertent reinforcement of those claims. However, AI-generated content involving fictitious personas may also produce harms worthy of consideration in this context. For example, imagine a video portraying a non-existent ‘expert’ disseminating disinformation in a critical field (see also here). A related fear concerns the advent of ‘botshit’: content built on AI ‘hallucinations’, where AI generates incorrect information. Alternatively, the speculation about the use of AI-generated content in the context of the Israel/Palestine conflict may serve as another example in this regard. In any case, generative AI has the potential to exacerbate the risks already present in disinformation, further fueling certain anti-establishment narratives and undermining the integrity of information ecosystems.

The current legal landscape

Though far from perfect, several attempts have been made in jurisdictions around the world to regulate deepfake content. However, none of these properly address AI-generated content involving wholly fictitious personas.

In Europe, the Digital Services Act (DSA) aims to tackle ‘illegal content’, which is defined according to the laws of each Member State (Article 3(h)). Although hailed as the EU’s flagship instrument against disinformation, the DSA does not take into account the risk of wholly AI-generated disinformation as described above. While AI-manipulated content copying the likeness of existing individuals will likely qualify as ‘illegal’ under several Member State laws already, strengthened (for deepfake pornography) by the recent proposal for a Directive on combating violence against women, it is unlikely that wholly AI-generated content will be understood as illegal. An exception to this rule might be the depiction of non-existent children in a sexually explicit manner, but this remains a grey area of its own (see also this INHOPE report).

The AI Act, on the other hand, is explicitly concerned with AI-generated content. While it defines deepfakes as ‘(…) an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (…)’ (article 52(3)), this is only one of the instances of general-purpose AI (GPAI) subject to specific transparency obligations. AI-generated content depicting non-existent individuals may well be covered under these provisions. Yet the question is whether merely disclosing that certain content is AI-generated will make any meaningful difference, considering the risks discussed above. As also illustrated here, people simply do not seem to care whether the content they are engaging with is ‘real’ or not.

Other jurisdictions have also seen attempts to regulate deepfake technology. In the United States, for instance, recent efforts include the Preventing Deepfakes of Intimate Images Act, as well as the recently introduced DEFIANCE Act. However, both proposals are limited to depictions of existing individuals, in line with the country’s strong First Amendment protections. Indeed, in Ashcroft v. Free Speech Coalition, the Supreme Court ruled against a ban on CSAM featuring non-existent children, finding its scope overly broad (in contrast to the majority European approach, also followed in the new Directive against online CSAM). Similarly, the United Kingdom’s Online Safety Act, enacted over a year ago, is likewise limited to existing people. Note, too, how most legal efforts center on deepfake pornography, with the regulation of political deepfakes seriously lagging behind.

The limits of law

While the AI Act acknowledges the concerns associated with generative AI at least in part, it is seriously doubtful whether transparency disclosures will be sufficient to tackle the risks outlined here. As humanity tries to navigate this new frontier at the intersection of the real and the artificial, these issues urgently need to be included in legal efforts and policy discussions. Moving forward, it is necessary to consider these risks more explicitly and to look beyond the more obvious, immediate harms inflicted on individuals directly involved in AI-manipulated content. Essentially, this will push us to (re)consider the established boundaries of the law.

The current focus on deepfake content involving existing individuals is understandable. Not only are the threats posed by this kind of content more obvious and tangible, but extending the definition of ‘harm’ to include instances involving non-existent subjects also confronts the law with its inherent limitations. For example, it could be argued that the harms presented by generative AI involving non-existent individuals are far too remote. The question then becomes who we are protecting if we criminalize AI-generated content involving non-existent individuals. In an attempt to answer this question, this post has suggested that when it comes to the potential risks stemming from generative AI, there is more than meets the eye.

Jessie Levano
Research Assistant at University of Amsterdam

Jessie Levano is a research assistant at the Criminal Law section of the University of Amsterdam, with a focus on philosophy of criminal law and digitalization. She is studying law (LLM) at the VU Amsterdam, where she completed a specialization in International Technology Law and is currently enrolled in the European and International Law track.
