Notes about the effects of ChatGPT4's unpredictability on the law of constitutional democracies


In this post, I outline some effects of ChatGPT4's unpredictability on the law of constitutional democracies and suggest responses to the problems it creates.

1. Unpredictable fields

Borges may have anticipated AI's caricature of causality in the digital era through works such as "Tlön, Uqbar, Orbis Tertius", "The Lottery in Babylon", and "The Library of Babel", which portray societies and power consolidated by unknowable decisions. Through his game of chance with causality, Borges exposed our false belief in finished knowledge. Today, machine learning (ML) algorithms, the pillars of AI, create an opaque scenario of interaction between humans and machines (Brynjolfsson and McAfee), shaping a black box society (BBS) (Pasquale). The autonomy of these algorithms is directly proportional to the unpredictability of their outputs. They recognize patterns within big data through correlations, not causality, knowing only what but not why. This challenges prediction, the brain's main job (Feldman Barrett), which operates within a causal framework. The BBS also forecloses possible futures by encoding the past, capturing solutions and predictions in schemas drawn from the historical events and values that have driven algorithmic programming (Hildebrandt). Thus, accepting a large amount of web text as representative of all humanity, as in large language models (LLMs) such as ChatGPT4, may perpetuate dominant viewpoints and biases, increase power imbalances, and further reify inequality (Bender et al.).
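
To make the contrast between correlation and causality concrete, here is a minimal Python sketch (my illustration, not drawn from the cited authors; all variable names and numbers are invented) in which two quantities correlate strongly only because a hidden common cause drives both, so a pattern-learner captures the what while remaining blind to the why:

```python
# A toy demonstration (illustrative only): two variables driven by a
# hidden common cause correlate strongly even though neither causes
# the other, which is all a correlation-based learner can "see".
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(25, 5, 10_000)                 # hidden common cause
ice_cream_sales = 2.0 * temperature + rng.normal(0, 3, 10_000)
drownings = 0.5 * temperature + rng.normal(0, 3, 10_000)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation(ice_cream_sales, drownings) = {r:.2f}")
# Prints roughly 0.6: a learner trained on these two columns alone
# would "predict" drownings from ice-cream sales, capturing the
# pattern (what) with no access to the cause (why).
```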

In principle, law fractions causality, shielding it from being opened up to chance (Mascitti, 2019). The law's purpose, to do justice, rests on setting rules within a context of certainty that avoids the uncertainty of chance; chance is tied to causality because the effects of certain causes are not foreseeable. ML-based AI systems therefore strike at the columns of the legal architecture. In turn, law, as a cultural subsystem, must adjust to do justice in the digital era. Otherwise, its normative structure could collapse, and the cost would then be higher because we would have to rebuild it.

2. The spider’s web of big data and deep learning algorithms

ChatGPT4 operates through a chatbot and belongs to the genre of generative AI. It is a general-purpose AI that uses deep learning and reinforcement learning from human feedback (RLHF), and it is trained on extensive amounts of data extracted from the internet. It combines linguistic forms from its training data based on probabilities, without understanding the meaning of the text, acting like a stochastic parrot (Bender et al.). LLMs require enormous computing and data resources, which are mostly controlled by Big Tech companies. These companies have become monopolies and market makers by consolidating their leading roles at both the platform and infrastructure levels, which is why they are often referred to as infrastructural platforms (Belli). This concentration of power raises concerns about discrimination, privacy, security, and environmental impact. Only a few companies, such as Google/Alphabet, Meta, and Microsoft (and its investee OpenAI), have the resources to develop these models, including the pre-trained models offered as part of cloud AI services (Kak and Myers West).
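
To illustrate the "stochastic parrot" point in the simplest terms, the following toy Python sketch (my illustration; the vocabulary, scores, and temperature are invented, and nothing here reflects ChatGPT4's actual implementation) shows probability-weighted next-token sampling, the mechanism behind run-to-run variation in outputs:

```python
# A toy sketch of next-token sampling: an LLM-style decoder draws each
# token from a probability distribution over its vocabulary, so the
# same prompt can yield different continuations on different runs.
# The vocabulary and scores below are invented for illustration.
import numpy as np

rng = np.random.default_rng()
vocab = ["the", "court", "model", "holds", "predicts", "."]
logits = np.array([2.0, 1.5, 1.4, 1.0, 0.9, 0.2])  # made-up model scores

def sample_next_token(temperature=0.8):
    """Softmax over the scores, then a weighted random draw; a higher
    temperature flattens the distribution and increases variability."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

# Five draws from the same "prompt" rarely agree, which is precisely
# the unpredictability this post is concerned with.
print([sample_next_token() for _ in range(5)])
```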

In the following, I mention some effects of ChatGPT4's unpredictability on legal normative systems and offer potential solutions to the challenges it poses.

a) In most constitutional democracies, ChatGPT4 outputs cannot be considered intellectual property, since the specific output can be predicted neither by the users of generative AI nor by the creators and owners of the system; see US Copyright Office, Zarya of the Dawn. The US Supreme Court, for instance, asserted that the "author" of a copyrighted work is the one "who has actually formed the picture," i.e., who acts as "the inventive or master mind" (Burrow-Giles), and it recently rejected computer scientist Thaler's challenge to the US Patent and Trademark Office's refusal to grant patents for inventions created by his AI system (Thaler v. Vidal). However, imagine that the UK recognized users' intellectual property in outputs obtained from ChatGPT4 under Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA). The UK is unique in offering copyright protection for works created entirely by a computer. Section 9(3) of the CDPA states that "[i]n the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken," leaving some ambiguity as to whether this refers to the model's developer or its operator (Vincent). Thus, if ChatGPT4 creates an original output for User X but later generates the same work for User Y, the former could claim copyright infringement. To prove infringement, User X would need to demonstrate copying and a causal link between the two works. If User Y can show that there was no intentional or even unintentional copying of User X's work, it may be challenging for User X to establish the causal nexus, by virtue of the system's unpredictability (Simpson and Miller-McCormack), as we will see in the next item.

If intellectual property in these outputs is not recognized, data, the source material of these AI systems, returns to the population as a benefit. In other words, there would be a collectivization of data (though data perhaps obtained illegitimately, e.g., without a lawful basis for processing, by a private company, OpenAI; see the order issued by Italy's watchdog) that facilitates the broad distribution of the benefits arising from this AI system. Nevertheless, the rising use of ChatGPT4 could lead to an automation bias in which a majority of voices is echoed while minority discourse is left out, as we saw in Section 1, affecting democracy; it could also consolidate power in this system rather than democratize AI development.

b) Elsewhere, I have asserted that adequate causation is insufficient as a liability element for damage caused by autonomous systems such as ChatGPT4 because, in some cases, their actions become unforeseeable to designers, manufacturers, owners, and operators (Mascitti, 2021). Adequate causation operates as foreseeability in the abstract, unlike fault as an attribution factor, which constitutes foreseeability in the concrete case. Hence, adequate causation relates to the foreseeability of harm and determines whether a prudent person would have taken preventive measures against the risk, thus establishing whether such measures could reasonably be expected. Consequently, I consider that a guarantee fund should be created, independent of compulsory civil liability insurance, to compensate victims of damage produced by these autonomous mechanisms precisely when, owing to their learning systems, unpredictability prevents the corresponding assignment of liability.

c) The European Parliament has suggested adding to the AI Act's list of high-risk categories AI-generated texts that could be mistaken for human-generated ones, as well as deep fakes depicting people saying or doing things that never happened. However, generative AI systems differ from traditional AI systems in that they lack a predefined intended purpose and scale of adoption, which challenges the current approach of the AI Act in at least three ways: the difficulty of accurately categorizing generative AI systems as high-risk or non-high-risk, the uncertainty surrounding future risks associated with AI, and apprehension over private risk ordering (Helberger and Diakopoulos). These authors propose that general-purpose AI systems be considered a risk category in their own right, subject to legal obligations and requirements that fit their characteristics.

Beyond that, unlike the AI Act's proposal for unilateral and technical control (see Ansari and Marda), I suggest creating a public-private Algorithmic Center, with multistakeholder involvement and citizen participation, aimed at controlling the design of algorithms for high-risk systems both ex ante and ex post (Mascitti, 2022). This center would enable cooperation among its members to adjust the law to the complex and dynamic nature of the algorithmic empire in the BBS. However, the intended purpose and conditions of use for generative AI models are ultimately defined in the contractual relationship between user and provider (Helberger and Diakopoulos). For that reason, I propose that the Algorithmic Center, which, being partly composed of public officials, tends to protect the common good, also review contractual terms and instructions, instead of leaving them to exclusive collaboration between the parties. Both measures, the control of algorithmic design and the review of contractual terms and instructions, draw on technology companies' expertise, academic knowledge, the public-interest function of officials, and the democratic safeguard of citizen participation, which allows communities to document harms; together, they would reduce the incidence of unforeseeable damage to fundamental rights.

d) As regards compensable damage, law avoids recognizing legal effects based on chance in situations that exceed the limits of the loss-of-chance doctrine. Under that doctrine, chance is the certain objective probability, and not the mere possibility, of obtaining a gain or avoiding a loss, provided that such probability, which is not certainty, is sufficient. The probability must go beyond the merely hypothetical, and the requirement that damage be certain, whether actual or future, is assessed according to "the natural and ordinary course of things"; see, e.g., Article 1727 of the Argentine Civil and Commercial Code (ACCC) and UKSC, Perry v Raleys Solicitors. In the BBS, where automation bias is growing exponentially, a ChatGPT4 output could be wrong about a person (AI systems do not "hallucinate" (Klein)) and influence, e.g., election results or the non-hiring of a professional for a given job. Here, the algorithmic functioning of this system through correlations will deviate from the natural and ordinary course of things, extending or escalating damage unpredictably and eluding the normative frame of the loss of chance. In that way, ChatGPT4 could become a mathematical weapon of mass destruction, whose three distinctive elements are opacity, scale, and damage (O'Neil). Hence the need to control algorithmic designs before they enter the market, as we saw in the previous item.
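
As a purely arithmetical illustration of the doctrine just described (a sketch of mine, not a rule from the ACCC or Perry v Raleys; the 50% threshold is an invented placeholder, not a legal standard), loss-of-chance compensation is commonly quantified as the probability of the lost chance times the value at stake, once that probability clears the "mere possibility" bar:

```python
# A minimal sketch of loss-of-chance quantification (illustrative
# only; the threshold and figures are invented, not legal rules).
def loss_of_chance_damages(probability: float, value_at_stake: float,
                           threshold: float = 0.5) -> float:
    """Proportional damages if the chance was a 'certain objective
    probability'; nothing if it was a mere possibility."""
    if probability <= threshold:
        return 0.0
    return probability * value_at_stake

# e.g., a 60% chance of winning a claim worth 100,000 units:
print(loss_of_chance_damages(0.6, 100_000))  # 60000.0
```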

e) Full algorithmic transparency is an impossible mission. A complete explainability requirement could thus violate the rule-of-law principle of not requiring the impossible (Mascitti, 2023), given the unpredictable dynamics of ML over big data that generate the opacity of these algorithms; explanation is more akin to painting the black box than to making it transparent (Cabitza et al.). However, the multistakeholder cooperation embodied in the Algorithmic Center could facilitate [1] robust explainable AI based on empirical evidence and [2] justification, i.e., showing that the system operates under the law (Mascitti, 2023).

f) Regarding the characteristics of preventive action, Article 1708 of the ACCC, for example, establishes that it proceeds against any unlawful act or omission. The production or aggravation of the damage must also be foreseeable; the civil liability system therefore acts ex ante, with the protection of chance situations as its limit. In this sense, see also, e.g., Articles 37 and 140 of the Italian Consumer Code. Preventive action thus tends to avert future, imminent, and certain damage. Consequently, regulation based on the precautionary principle, which presupposes scientific uncertainty in the face of possible scenarios of severe and irreversible damage, is needed to reduce the risk of damage caused by ChatGPT4 given the unpredictability of its actions in some cases, e.g., misinformation at scale. The Algorithmic Center would be key to analyzing the viability of these preventive and precautionary functions.

3. Conclusion

I have shown how ChatGPT4 operates as an archetype of the BBS: it is a deep learning system that requires an immense volume of accumulated data, computational power, and cloud computing, i.e., infrastructure as a tool for the algorithmic analysis of data used for social, political, and economic control (Brevini and Pasquale), thereby increasing social complexity through its unpredictability. Hence, I wonder to what degree the unpredictability of ChatGPT4-based search will change existing business models, e.g., the googlization of everything configured by the advertising model and instantiated usage data, which generate an automation bias that supplies prediction products for surveillance capitalism (Ridgway).

In turn, confronted with ChatGPT4's unpredictable, correlation-based functioning, the law must adjust and anticipate its responses so as to avoid measures that fall into the causality trap pointed out by Borges and are consequently insufficient and impotent. Thus, I have offered some answers: the Algorithmic Center, strict liability, a guarantee fund, and reliance on the precautionary principle. Lastly, I propose that law intervene in the economy to enhance or increase human capabilities (Brynjolfsson; Samuel), e.g., through the Algorithmic Center and through incentives for the development of algorithmic models, including LLMs, that promote the flourishing of user creativity. This would foster real democracy through generative AI, together with respect for human rights and the rule of law, which constitute the means of politics.

Matías Mascitti
Lawyer and PhD

Matías Mascitti is a lawyer with a PhD in Law from the University of Buenos Aires (UBA) and a visiting professor at the Center for Technology and Society at FGV Rio de Janeiro.
