
The Courtroom Strikes Back: Generative AI’s Force in Courts

Victoria Hendrickx
PhD researcher at KU Leuven, CiTiP-imec.

Victoria Hendrickx is a PhD researcher at KU Leuven, CiTiP-imec. Her research focuses on the use of AI in the judiciary and its impact on the right to a fair trial, in particular on the judicial duty to state reasons. Since 2022, Victoria has assisted with the organisation of the KU Leuven Summer School on the Law, Ethics and Policy of Artificial Intelligence. She is also co-editor of The Law, Ethics & Policy of AI Blog and assistant-editor of the Oxford Handbook on Digital Constitutionalism.

Reading Time: 6 minutes

A galaxy ruled by AI: the rise of generative AI systems in courts

The use of algorithmic systems in courts has surged in recent decades. Whereas they were previously deployed to digitalise courts and provide administrative support, such as facilitating electronic court filing, communication between personnel or case allocation, we have recently observed a shift towards algorithmic tools assisting judges with substantive tasks in the decision-making process. For instance, judges have long relied on recidivism risk systems or systems that calculate average sentences for crimes and recommend appropriate sentences. More recently, generative AI systems, such as ChatGPT, have gradually been used to assist judges in legal drafting. In 2023, a Colombian judge first relied on ChatGPT in drafting his judgement. Similarly, an Indian High Court judge used ChatGPT to summarise case law. Closer to home, a court of appeal judge in the UK admitted to using ChatGPT to write part of his judgement. The emerging guidelines on the use of generative AI systems in courts, such as those issued in the UK or by CEPEJ, indicate that their use in other countries will soon follow.

With the continued rise of generative AI systems in courts, questions arise as to their effects on the legitimacy of courts and their judicial decision-making. Legitimacy refers to the property of a legal authority, such as a court, of being worthy of its institutional role, leading people to believe the authority is appropriate, proper and just. In many of his works, psychology and law professor Tom R. Tyler emphasises the importance of legitimacy. Legitimacy, for example, makes the public more inclined to accept and comply with decisions, and thus creates trust in courts. This prompts the question of whether judges’ use of generative AI systems affects the legitimacy of courts. This blog post explores whether generative AI systems constitute a new hope for the legitimacy of courts or rather feed the dark side by negatively impacting it.

Generative AI systems in courts as a new hope: how they enhance the legitimacy of courts

The legitimacy of courts depends on how society perceives courts and judges. By relying on generative AI systems to summarise case law or facts, judges can allocate more time to the core task of adjudicating cases. By employing generative AI systems to assist in drafting decisions, judges can not only expedite their processes but also effectively tackle persistent backlogs. Generative AI systems can also be deployed as “virtual sparring partners”, with judges interacting with the systems to gain new insights and engage critically with their own line of argumentation. Those efficiency gains allow cases to be heard faster and subsequently strengthen substantive rights, such as the right of access to justice.

The implementation of generative AI systems in courts not only accelerates the administration of justice but also has the potential to enhance its quality, thereby fostering greater trust in the functioning of the judiciary. In that way, it can be argued that generative AI systems increase the legitimacy of courts and their judicial decision-making – similar to other new technologies, such as online proceedings, which have been shown to enhance the legitimacy of courts.

The dark side of generative AI systems in courts: how they undermine the legitimacy of courts

Despite the potential of generative AI systems to enhance the legitimacy of courts, these systems may equally undermine it. Several factors can compromise the legitimacy of courts that rely on generative AI systems.

One of the most persistent challenges with generative AI systems pertains to the data used to train them. In many cases, it is not clear what data was used and how the training process occurred. The so-called black box character of the systems prevents identifying which sources have been selected and whether they are sufficiently representative or accurate. As a result, it is questionable whether the answers generated by the systems are reliable. In addition, under-representative datasets can lead to the perpetuation and exacerbation of biases. A recent UNESCO study shows that biases in the large language models underlying generative AI systems are highly prevalent and manifest themselves in the answers the systems generate. This may be all the more worrisome in high-stakes contexts such as the judiciary, where trustworthy decisions are essential to the legitimacy of courts. Related is the risk of generative AI systems hallucinating, i.e. generating highly plausible but inaccurate or incorrect information. For example, a New York lawyer recently relied on ChatGPT and subsequently cited a non-existent case in his argumentation. This immediately undermines the reliability of the system. By analogy, hallucinations can be just as worrisome, if not more so, in the work of judges.

Legitimacy is further strained by the fact that generative AI systems are typically developed by private companies. When these companies design and develop such systems, they make not only technical choices but also inevitably embed certain values that can affect judges. For instance, a developer might omit specific case law from its dataset, resulting in unrepresentative output. Moreover, the influence of private companies inherently touches upon judicial independence and impartiality. Consider the above-mentioned Indian judge who used ChatGPT to summarise case law: while initially seeming innocuous, the example underscores how even the summarisation of case law can be influenced by the design and development choices of private companies. Projecting this onto the Star Wars context, do we really want the Pyke Syndicate or the Hutt Clan designing the decision-support systems of the Galactic Senate? A related concern is that AI systems – unlike human judges – have not obtained their mandate and authority through democratic processes, such as elections or appointments, but rather emerge without any substantial democratic scrutiny.

Concerns also arise in relation to privacy, data protection and the processing of sensitive data. Take, for instance, the Colombian judge who sought guidance from ChatGPT on whether to grant medical benefits to a minor. Inquiries regarding children’s healthcare involve sensitive data that require cautious handling. When it is not clear where the data is stored (e.g. on which servers, in which countries), how long it is retained, and whether it is further processed for other purposes, this ambiguity surrounding the systems can undermine the courts’ legitimacy when they rely on them. In fact, a recent study indicates that algorithmic decisions in complex cases, arguably including those involving the healthcare of minors, are perceived as less trustworthy than human decisions.

Finally, ethical concerns further erode the legitimacy of courts. AI systems are socio-technical constructs, meaning that they are not merely technical artefacts but also shape society. Therefore, when judges rely on generative AI systems to draft decisions, those decisions carry normative implications for society and its structure. For instance, a Texas judge banned the use of ChatGPT in his court, citing that such systems are “unbound by any sense of duty, honour, or justice”. Given the scale at which generative AI systems operate and generalise, and their limited explicability, decisions formed while relying on them may affect not only individuals but society at large. Moreover, there is a risk of automation bias, whereby judges rely excessively on algorithms and AI systems despite their unreliability, inaccuracy and lack of robustness, as discussed earlier.

Generative AI systems in courts: a double-edged lightsaber

As explored, the integration of generative AI systems in courts is not taking place in a galaxy far, far away, but rather extends across the whole legal galaxy. While these systems offer the potential to enhance the legitimacy of courts by streamlining processes and bolstering fundamental rights, they constitute a double-edged lightsaber. Their ability to enhance efficiency and uphold rights stands in stark contrast to the risks they pose to the legitimacy of courts, given their susceptibility to unreliability and inaccuracies.

Therefore, the use of generative AI systems warrants further scrutiny. This is especially true in high-stakes contexts such as the judiciary, where maintaining trust in courts is paramount. Instead of a total ban, guidelines for their use are preferable. One viable approach involves fostering AI literacy within the judiciary, equipping judges with the training necessary to comprehend the risks and limitations of generative AI systems. Existing guidelines from CEPEJ and the UK provide a good first attempt, underscoring the significance of technical education on AI and an understanding of algorithmic functionalities and capabilities.

Another approach entails focusing on transparency in judicial decision-making. One avenue to consider involves reassessing the judicial duty to state reasons, i.e. the obligation of judges to provide reasons whenever they rule on a case. While most jurisdictions adopt a formal duty to state reasons, requiring reasons regardless of their correctness, it may be worth strengthening this duty towards more substantive justifications. By requiring a more robust explanation, judges are prompted to evaluate their reliance on generative AI systems conscientiously. Moreover, such measures foster increased transparency and accountability, thereby reinforcing the legitimacy of courts and their judicial decision-making. Undoubtedly, the integration of generative AI systems presents a Force to be reckoned with in judicial decision-making, necessitating ongoing research to safeguard against rogue judges and the allure of Sith practices.
