
Companies’ liability for self-driving cars: what about criminal evidence?


In Nerantzi’s post for DigiCon entitled “There is someone to blame for a self-driving car accident, and it is the companies”, she diligently analyses the (complex) joint Report, published on 26 January 2022, of the Law Commission of England and Wales and the Scottish Law Commission on the regulation of automated vehicles (AVs). According to the Report, companies are to blame for a self-driving car accident (we encourage you to read her post for a clear overview of the Report). While courts in the US seem to adopt a different opinion, in the UK, the Report recommends the adoption of a new “Automated Vehicles Act” establishing a liability shift from the human driver to manufacturers and software developers.

In this response post, we want to draw attention to procedural issues arising from this shift. We believe that choices of substantive criminal law always intertwine with procedural aspects, which policymakers should not overlook.

Entanglements between criminal law and procedure

Criminal liability presupposes that an action or omission creates harm or a forbidden risk to rights or interests protected by a pre-existing law which qualifies such action or omission as a crime. It also requires that such action is attributable to an agent whose conduct is considered non-justifiable and culpable. Even if overly simplified, this description of criminal liability may guide us in understanding some of the challenges that self-driving cars pose to criminal law and criminal procedural law.

The attribution of an action to an agent requires that such action is adequate to produce harm (or to increase a forbidden risk). In contrast, an omission presupposes that the agent was bound by a specific duty to protect the rights or interests protected by law and failed to fulfil it, and that such failure created harm or risk to those rights or interests.

The framing of criminal liability as an action (intentional or negligent) or as an omission (cf. Gless, Silverman and Weigend (2016)) has decisive procedural consequences in the case of self-driving cars.

Let us take software developers’ criminal liability as a hypothesis. Where they acted intentionally to create harm by programming the car accordingly, the prosecution is required to prove that their action was relevant in the causal chain that generated the harm. If they acted negligently (e.g., they did not act with due diligence in some phase of the development process), the prosecution needs to prove that they did not observe a contextual duty of care during the development cycle. Where the legal order imposes a concrete and specific duty on software developers (more demanding than the one required by negligence), the prosecution needs to prove a failure to observe such concrete duty and that such failure caused harm. Similarly, the human driver may intentionally or negligently (and in some legal orders, by omission) cause harms relevant to criminal law.

Differences in framing criminal liability either as an action (intentional or negligent) or as an omission impact both substantive and procedural criminal law. As self-driving cars may move across jurisdictions where criminal liability is understood and framed differently, the Feasibility Study (2020) conducted by the Council of Europe Working Group on AI and criminal law suggested that an international convention should cover them. Such a convention would establish a common, basic framework concerning the use of AI in criminal law that would increase legal certainty and facilitate cooperation among criminal authorities, for instance concerning the transfer of criminal proceedings and the obtaining of evidence. This may bring relevant input on the standard of proof or may even provide a principled basis for the judicial balancing between the right to remain silent and truth-finding.

However, the political will to adopt a legal instrument such as the convention mentioned above does not free us from question zero: should criminal liability govern harm attributed to self-driving cars?

A relevant hurdle for criminal liability is the proof that concrete harm is attributable to the conduct of the agent (either the software developer or the human driver). There might be cases where the interaction between code, driver and environment is so complex that criminal liability becomes a hypothetical exercise. In particular, determining the causal (or even the moral) relation between the conduct of a software developer throughout the whole process of manufacture and the harm produced by a self-driving car might be a probatio diabolica. Unlike in private law, where a strict liability solution could be adopted, in the realm of criminal law such a solution would violate the presumption of innocence and, in a broader sense, the principle of blameworthiness.

Along the same lines, we must ask to what extent it is socially acceptable to consider that criminal law criteria for attributing responsibility cannot operate in the case of self-driving cars without affecting the rule of law, even where serious harms, such as death or physical injury, are at stake. In practice, the inability of criminal law criteria to objectively attribute harm to an agent leaves an imputation vacuum that, in the end, leads to what some authors call the retributive gap (Danaher (2016)).

If criminal liability is to be summoned to rule (at least some of) the harms caused by autonomous vehicles, it is relevant to define the standard of care to be demanded from software developers and drivers. The expected due diligence, risk foresight and other concrete duties must be detailed to the extent possible, as criminal liability can only be asserted where the potential agents are able to know the conduct that they may be liable for (lex certa principle). While those duties are defined by substantive criminal law, they will be relevant to guide the criminal procedure in terms of facts to look for and evidence to be produced in court.

Companies’ liability: a procedural deadlock

Focusing on the evidentiary aspects of criminal cases involving self-driving cars is crucial. Under the UK’s approach to self-driving cars, the software developer is liable where they were negligent in programming the software and failed to apply sufficient safety measures: if the software causes harm, the developer is responsible for not having programmed it with due diligence. The public prosecutor can prove this conduct in different ways, such as accessing the technical documentation, analysing the software’s source code, or questioning the developers. At the same time, however, this information is in the hands of the very subject (the software developer) that can potentially be held accountable.

These are cases that may give rise to a procedural paradox: while the software provider has, on the one hand, access to the main evidence to prove the case, on the other, they also enjoy the right to remain silent and not to contribute to incriminating themselves under Article 6(1) ECHR (cf. O’Halloran and Francis v. the United Kingdom, § 45; Funke v. France, § 44). More specifically, the ECtHR has clarified that this right does not only protect against the making of an incriminating statement per se but also covers the obtaining of evidence by coercion or oppression. Therefore, requesting the software provider to collaborate, hand over documents, or submit to the questioning of its developers could violate the right to remain silent. Is there any way to prove these crimes without infringing the right to remain silent?

Twofold possibilities

There are two possible ways to address this impasse. First, a proper balance must be struck between the interest in prosecuting crimes and obtaining evidence and the right not to collaborate. While the developers should never be compelled to provide incriminatory statements or testify, access to material evidence through a warrant does not necessarily violate the right to remain silent. In this sense, the ECtHR has been more lenient in cases where evidentiary material is obtained “from the accused through recourse to compulsory powers but which has an existence independent of the will of the suspect, such as documents acquired pursuant to a warrant, breath, blood and urine samples and bodily tissue for the purpose of DNA testing” (cf. Saunders v. the United Kingdom [GC], § 69; O’Halloran and Francis v. the United Kingdom [GC], § 47; see, however, Bajić v. North Macedonia, §§ 69-70).

A bird’s-eye view suggests that, beyond a situated conflict of principles, this impasse reflects two different procedural models, i.e., the crime control model and the due process model (cf. Packer (1964); Hildebrandt (2015)), a nomenclature distinct from the canonical division between accusatorial and inquisitorial models.

Second, the public prosecutor could resort to other means to obtain evidence. The software, where the evidence lies, could be scrutinised through means and techniques which do not require the developer’s collaboration. For instance, Diakopoulos proposes ‘reverse engineering’ to investigate black boxes. In his words, “Reverse engineering is the process of articulating the specifications of a system through a rigorous examination drawing on domain knowledge, observation, and deduction to unearth a model of how that system works” (Diakopoulos (2014)). Reverse engineering is a form of ‘decompilation’ (A Dictionary of Computer Science (2016)) whose main advantage is that it works without the source code. Based on the observation of inputs and outputs, one can “reverse engineer what’s going on inside” (Diakopoulos (2014)). Other practical techniques for investigating how the software works include statistical analysis of outcomes and sensitivity analysis (for an overview, see Koene et al. (2019)).
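To make the idea concrete, below is a minimal sketch of what such black-box probing might look like in practice. Everything in it is hypothetical: braking_controller stands in for the opaque system under investigation (in a real case, the compiled software itself would be exercised), and the thresholds, input ranges and function names are invented for illustration. The method is what matters: sampling inputs, recording outputs, summarising outcomes statistically, and measuring sensitivity near apparent decision boundaries.

```python
# Illustrative sketch only: all names and thresholds are hypothetical.
import random


def braking_controller(obstacle_distance_m: float, speed_kmh: float) -> float:
    """Stand-in for the opaque system under investigation.

    In a real case this would be the compiled software observed in situ;
    the investigator cannot read its source, only feed it inputs.
    """
    if obstacle_distance_m < 5.0:
        return 1.0  # full braking
    if obstacle_distance_m < speed_kmh / 4:
        return 0.5  # partial braking
    return 0.0      # no braking


def probe(n_samples: int = 10_000, seed: int = 42):
    """Sample random driving situations and record the observed outputs."""
    rng = random.Random(seed)
    observations = []
    for _ in range(n_samples):
        distance = rng.uniform(0.0, 100.0)  # metres to obstacle
        speed = rng.uniform(0.0, 130.0)     # vehicle speed, km/h
        observations.append((distance, speed, braking_controller(distance, speed)))
    return observations


def sensitivity(distance: float, speed: float, eps: float = 0.1) -> float:
    """Crude sensitivity analysis: how much does the output change when
    the obstacle distance is perturbed by a small amount eps?"""
    base = braking_controller(distance, speed)
    perturbed = braking_controller(distance + eps, speed)
    return abs(perturbed - base) / eps


if __name__ == "__main__":
    obs = probe()
    # Statistical analysis of outcomes: in what fraction of sampled
    # situations does the system brake at all?
    braking_rate = sum(1 for _, _, brake in obs if brake > 0) / len(obs)
    print(f"Braking engaged in {braking_rate:.1%} of sampled situations")
    # Inputs near a decision boundary show high sensitivity.
    print(f"Sensitivity near 5 m at 50 km/h: {sensitivity(4.95, 50.0):.1f}")
```

Even this toy example exposes the evidentiary limits discussed below: probing recovers a behavioural model of the system, not the code’s actual logic, let alone the diligence (or lack thereof) of its developers.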

While resorting to different techniques avoids clashes with the right to remain silent, it is crucial to question whether they are sufficient to meet the standard of proof. It can be argued that reverse engineering is not enough to provide an accurate account of the software’s inner workings, at least not beyond any reasonable doubt. Concerning statistical analysis, the caveat is what constitutes acceptable statistical outcomes and what standards apply. In other words, when is statistical evidence sufficient to prove negligence on the developer’s side?

Conclusions

In this response post, we wanted to draw attention to a crucial procedural issue which accompanies substantive criminal law policy. We believe that a priority in this field is to ascertain whether there is a way to prove these crimes without infringing the right to remain silent.

Legislators opting for criminal liability for companies must bear in mind the procedural repercussions of this policy choice. Accompanying substantive criminal law with well-crafted substantive and procedural rules is vital. These rules should balance the state’s powers in issuing warrants for obtaining documentary and electronic evidence, such as the source code of the programmes, with the right to remain silent afforded to these subjects. While techniques that do not require their collaboration should be preferred, it should be made clear whether they are sufficient for the criminal standard of proof.

If evidence from the provider cannot be obtained without affecting this right and if other techniques to investigate the software are not sufficient to meet the standard of proof, we should reconsider the choice of placing criminal liability on companies and its societal consequences.

Self-driving vehicles challenge jurists to question the assumptions and the effects of summoning criminal law to assert liability either of software developers or drivers. The constitutional dimension of this problem signals its delicacy, complexity and, perhaps, the need to reinstate question zero.

Suggested citation

Tatiana Duarte Nicolau and Francesca Palmiotto Ettorre. ‘Companies’ liability for self-driving cars: what about criminal evidence?’ (The Digital Constitutionalist, 2 May 2022).

