Companies are to blame for a self-driving car accident. At least, this is what the recent joint Report – published on 26 January 2022 – of the Law Commission of England and Wales and the Scottish Law Commission on the regulation of automated vehicles (AVs) seems to conclude (hereafter "the Report"). Legal actors on the other side of the Atlantic, though, seem to have a different opinion. Criminal charges have recently been brought against the human drivers of AVs involved in (fatal) traffic accidents, and not against the companies that manufactured them.
In such a complex and inconsistent legal landscape, thus, there are only two things one can conclude with certainty: cars with self-driving features will soon be on the road, and a final answer as to who will bear the criminal liability for their accidents is yet to be found. The aforementioned UK "Report" offered the first thorough and comprehensive response to the criminal liability question. It pointed towards the criminal liability of companies that put an AV on the market without being honest about the vehicle's safety. This "Report" is worth analysing in more detail, as it may well "drive" the discussion on the thorny topic of criminal liability for accidents involving AVs. Before examining the Report's answers, though, it is important to have a clear understanding of the question: what is the technology behind AVs, and how does it affect the allocation of criminal liability for an accident "caused" by them?
1. What are self-driving cars, and who drives them?
There is not a single type of self-driving car. According to the industry, there are six levels of automation (Levels 0 to 5) and – as will become obvious when analysing the "Report" – the allocation of criminal liability may vary across them. In more detail, Level 0 (No automation), Level 1 (Driver assistance – e.g. adaptive cruise control), and Level 2 (Partial automation – where a car can execute dynamic driving tasks like steering or accelerating, but the driver must at all times monitor and be ready to overrule the system) do not fall under the concept of a "self-driving" car. At these levels of automation, the person behind the wheel is the actual driver of the car, required to constantly supervise these support features to maintain safety.
The legal challenges come with the transition from Level 2 to Level 3 (Conditional automation – the driver is not required to monitor the system but must respond to a takeover request), Level 4 (High automation) and Level 5 (Full automation). In the last two, the automated driving features are so sophisticated that no one will be required to take over driving, even if seated in the "driver's seat". It is in Levels 3, 4 and 5 that the real question of "Who is driving a self-driving car?" emerges. Greater autonomy of the AVs means less human control over them and much less certainty as to who is responsible for the harm caused by a vehicle that drives itself.
The scholarly debate (see, for instance, here and here) recognises three possible candidates that could be held criminally liable for the harm materially brought about by a self-driving car: the human behind the wheel, the company/software developers that manufactured the car and, most provocatively, the self-driving car itself.
Needless to say, they all come with their fair share of objections. Most of them stem from the very substance of the Criminal Law itself, i.e. the fact that culpability is the crucial criterion for responsibility in criminal cases. Contrary to civil law cases in which liability for damages could be imposed without proof of fault on anyone’s part, in Criminal Law, one needs something more. There is a need to prove that someone must have done something wrong; someone must have intentionally or knowingly programmed an AV to cause criminal harm (e.g. by remotely hacking the car) or – which is far more common – someone must have been criminally negligent as to the occurrence of the harm.
Usually, with "normal" traffic accidents, it all comes down to the negligence of the human driver. The causation of the relevant harm (e.g. death or injury) is attributed to the human driver if the driver could have foreseen the harm and failed to exercise the due care necessary to avert it. How can one foresee, though, what an autonomous self-driving car which does not need monitoring (Levels 3, 4 and 5 of automation) will do? Would it not be unfair to hold responsible someone who – according to the "Report" – has been told that she does not need to pay attention to the driving task? It will be difficult – to say the least – to prove negligence on the part of the human behind the wheel for what the AV did while its self-driving features were engaged.
Perhaps, it would be more plausible to search for criminal negligence not for what the AV did but for how it was manufactured and the malfunctions that led to the accident. This shifts the criminal liability question from the human behind the wheel to the companies which manufactured the AVs and vouched for their safety. It is the companies that drive the AVs; thus, culpability in any meaningful sense can only be found in their failure to exercise due diligence before putting an AV on the road. This seems like a sensible approach to the problem. However, the well-known objections regarding the fact that corporate criminal liability is not generally recognised in every legal system, as well as the “hindering of innovation” concern “put the brakes” on the unanimous acceptance of this solution.
Finally, the third candidate for the allocation of criminal liability, i.e. the AV itself, is not currently considered a feasible option in any of the regulatory efforts to tackle the criminal liability for accidents in which AVs are involved. So, this very interesting discussion is going to be set aside for the sake of presenting the progress that has been made in the last few months on Criminal Law and AVs in the jurisdictions of the United Kingdom and the United States.
2. Companies as drivers: The comprehensive joint Report of the Law Commission of England and Wales and the Scottish Law Commission
The “Report” that was released on the 26th of January 2022 recommended the adoption of a new “Automated Vehicles Act” and set out new regulatory regimes and legal actors. Plausibly enough, its most important feature is that it establishes a liability shift from the human driver to manufacturers and software developers; basically, human drivers are granted immunity from civil and criminal liability.
In more detail, the new regulatory regime of the Report rests on three recommendations. Firstly, a regulatory regime on AVs should begin with the question: "When is a vehicle self-driving?". In other words, the "Report" proposes writing the test for "self-driving" into law to establish a clear distinction between self-driving and driver assistance features (i.e. a clear distinction between Level 2 and Levels 3, 4 and 5 of automation).
If a vehicle is self-driving according to law, the next question to be asked is: Is it safe? Who must prove the safety of an AV, and what level of trustworthiness would be enough? Here, the "Report" proposes two new safety assurance schemes, one before and one after the AV is put on the road. The pre-deployment safety assurance of the AV is composed of two stages: the AV needs to be approved (like "normal" cars) and then be specifically authorised as having self-driving features. The vehicle manufacturer or software developer – called the Authorised Self-Driving Entity (ASDE) – is the one responsible for putting the AV forward for authorisation. Most importantly, the ASDE has the burden of proving that the AV is safe by providing a safety case, signed by a nominated person who is subject to a "duty of candour" and would face criminal sanctions for failing to exercise due diligence to ensure that the information provided in the safety case is correct and complete. The post-deployment safety of the AV will be ensured by the creation of an in-use regulator, charged with the duties of evaluating the safety of an AV, investigating traffic infractions and imposing regulatory sanctions on the ASDE.
Finally, apart from the ASDE, the “Report” recommends the creation of two further new legal actors. The first one is the “User in Charge”, i.e. the human driver behind the wheel, who will be immune from all offences arising from dynamic driving while the self-driving features of the car are engaged. The second one is the No User in Charge (NUIC) operator who will oversee the AV when there is no User in Charge, i.e. when there is no human driver inside the car. Usually, the NUIC operator will be the same legal actor as the ASDE. That means, in simple terms, that when a company manufactures an AV that does not require a User in Charge, then it has the simultaneous responsibility of establishing an NUIC operator that will remotely oversee the AV and would face regulatory sanctions by the in-use regulator.
It is obvious by now that regulatory sanctions are far more widely applied in the "Report" than criminal ones. Culpability and, hence, criminal liability are to be found only where misrepresentations and non-disclosure of crucial information by the ASDE and the NUIC operators have implications for the safety of AVs. In that case, criminal liability can be allocated to the company itself, to the senior managers of the company (only if they knew about the wrongdoing or wilfully ignored it) and – as previously mentioned – to the nominated person who has the duty of due diligence when signing the safety case.
3. Humans behind the wheel as drivers: US proceedings
Elaine Herzberg was the first pedestrian to be killed by a self-driving car in Tempe, Arizona, in 2018. Two years later, prosecutors decided to file charges for negligent homicide against the backup safety driver behind the wheel of the self-driving Uber that was involved in the accident; and not against Uber itself. The driver pleaded not guilty after the indictment, and the case was scheduled for February 2021; however, the trial is still pending.
What is important here is that not only were the self-driving features of the AV engaged at the time of the accident, but also, when the car's "brain" realised that an emergency stop was needed, it did not alert the safety driver (by issuing a transition demand). The Arizona Uber crash, thus, was not just a case of human error — it was also a failure of technology, yet a human seems to have been scapegoated for it.
It should also be noted that, according to the UK "Report", in a case like this the driver would be immune from criminal liability, since the self-driving features of the car were engaged at the time of the accident and no transition demand was issued by the car; the "User in Charge" never became a driver. Another human driver of an AV was criminally charged a few months ago in the US. Specifically, Los Angeles County prosecutors filed two counts of vehicular manslaughter against the driver of a Tesla which – while Autopilot was engaged – ran a red light, slammed into another car and killed two people in 2019. According to the UK "Report", the allocation of criminal liability in this case would depend on whether this car would fall under the legal definition of a self-driving car; if so, any meaningful search for culpability would lead to Tesla, the company which allowed an unsafe AV to be put on the road.
Conclusions
At the end of the day, where does this discussion lead us? I would say there are three things one could conclude with certainty.
- The fact that Europe was not a part of the previous discussion is telling in itself; the EU is one step behind. Admittedly, there is currently a drafting committee tasked with the elaboration of a legal instrument on AI and Criminal Law with a focus on vehicles and automated driving. This committee had its first meeting on 15-16 November 2021. However, the discussion focused on the legal form that this future instrument should take, not yet on its subject matter. As for the proposed AI Act of 2021, the novel regulatory framework that it produced did not touch on the substantive Criminal Law issues that are of interest here.
- A legal definition of what a self-driving car is and is not will likely be much more important than the current legal literature takes it to be. It will be the gateway to the legal debate on the allocation of criminal liability.
- Out of the three candidates that could be held criminally liable for the harm materially brought about by a self-driving car, i.e. the human behind the wheel, the company/software developers that manufactured the car and the self-driving car itself, only the first two seem to be part of the legal debate – at least for the foreseeable future.
Who, then, should be blamed? The companies (UK approach) or the human drivers (US approach)? This is a huge discussion, and only some introductory (and personal) thoughts can be offered here.
Starting with the pragmatic concerns, it is commonly argued that potential criminal liability of the companies would stifle innovation. However, criminal liability of the drivers would lead to the same result. Consumers would be disincentivised from buying fully autonomous cars; lower demand would lead to less production and – again – to the hindering of innovation. The only way to avoid "stifling" innovation would be to take criminal liability out of the picture. In this scenario, though, one would have to explain to the victims, their families – and to society in general – why a crime brought about by a self-driving car should be left unpunished (even if somebody paid the civil damages arising from it). This is the problem of the "retribution gap" for criminal harms committed by AI.
Finally, moving to doctrinal concerns, the puzzle of self-driving car accidents is that, on the one hand, criminal liability is, arguably, needed but, on the other hand, one cannot allocate it either to humans or to companies without "messing" with Criminal Law doctrine. As for the "humans behind the machine" (i.e. the drivers, producers, manufacturers etc. of an AV), holding them criminally liable for the unforeseeable malfunctions of an AV would go directly against Criminal Law's culpability principle. That is the foundational principle aiming to safeguard that only the morally culpable are convicted; in other words, it would be unfair. As for the companies, many national legal systems (mostly civil law systems) resist recognising the direct criminal liability of corporate entities themselves (as opposed to that of the humans acting on their behalf); the proposal to allocate criminal liability to companies would "mess" with their own criminal law doctrine. At least, though, it would not be unfair.
Suggested citation
Elina Nerantzi, 'There is someone to blame for a self-driving car accident, and it is the companies' (The Digital Constitutionalist, 28 March 2022) <https://digi-con.org/there-is-someone-to-blame-for-a-self-driving-car-accident-and-it-is-the-companies/>

Elina Nerantzi
Elina Nerantzi is a PhD Researcher in Law at the European University Institute (EUI). In her PhD project she tries to find out who (if anyone) should be held criminally liable for a crime committed by an autonomous Artificial Intelligence (AI) system. Elina obtained her Master's degree in Law (Magister Juris) from the University of Oxford and her bachelor's degree from the Law Faculty of the National and Kapodistrian University of Athens.