
Establishing an Ethical Framework for Autonomous Vehicles

Image Source: Best/stock.adobe.com; generated with AI

By Mark Patrick, Mouser Electronics

Published December 30, 2025

Driving can be a monotonous experience. Sometimes, we get into the car and drive with only a rough idea of when we will arrive. However routine the journey might be, we still need to stay alert and attentive throughout, watching for potential accidents, particularly in adverse weather and busy urban areas. We draw on experience gathered over years of driving to judge what might lead to an accident, which situations to watch for, and what actions we might need to take quickly.

Sensing technologies provide the essential data for the vehicle’s autonomous systems to create a virtual, real-time map of its surrounding environment. The map is further enhanced with information gathered from vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. Armed with these vital information sources, autonomous machine-learning neural networks can navigate the journey safely, constantly watching for and acting on potential dangers. A self-driving vehicle is inherently more diligent and reliable than a human driver, who might become distracted or make an error of judgment. However, there are still situations in which an accident, possibly a fatal one, is unavoidable. For carmakers, enabling autonomous vehicles to make decisions in life-or-death scenarios introduces moral and ethical challenges.
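To make the data flow concrete, the sketch below shows one simplified way detections from onboard sensors and V2V/V2I messages might be fused into a single obstacle map. All class names, fields, and values here are invented for illustration and do not represent any production autonomous-driving stack.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Detection:
    x: float          # metres ahead of the ego vehicle
    y: float          # metres left (+) / right (-) of the ego vehicle
    source: str       # "lidar", "camera", "v2v", "v2i", ...
    confidence: float # 0.0 .. 1.0

GRID_RES = 1.0  # one map cell per metre (assumed resolution)

def fuse(detections):
    """Merge detections into grid cells, keeping the highest confidence
    seen for each cell regardless of which source reported it."""
    grid = {}
    for d in detections:
        cell = (round(d.x / GRID_RES), round(d.y / GRID_RES))
        if d.confidence > grid.get(cell, 0.0):
            grid[cell] = d.confidence
    return grid

detections = [
    Detection(12.0, 0.4, "lidar", 0.90),
    Detection(12.2, 0.3, "camera", 0.75),  # same obstacle, same cell
    Detection(40.0, -3.0, "v2v", 0.60),    # reported by another vehicle
]
obstacle_map = fuse(detections)  # two distinct obstacles on the map
```

The point of the sketch is that V2V/V2I reports enter the same map as onboard sensor data, extending the vehicle's awareness beyond its own line of sight.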

As we head toward fully autonomous vehicles on our roads, the ethical and moral dilemmas that autonomous systems will need to deal with confront everyone connected with the automotive industry, from consumers and legislators to insurance providers and vehicle manufacturers. There are some difficult decisions ahead, and these are likely to receive high visibility in the press. Unfortunately, incidents have already occurred with self-driving vehicles where, for example, the sensing algorithms failed to infer an impending accident or to detect another vehicle cutting dangerously close in front.[1] Even with the sophisticated simulation techniques and extensive trials these systems undergo, the likelihood of a wild-card, random, or unpredictable scenario is always present.

The Ethics of Life or Death

The decisions we make in situations where a fatality is unavoidable are extremely complex. These are split-second decisions made more on instinct than through an in-depth analysis of the situation (Figure 1). How do we, as humans (let alone an autonomous vehicle), decide whether to swerve to avoid hitting a pedestrian and instead put multiple lives in jeopardy by hitting a sidewalk café? Would an autonomous system be programmed to recognize how many people are on a bus or in a café?

Figure 1: Autonomous vehicles can identify hazards, but will they make the correct avoidance decisions? (Source: scharfsinn86/stock.adobe.com)

Attitudes toward which action humans might take also vary by region, as an article in Nature on the Moral Machine experiment highlighted.[2] The study aimed to research and develop a set of global, socially acceptable principles for autonomous vehicles faced with these moral and ethical situations. Regional variations in attitudes, if agreed at a national level, could be accommodated by fine-tuning neural network parameters.
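One way such regional tuning might be expressed is as adjustable weights over the predicted harm of each candidate manoeuvre. The categories and weight values below are entirely invented for illustration and do not reflect any real regulation or the study's findings; the sketch only shows how the same candidate actions could rank differently under different regionally agreed parameters.

```python
# Invented penalty weights per unit of expected harm, by harm category.
# "region_a" and "region_b" are hypothetical jurisdictions.
REGION_WEIGHTS = {
    "region_a": {"pedestrian": 10.0, "occupant": 8.0, "property": 1.0},
    "region_b": {"pedestrian": 6.0,  "occupant": 9.0, "property": 1.0},
}

def action_cost(expected_harm, region):
    """Score a candidate manoeuvre: lower cost is preferred."""
    weights = REGION_WEIGHTS[region]
    return sum(weights[cat] * harm for cat, harm in expected_harm.items())

def choose_action(candidates, region):
    """Pick the candidate manoeuvre with the lowest weighted cost."""
    return min(candidates, key=lambda c: action_cost(c["harm"], region))

candidates = [
    {"name": "brake_straight",
     "harm": {"pedestrian": 0.3, "occupant": 0.1, "property": 0.0}},
    {"name": "swerve_left",
     "harm": {"pedestrian": 0.0, "occupant": 0.2, "property": 1.0}},
]
# With these invented weights, the two regions select different manoeuvres
# from identical inputs, which is the "fine-tuning" idea in miniature.
```

A real system would, of course, learn such trade-offs rather than hard-code them, but parameters agreed at a national level could constrain or bias that learning in exactly this fashion.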

In Europe, the European Commission is working toward allowing fully autonomous vehicles on defined stretches of the European road network. The Vehicle General Safety Regulation, which entered into force in July 2022, stipulates a minimum statutory set of advanced driver assistance systems (ADAS) for all road vehicles, in addition to setting technical requirements for Level 4 and Level 5 fully autonomous vehicles.[3] Within the EU, progress on tackling the thorny issue of ethics is fragmented, with nation-states sometimes opting to push ahead by themselves. For example, Germany established an ethical code of conduct for autonomous vehicles in 2017.[4] Prioritizing human life and the avoidance of personal injury over damage to animals or property becomes a key learning requirement for machine-learning algorithms faced with an unavoidable incident. The UK Government has yet to fully engage in the ethical debate, although it has issued guidance for conducting trials of self-driving vehicles on public roads within set criteria.[5]

Who Takes the Blame?

Attributing responsibility for an accident becomes another interesting challenge. When assessing fault among human drivers, insurance companies typically agree on who caused the accident based on claims from the drivers involved and reports from emergency services. Vehicles that meet SAE Level 4 or Level 5 of driving automation do not require any human intervention.[6] In these cases, determining the cause of an accident may implicate the automotive manufacturer, the developer of the algorithms, the sensor manufacturers, or, perhaps, a human driver in another vehicle. A potential gray area exists at Level 3, which stipulates that a human driver must take over if the autonomous systems deem it necessary. Clearly, the nominated driver might have little time to react or may not be capable of doing anything to prevent an accident from occurring.

Perhaps one advantage of investigating an accident involving an autonomous vehicle is access to the vehicle’s data logs. In California, autonomous test vehicles are legally required to provide a complete set of system data for the 30 seconds leading up to a crash.[7] However, whether this requirement becomes the norm for every autonomous vehicle sold around the world is still uncertain.
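Retaining a rolling 30-second window of system data is a natural fit for a fixed-size ring buffer, where the oldest samples fall off automatically as new ones arrive. The sketch below is a minimal, hypothetical event-data recorder in that spirit; the sampling rate, snapshot fields, and freeze trigger are all assumptions for illustration, not anything mandated by the California rule.

```python
from collections import deque

SAMPLE_HZ = 10   # assumed sampling rate
WINDOW_S = 30    # retention window, per the 30-second requirement

class CrashRecorder:
    def __init__(self):
        # deque with maxlen discards the oldest sample on overflow,
        # so the buffer always holds at most the last 30 seconds.
        self.buffer = deque(maxlen=SAMPLE_HZ * WINDOW_S)

    def record(self, timestamp, snapshot):
        """Append one system snapshot (a dict of signals) to the log."""
        self.buffer.append((timestamp, snapshot))

    def freeze(self):
        """On a collision trigger, return the preserved window for
        investigators; a real recorder would write it to durable storage."""
        return list(self.buffer)

rec = CrashRecorder()
for i in range(400):  # simulate 40 s of driving at 10 Hz
    rec.record(i / SAMPLE_HZ, {"speed_mps": 20.0, "brake": i > 395})
log = rec.freeze()  # only the final 30 s (300 samples) survive
```

The design choice worth noting is that nothing outside the window is ever retained, which bounds storage and also limits how much driving history leaves the vehicle after an incident.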

Despite the many challenges ahead, whether technical or moral, the goal of autonomous vehicles remains to make our roads safer for all users, cut congestion, and reduce pollution. It is likely only a matter of time before a socially acceptable moral framework is implemented.

 

Sources

[1]https://www.transportation.gov/sites/dot.gov/files/2024-08/HASS_COE_Understanding_Safety_Challenges_of_Vehicles_Equipped_with_ADS_Aug2024.pdf
[2]https://doi.org/10.1038/d41586-018-07135-0
[3]https://ec.europa.eu/commission/presscorner/detail/en/ip_22_4312
[4]https://www.bmv.de/SharedDocs/EN/publications/report-ethics-commission-automated-and-connected-driving.pdf
[5]https://www.gov.uk/government/publications/trialling-automated-vehicle-technologies-in-public/code-of-practice-automated-vehicle-trialling#safety-driver-and-operator-requirements
[6]https://www.sae.org/news/blog/sae-levels-driving-automation-clarity-refinements
[7]https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=38750&lawCode=VEH

About the Author

Part of Mouser's EMEA team in Europe, Mark joined Mouser Electronics in July 2014, having previously held senior marketing roles at RS Components. Prior to RS, Mark spent eight years at Texas Instruments in applications support and technical sales roles. He holds a first-class honours degree in electronic engineering from Coventry University.
