ADAS Takes the Wheel in Level 3 Vehicles
From Assistance to Autonomy: Level 3 and Beyond
Image Source: temp-64GTX/Stock.adobe.com
By Matt Campbell, Mouser Electronics
Published April 30, 2024
Following guidelines set by the US National Highway Traffic Safety Administration (NHTSA), almost every new vehicle for sale in the United States has at least some form of advanced driver assistance system (ADAS). According to Consumer Reports, over 90 percent of new cars in the United States have adaptive cruise control, and half of new vehicles can control steering and speed simultaneously.[1] However, only one automaker in the US offers a system that meets Level 3 of the SAE Levels of Driving Automation™, which allows drivers to take their eyes off the road while ADAS features are active.
In this article, we'll learn more about this first Level 3-compliant system, the SAE Levels of Driving Automation, and the technology and considerations behind this next step toward fully autonomous vehicles on the road.
Getting Up to Speed
Mercedes-Benz introduced the first SAE Level 3 features for consumers in the US with the DRIVE PILOT option on 2024 S-Class and EQS sedans. While there are currently Level 4 vehicles on the roads in the US, they are commercial ride-hailing vehicles and are not available for purchase. Following in the (tire) tracks of Honda’s 2021 launch of the Level 3-equipped Legend in Japan, Mercedes-Benz will also assume liability for any accidents caused by vehicles while Level 3 features are active. These vehicles are not only at the cutting edge of automotive technology, but they also push the boundaries of regulatory frameworks with an unprecedented handoff of responsibility from the driver to the vehicle. As ADAS technology matures, so too must the legal and regulatory fields.
SAE Levels of Driving Automation
First, let's review what a Level 3 autonomous vehicle is. SAE International established six levels of driving automation, determined by the amount of user intervention required (Figure 1). The chart illustrates the significant jump from Level 2 to Level 3. A Level 2 vehicle can maintain speed and stay in its lane but requires the driver to have their hands on the wheel and continue paying attention. That is, you are still driving. However, at Level 3, you are not driving when the automated features are engaged. This means you can take your hands off the wheel and your attention off the road.

Figure 1: SAE Levels of Driving Automation. (Source: SAE International)
The gap between Levels 2 and 3 has as much to do with legality as it does with technology. Both Level 2 and 3 vehicles can maintain their speed behind another car and stay in their lane. The difference is that if you get into an accident while driving your Level 2 vehicle, you are liable. If you get into an accident while Level 3 features are active, the automaker takes liability. This important distinction means that in the eyes of the law, the driver is not actually "driving" a car under Level 3 automation.
Herein lies the most difficult challenge in moving from Level 2 to Level 3: the human factor. A driver of a Level 2 vehicle, who must pay attention to road conditions while autonomous functions are active, can retake manual control at a moment's notice. A Level 3 driver has no such readiness. Humans are notoriously inefficient at task switching, so a driver engrossed in a book or mobile game needs extra time to get back into "the zone" of driving.
A study from the German Insurers Accident Research placed drivers in a driving simulator that emulated a Level 3 vehicle requesting a driver takeover while automated features were active.[2] The results showed that drivers absorbed in a particularly engaging non-driving task (playing Tetris on the vehicle's infotainment console) took five seconds longer than attentive drivers to get their feet on the pedals, hands on the wheel, and eyes on the road, speedometer, and rearview mirror.
Five seconds may not seem like a long time, but a vehicle at a highway speed of 120km/h travels nearly 167 meters in that time. Automakers must weigh the risks and rewards of Level 3: Relying on a distracted person to quickly switch tasks creates a major vulnerability in the system. For example, test drivers fell asleep at the wheel during early tests of Ford's self-driving cars.[3] Mercedes-Benz's DRIVE PILOT system mitigates the human factor by putting boundaries on what the driver can do while self-driving is active. Cameras monitor the driver to make sure they are awake and facing the road, and sensors detect if the seat reclines; Mercedes-Benz clearly learned from Ford's sleepy test drivers. Drivers can't surf the web on their phones due to US laws, but the vehicle's infotainment system offers web browsing, apps, and games.
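To put that figure in perspective, here is a minimal Python sketch of the arithmetic (the function name and values are ours, for illustration):

```python
def takeover_distance(speed_kmh: float, delay_s: float) -> float:
    """Distance in meters covered during a driver-takeover delay."""
    return speed_kmh * 1000 / 3600 * delay_s  # convert km/h to m/s, then multiply by time

# A distracted driver needing 5 extra seconds at highway speed:
print(f"{takeover_distance(120, 5):.0f} m")  # ~167 m
```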
The Technology That Led to Level 3
Autonomous vehicles rely primarily on three technologies to see their surroundings: radar, lidar, and cameras.
Radar
Radar has an established history in the automotive world, with radar-based blind-spot monitors rolling out in the mid-2000s. Radar systems measure the distance and velocity of objects by emitting radio pulses and measuring the reflected pulses. Today, automotive radars operate in the 77GHz frequency band, offering higher resolution and requiring smaller antennas than legacy 24GHz automotive radars. Automotive radars are used in collision detection, adaptive cruise control, and proximity monitoring. Radar has the advantage of operating in conditions where even human drivers struggle to see, such as rain and fog.
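As a rough sketch of the measurement principle described above, the following Python snippet computes range from a pulse's round-trip time and radial velocity from the Doppler shift of a 77GHz carrier. Production automotive radars typically use FMCW waveforms rather than simple pulses, so treat this as conceptual:

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range(round_trip_s: float) -> float:
    """Target range from a pulse's round-trip time (the pulse travels out and back)."""
    return C * round_trip_s / 2

def radar_velocity(doppler_hz: float, carrier_hz: float = 77e9) -> float:
    """Radial velocity from the two-way Doppler shift of a 77GHz radar."""
    return doppler_hz * C / (2 * carrier_hz)

# A reflection arriving 0.5 microseconds later comes from ~75 m away:
print(f"{radar_range(0.5e-6):.1f} m")
# A +5 kHz Doppler shift means the target is closing at ~9.7 m/s (~35 km/h):
print(f"{radar_velocity(5e3):.1f} m/s")
```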
Lidar
Lidar operates in a similar manner to radar, except it uses light pulses in the near-infrared spectrum. Automotive lidar systems emit wavelengths of 905nm or 1550nm, enabling exceptional resolution compared to millimeter-wavelength 77GHz radar. The wide field of view and high resolution of lidar let autonomous vehicles create a 3D map of their surroundings to track other vehicles, lane markers, pedestrians, and objects on or near roads. The tradeoff is cost: Lidar is more expensive than radar and camera systems, and the large spinning units covering early autonomous vehicle prototypes cost tens of thousands of dollars each. Demand from the automotive industry has led to purpose-built lidar systems that are more compact and affordable, bringing the cost per unit down by a factor of ten or more compared to those early systems.
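Conceptually, each lidar return is a range measurement at a known azimuth and elevation, which the vehicle converts into a 3D point to build that map. A minimal sketch, assuming a common vehicle coordinate convention (x forward, y left, z up):

```python
import math

def lidar_point(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one lidar return (range, azimuth, elevation) to an (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return x, y, z

# A return 20 m out, 10 degrees to the left, slightly below the sensor:
print(lidar_point(20.0, 10.0, -2.0))
```

Repeating this for hundreds of thousands of returns per second yields the point cloud from which the vehicle tracks objects and lane boundaries.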
Cameras
The first backup camera debuted in 1956 on the Buick Centurion concept car.[4] More recently, advances in digital cameras for mobile applications have given automakers plenty of options for cameras that offer exceptional performance in a compact size. Driver-assistance cameras have been standard since 2018, when the NHTSA mandated backup cameras on vehicles sold in the US; Canada and the EU followed with similar mandates. Today's cars have blind-spot cameras and 360-degree-view cameras, which both the driver and the vehicle can use to see what's around the vehicle. Cameras offer even higher resolution than lidar and can quickly identify a wide variety of objects when paired with real-time image processing.
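As a simple illustration of camera-based object detection, the sketch below uses OpenCV's classic HOG person detector to box pedestrians in a frame. Production systems rely on far more capable deep-learning models, and the input filename here is hypothetical:

```python
import cv2

# OpenCV's built-in HOG person detector: a classic, pre-deep-learning
# example of identifying objects in a camera frame.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("road_scene.jpg")  # hypothetical input image
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("road_scene_annotated.jpg", frame)
```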
Radar, lidar, and cameras each have unique strengths and weaknesses. Automakers must analyze the tradeoffs of each system to find a solution that meets their requirements and budget. Table 1 summarizes the key pros and cons of each sensor type.
Table 1: Comparing automotive vision sensor types.
Sensor Type | Advantages | Disadvantages | Uses
---|---|---|---
Radar | Operates in rain, fog, and darkness; mature, low-cost technology | Lower resolution than lidar and cameras | Collision detection, adaptive cruise control, blind-spot and proximity monitoring
Lidar | High resolution and wide field of view; builds 3D maps of surroundings | Most expensive of the three sensor types | 3D mapping; tracking vehicles, lane markers, pedestrians, and roadside objects
Cameras | Highest resolution; compact and inexpensive; identifies a wide variety of objects | Struggles in poor visibility; requires intensive real-time image processing | Backup, blind-spot, and 360-degree views; object identification
Many automakers rely on multiple complementary sensors to create a robust automotive vision system that performs in various conditions. Mercedes-Benz's DRIVE PILOT system uses cameras, lidar, radar, and ultrasonic sensors to build a comprehensive picture of the vehicle's surroundings.
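One simple way to combine overlapping sensors is inverse-variance weighting, where each sensor's estimate counts in proportion to its confidence. This is a toy sketch, not Mercedes-Benz's actual fusion algorithm (production systems typically use Kalman-style filters), and the numbers are invented for illustration:

```python
def fuse_estimates(measurements: list[tuple[float, float]]) -> tuple[float, float]:
    """Inverse-variance weighted fusion of independent estimates.

    Each measurement is (value, variance). Noisier sensors get less weight,
    and the fused variance is lower than any single sensor's.
    """
    total_weight = sum(1 / var for _, var in measurements)
    fused = sum(val / var for val, var in measurements) / total_weight
    return fused, 1 / total_weight

# Hypothetical range to the car ahead: radar says 50.2 m (variance 0.25),
# lidar says 49.9 m (variance 0.04), camera says 51.0 m (variance 1.0).
print(fuse_estimates([(50.2, 0.25), (49.9, 0.04), (51.0, 1.0)]))
```

Here the fused estimate lands close to the lidar reading, the most precise sensor, while still incorporating the others.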
Are the Roads Ready?
We've written before about the challenges unpredictable road conditions create for autonomous vehicles. Even the most advanced autonomous system can't follow lane markers that are too worn or faded (Figure 2). As drivers, we sometimes make judgment calls on where and how to drive when lane markings are worn away or when weather conditions inhibit visibility. Construction zones may employ temporary lane markers to create new traffic patterns, and emergency responders sometimes direct traffic using hand signals that override road signs and traffic lights. Sometimes, we might need to avoid a pothole or debris in the road by using an adjacent lane.

Figure 2: Difficult road conditions are even more challenging for autonomous vehicles than they are for us. (Source: Steven/stock.adobe.com)
All these situations require drivers to quickly assess the situation and potentially "break the rules" of established lane markings. The advantage of Level 2 and 3 systems is that the driver can take over if the vehicle does not understand the situation. But Level 4 vehicles won’t have a driver to rely on when operating in autonomous mode. Current Level 4 driverless taxis perform well within their geofenced neighborhoods in ideal conditions; however, construction zones sometimes confuse them, as seen in the news coverage of empty cars driving themselves into construction sites.
The engineering challenge for autonomous vehicles is not in getting them to cruise on the road; programming a robot to follow a line is trivial by today’s standards. The real challenge is preparing the vehicles to navigate unique situations that they have not explicitly been trained for. This is where machine learning becomes critical. Scenarios recorded by vehicles already on the roads give automakers plenty of material to refine their algorithms. Automakers can then complete the feedback loop by pushing over-the-air updates to their fleets so the vehicles can learn from each other’s experiences. As the vehicles encounter more nontraditional driving situations like construction and emergency vehicles, they will have a deeper well of experience to draw from for future unique situations.
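As a sense of how low that line-following bar is, here is a minimal proportional lane-keeping sketch; the gains and sign conventions are illustrative assumptions, not any automaker's tuning:

```python
def steering_command(lane_offset_m: float, heading_error_rad: float,
                     k_offset: float = 0.8, k_heading: float = 1.5) -> float:
    """Proportional lane keeping: steer back toward the lane center.

    Positive offset/heading means the car sits or points right of center,
    so the command steers left. Real controllers add damping, speed
    scheduling, and limits; staying between clear markers is the easy part.
    """
    return -(k_offset * lane_offset_m + k_heading * heading_error_rad)

# Drifted 0.3 m right of center while pointed slightly right of the lane:
print(f"steer {steering_command(0.3, 0.05):+.2f} rad")  # small left correction
```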
Computer vision enables vehicle cameras to identify other vehicles, pedestrians, road signs, trees, animals, and the countless other objects cars pass. Highly trained computer vision algorithms balance the limited processing power available on vehicles with the need for real-time inferences. This is once again an area where vehicles currently on the road can create a feedback loop of real-world data to refine algorithms.
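One way to frame that balance is as a per-frame time budget: a 30fps camera leaves roughly 33 milliseconds for the entire vision pipeline. A toy Python sketch of the bookkeeping, where the sleep stands in for hypothetical inference work:

```python
import time

FRAME_BUDGET_S = 1 / 30  # a 30 fps camera allows ~33 ms per frame

def process_frame(frame):
    """Placeholder for a detection/segmentation pipeline."""
    time.sleep(0.02)  # stand-in for ~20 ms of inference

start = time.perf_counter()
process_frame(None)
elapsed = time.perf_counter() - start
print(f"frame took {elapsed * 1000:.1f} ms "
      f"({'within' if elapsed <= FRAME_BUDGET_S else 'over'} the 33 ms budget)")
```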
Technological Marvels, Legal Complexities
Level 3 vehicles are just as complex from a legal perspective as they are from a technological perspective. By taking responsibility for accidents while the DRIVE PILOT system is active, Mercedes-Benz follows the convention set in the SAE standards: At Level 3 and above, the person in the driver’s seat is not considered the driver. The handoff of liability back and forth between the driver and the vehicle is unprecedented in the US. Regulatory bodies such as the NHTSA keep a close eye on autonomous technology developments to inform up-to-date regulations. Automotive engineers should understand the regulatory landscape and standards that govern their work.
Autonomous vehicles create interesting legal edge cases. For example, Tesla offers a feature called Smart Summon, which lets the car drive itself from its parking spot to the user. In the eyes of the law, the user is still liable for any damage the car causes, despite not actually being in it.
The NHTSA issued a standing order in 2021 requiring that any crash involving a vehicle with active Level 2 ADAS features be reported for investigation if airbags were deployed, someone was injured, or a pedestrian or cyclist was involved. Although crashes involving autonomous vehicles make up a small portion of overall collisions, these stories become lightning rods for discussions of autonomous features. The debate around safety and liability that follows each accident reflects the public's interest in clearly defined roles and responsibilities for automakers and drivers.
Understanding driver and vehicle responsibilities is also important for Level 2-enabled vehicles, where drivers may become complacent and act as if they're in a Level 3 or 4 vehicle. Recent viral videos feature drivers reading, sleeping, eating, and more while their Level 2 cars cruise on the highway. Automakers must make the limitations of their autonomous features clear to drivers, and drivers must understand that they remain ultimately responsible for the vehicle. DRIVE PILOT customers must watch an instructional video before they are allowed to use the feature, and similar mandatory instruction will likely become more common as more automakers assume liability for their Level 3 and eventual Level 4 features.
Where Next?
Level 3 autonomous vehicles are an important milestone on the journey to making roads safer and more efficient. While this cutting-edge technology serves limited areas for now, Level 1 and 2 ADAS features are becoming more prevalent globally. Smarter vehicles that take a more active role on the road push both technological and legal boundaries. The road to Level 4 consumer vehicles will involve refining what we learn from Level 3 and commercial Level 4 vehicles, as well as developing a robust regulatory framework.
Sources
1. Jeff S. Bartlett, "How Much Automation Does Your Car Really Have?," Consumer Reports, November 4, 2021, https://www.consumerreports.org/cars/automotive-technology/how-much-automation-does-your-car-really-have-level-2-a3543419955/.
2. German Insurance Association, "Takeover times in highly automated driving," July 2016, https://www.udv.de/resource/blob/74800/da7cc0636400bc5c4730ba0fd3c443b0/57-e-uebernahmezeiten-beim-hochautomatisieren-fahren-data.pdf.
3. Michael Cantu, "Ford Engineers Can't Stay Awake in Driverless Cars," MotorTrend, February 20, 2017, https://www.motortrend.com/news/ford-engineers-cant-stay-awake-in-driverless-cars/.
4. Jeff Peek, "Rearview Cameras Have Been around Longer than You Think," Hagerty Media, May 3, 2023, https://www.hagerty.com/media/automotive-history/looking-back-rearview-cameras-have-been-around-longer-than-you-think/.