What does ‘autonomous driving’ mean?
An autonomous car is a vehicle capable of sensing its environment and operating without human involvement. A human passenger is not required to take control of the vehicle at any time, nor is a human passenger required to be present in the vehicle at all. An autonomous car can go anywhere a traditional car goes and do everything that an experienced human driver does.
Levels of Driving Automation
- Level 0 (no automation): Manual control. The human performs all the driving tasks (steering, acceleration, braking, etc.).
- Level 1 (driver assistance): The vehicle features a single automated system (for example, monitoring speed through cruise control).
- Level 2 (partial automation): The vehicle can perform steering and acceleration. The human still monitors all tasks and can take control at any time.
- Level 3 (conditional automation): The vehicle has environmental detection capabilities and can perform most driving tasks, but human override is still required.
- Level 4 (high automation): The vehicle performs all driving tasks under specific circumstances. Geofencing is required. Human override is still an option.
- Level 5 (full automation): The vehicle performs all driving tasks under all conditions. Zero human attention or interaction is required.
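The six levels above can be summarized as a simple lookup table. This is only an illustrative sketch; the function name and the one-line summaries are paraphrases, not official SAE J3016 wording.

```python
# SAE driving-automation levels as a simple lookup table (paraphrased).
SAE_LEVELS = {
    0: "No automation: the human performs all driving tasks",
    1: "Driver assistance: a single automated system (e.g. cruise control)",
    2: "Partial automation: steering and acceleration; human monitors",
    3: "Conditional automation: most tasks, but human override required",
    4: "High automation: all tasks within a geofenced operating area",
    5: "Full automation: all tasks under all conditions",
}

def requires_human_attention(level: int) -> bool:
    """At Levels 0-2 the human must monitor continuously; at Levels 3-4
    they must remain available to take over; only Level 5 needs no
    human attention at all."""
    return level < 5

print(requires_human_attention(2))  # True
print(requires_human_attention(5))  # False
```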
Autonomous vs Automated vs Self-Driving
The SAE (Society of Automotive Engineers) uses the term automated instead of autonomous. One reason is that the word autonomy has implications beyond the electromechanical. A fully autonomous car would be self-aware and capable of making its own choices. For example, you say “drive me to work” but the car decides to take you to the beach instead. A fully automated car, however, would follow orders and then drive itself.
The term self-driving is often used interchangeably with autonomous. However, it’s a slightly different thing. A self-driving car can drive itself in some or even all situations, but a human passenger must always be present and ready to take control. Self-driving cars would fall under Level 3 (conditional driving automation) or Level 4 (high driving automation). They are subject to geofencing, unlike a fully autonomous Level 5 car that could go anywhere.
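Since Level 4 operation depends on geofencing, a minimal membership test makes the idea concrete. This is a toy sketch using a standard ray-casting point-in-polygon check on flat (x, y) coordinates; the service area is hypothetical, and real systems work with geodetic coordinates and detailed maps.

```python
# Toy geofence check: the operating area is a polygon of (x, y) points,
# and a ray-casting test decides whether the vehicle is inside it.
def inside_geofence(point, polygon):
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross the edge (i, i+1)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

service_area = [(0, 0), (10, 0), (10, 10), (0, 10)]  # hypothetical zone
print(inside_geofence((5, 5), service_area))   # True
print(inside_geofence((15, 5), service_area))  # False
```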
Many interacting systems help a self-driving car control itself, and several still need improvement: navigation and localization, the electronic map and map matching, global path planning, environment perception (laser, radar, and visual), and vehicle control, including the perception of vehicle speed and direction.
The challenge for driverless car designers is to produce control systems capable of analyzing sensory data in order to provide accurate detection of other vehicles and the road ahead. Modern self-driving cars generally use Bayesian simultaneous localization and mapping (SLAM) algorithms, which fuse data from multiple sensors and an offline map into current location estimates and map updates. Waymo has developed a variant of SLAM with detection and tracking of other moving objects (DATMO), which also handles obstacles such as cars and pedestrians. Simpler systems may use roadside real-time locating system (RTLS) technologies to aid localization. Typical sensors include lidar, stereo vision, GPS, and IMU. Control systems on automated cars may use sensor fusion, an approach that integrates information from a variety of sensors on the car to produce a more consistent, accurate, and useful view of the environment. Heavy rainfall, hail, or snow can impede the car's sensors.
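As a toy illustration of the sensor-fusion idea (not Waymo's or any production system), a one-dimensional Kalman-filter update fuses a predicted position with a noisy measurement, weighting each by its variance; the numbers below are made up for the example.

```python
# Toy 1-D Kalman filter update: fuse a predicted position estimate with a
# noisy sensor reading, weighting each source by its variance. Real AV
# stacks fuse lidar, radar, cameras, GPS, and IMU data in much
# higher-dimensional state spaces.
def kalman_update(est, est_var, meas, meas_var):
    gain = est_var / (est_var + meas_var)   # trust the less-noisy source more
    fused = est + gain * (meas - est)       # fused position estimate
    fused_var = (1 - gain) * est_var        # uncertainty shrinks after fusion
    return fused, fused_var

# Odometry predicted 12.0 m (variance 1.0); GPS measured 10.0 m (variance 4.0)
pos, var = kalman_update(12.0, 1.0, 10.0, 4.0)
print(round(pos, 2), round(var, 2))  # 11.6 0.8
```

Because the GPS reading is noisier (variance 4.0) than the prediction (variance 1.0), the fused estimate moves only slightly toward it, and the combined uncertainty is lower than either input's.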
Driverless vehicles require some form of machine vision for the purpose of visual object recognition. Automated cars are being developed with deep neural networks, a type of deep learning architecture with many computational stages, or layers, in which simulated neurons activate in response to input from the environment. The neural network depends on an extensive amount of data extracted from real-life driving scenarios, enabling it to "learn" how to execute the best course of action.
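A rough sketch of the forward pass such a network performs is shown below. This is not any production perception model: the network is tiny (two inputs, two hidden neurons, one output) and all weights are arbitrary placeholders rather than trained values.

```python
import math

# Minimal forward pass of a tiny fully connected network. A real
# perception network is a deep convolutional model trained on millions
# of labeled driving images; the weights here are arbitrary placeholders.
def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(features, w_hidden, w_out):
    # Each hidden neuron computes a weighted sum of the inputs, then ReLU.
    hidden = [relu(sum(w * f for w, f in zip(ws, features))) for ws in w_hidden]
    # The output neuron squashes its weighted sum into a 0-1 score.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

w_hidden = [[0.5, -0.2], [0.1, 0.9]]   # placeholder weights
w_out = [1.2, -0.7]
score = forward([0.8, 0.3], w_hidden, w_out)
print(0.0 <= score <= 1.0)  # True
```

Training consists of adjusting those weights so the network's outputs match labeled examples; the layered weighted-sum-plus-activation structure is what the paragraph above describes.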
Human Factor Challenges
Self-driving cars are already exploring the difficulties of determining the intentions of pedestrians, bicyclists, and animals, and models of behavior must be programmed into driving algorithms. Human road users also have the challenge of determining the intentions of autonomous vehicles, where there is no driver with which to make eye contact or exchange hand signals. Drive.ai is testing a solution to this problem that involves LED signs mounted on the outside of the vehicle, announcing status such as “going now, don’t cross” vs. “waiting for you to cross”.
Two human-factor challenges are important for safety. One is the handoff from automated driving to manual driving, which may become necessary due to unfavorable or unusual road conditions, or if the vehicle has limited capabilities. A sudden handoff could leave a human driver dangerously unprepared in the moment. In the long term, humans who have less practice at driving might have a lower skill level and thus be more dangerous in manual mode. The second challenge is known as risk compensation: as a system is perceived to be safer, people trade away some of the increased safety by engaging in riskier behavior. Semi-automated cars have been shown to suffer from this problem, for example with users of Tesla Autopilot ignoring the road or using electronic devices, despite the company's warnings that the car is not capable of driving completely autonomously. In the near future, pedestrians and bicyclists may travel in the street in a riskier fashion if they believe self-driving cars are capable of avoiding them.
In order for people to buy self-driving cars and vote for the government to allow them on roads, the technology must be trusted as safe. Self-driving elevators were invented in 1900, but the high number of people refusing to use them slowed adoption for several decades until operator strikes increased demand and trust was built with advertising and features like the emergency stop button.
According to a 2020 study, self-driving cars will increase productivity and housing affordability, as well as reclaim land used for parking. However, the same study predicts that they will cause greater energy use, traffic congestion, and sprawl.
According to a 2020 Annual Review of Public Health review of the literature, self-driving cars "could increase some health risks (such as air pollution, noise, and sedentarism); however, if properly regulated, AVs will likely reduce morbidity and mortality from motor vehicle crashes and may help reshape cities to promote healthy urban environments."