The rules of the road today all revolve around one key element: drivers. Licensing, insurance, traffic laws — everything assumes vehicles are operated under the control of a human.
For driverless vehicles, this presents a dilemma: How can you tell which car is at fault in an accident? Should we license and insure owners or manufacturers or the cars themselves? More importantly: How can self-driving and human-driven cars co-exist safely?
Before society will welcome autonomous cars en masse, we must answer those questions — and others — with certainty. People have expressed apprehension about self-driving vehicles and are unlikely to accept them unless it is clear that they are substantially safer than human-driven vehicles. We’ve already seen incidents involving current driver-assistance technology in which fault remained unclear through months-long investigations, feeding consumer wariness.
This issue will become more acute as vehicles take on more of the driving task. Although crashes caused by human error kill more than one million people annually, it may take only a few fatal crashes of a fully autonomous vehicle, where fault is uncertain, to meaningfully delay or forever foreclose the tremendous life-saving potential of this technology.
Governments around the world are recognizing the need to tackle these issues, and the U.S. has been proactive, with self-driving vehicle legislation pending and new Department of Transportation Automated Vehicle Guidelines. Industry can be an important partner.
An important next step is to collaboratively construct industry standards that definitively assign fault when collisions between self-driving and human-driven vehicles inevitably occur, and thereby prove the safety of driverless vehicles. Clear standards of blame are critical because the autonomous vehicle’s decision-making software (its driving policy) can then be programmed to comply with those agreed-upon standards.
In this scenario, the self-driving vehicle could not cause an accident for which its system would be at fault. We’ve proposed a model we call Responsibility Sensitive Safety (RSS), which offers a safe and scalable approach to consider.
RSS is a formal, mathematical model for ensuring that a self-driving vehicle operates responsibly. It provides specific, measurable parameters for the human concepts of responsibility and caution, and it defines a “safe state” in which the autonomous vehicle cannot cause an accident, no matter what actions other vehicles take.
The ability to assign fault is the key. Just like the best human drivers in the world, self-driving cars cannot avoid accidents caused by actions beyond their control. But the most responsible, aware and cautious driver is very unlikely to cause an accident through his or her own fault, particularly one with the 360-degree vision and lightning-fast reaction times that autonomous vehicles will have. The RSS model formalizes these notions of responsibility and caution in a way that keeps self-driving vehicles from ever being in danger of violating them.
We’ll use the common rear-end collision to illustrate how this works. When two cars are traveling in the same lane, one behind the other, and the rear car crashes into the front car, the driver of the rear car is deemed to be at fault. Often this is because the rear car did not maintain a safe following distance and was unable to stop in time when the lead car braked suddenly.
If the rear vehicle were a self-driving car employing the RSS model, this accident would never happen. Using software that evaluates every action against a comprehensive set of driving scenarios and rules of responsibility, the driverless car would continuously calculate the safe following distance it must maintain to remain in a safe state.
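To make the idea concrete, here is a minimal sketch of that calculation in Python. It follows the published RSS rule for safe longitudinal distance; the parameter names and example values are illustrative assumptions, not production settings.

```python
# Illustrative sketch of an RSS-style safe following distance check.
# The formula mirrors the published RSS longitudinal-safety rule; the
# parameter names and example values below are assumptions for illustration.

def rss_safe_following_distance(v_rear, v_front, response_time,
                                a_accel_max, a_brake_min, a_brake_max):
    """Minimum longitudinal gap (meters) the rear vehicle must keep so that,
    even if the front vehicle brakes at its maximum rate, the rear vehicle
    can still stop in time once its response time has elapsed."""
    # Distance the rear car covers while still reacting (and possibly
    # accelerating), plus the distance it needs to brake to a stop afterward.
    v_after_response = v_rear + response_time * a_accel_max
    rear_stop = (v_rear * response_time
                 + 0.5 * a_accel_max * response_time ** 2
                 + v_after_response ** 2 / (2 * a_brake_min))
    # Distance the front car covers while braking as hard as it can.
    front_stop = v_front ** 2 / (2 * a_brake_max)
    return max(0.0, rear_stop - front_stop)


# Example: both cars at 25 m/s (about 90 km/h); assumed response and braking limits.
gap = rss_safe_following_distance(v_rear=25.0, v_front=25.0,
                                  response_time=0.5,   # seconds
                                  a_accel_max=2.0,     # m/s^2, rear's worst-case acceleration
                                  a_brake_min=4.0,     # m/s^2, rear's guaranteed braking
                                  a_brake_max=8.0)     # m/s^2, front's hardest braking
print(f"Keep at least {gap:.1f} m of following distance")
```

Note how the required gap grows with the rear vehicle’s speed and shrinks with the front vehicle’s assumed braking capability, which matches the intuition that the following car bears the responsibility for leaving enough room to stop.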
With a model like RSS, a self-driving vehicle’s system of sensors will collect and maintain definitive data on all activity involving the vehicle at all times; think of it as the “black box” in an airplane cockpit.
This vital data can be used to rapidly and conclusively determine responsibility for any incident involving an autonomous vehicle, but only if there are clear definitions of fault against which to compare it. Such a model for safety could be formalized by industry standards organizations — and ultimately regulatory bodies — to establish clear definitions of fault, which could then be translated into insurance policies and traffic laws.
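As a purely hypothetical illustration, the sketch below shows what a per-frame “black box” record and a fault check against an agreed safe-distance rule might look like; the field names and the rule itself are assumptions, not a specification.

```python
# Hypothetical sketch of a logged driving record and a fault check against
# an agreed-upon safe-distance standard. All field names are illustrative.
from dataclasses import dataclass
from typing import Iterable

@dataclass
class LoggedFrame:
    timestamp: float      # seconds since start of drive
    ego_speed: float      # m/s, the autonomous (rear) vehicle
    lead_speed: float     # m/s, the vehicle ahead
    actual_gap: float     # measured longitudinal gap in meters
    required_gap: float   # safe gap computed by the driving policy

def ego_violated_safe_distance(frames: Iterable[LoggedFrame]) -> bool:
    """Return True if the log shows the ego vehicle ever closed the gap
    below the agreed safe distance, i.e. it would bear the blame."""
    return any(f.actual_gap < f.required_gap for f in frames)
```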
There is little argument that machines will be better drivers than humans. Yet there is a very real risk that self-driving vehicles will never realize their life-saving potential if we can’t agree on standards for safety. We believe self-driving vehicles can and should be held to a standard of operational safety that is substantially better than what we humans exhibit today. And the time to develop those standards is now.
Amnon Shashua is CEO and CTO of Mobileye, an Intel company launched in 1999 with a focus on making roads safer, reducing traffic congestion and saving lives. Shashua holds the Sachs Chair in computer science at the Hebrew University of Jerusalem and is also a senior vice president at Intel Corporation.
Shai Shalev-Shwartz is vice president of technology at Mobileye and an associate professor in the Rachel and Selim Benin School of Computer Science and Engineering at the Hebrew University of Jerusalem. He previously worked as a research assistant professor at the Toyota Technological Institute in Chicago, as well as at Google and IBM Research.