Potential liability ramifications of self-driving cars

August 31, 2016

[Photo caption: A self-driving car by Google is displayed at the Viva Technology event in Paris.]

Until recently, the self-driving car was a dream reserved for science fiction. But as technology progresses and this dream becomes a reality, questions of safety and liability must be answered to give guidance to users and manufacturers.

The traditional system for compensating those injured in motor vehicle accidents must adapt to these changes in technology or risk stymieing its progress.

Historical background

The dream of a self-driving car started almost as soon as the first cars hit the road. In the 1930s, the idea was proposed to create an automatic highway system that would allow cars to operate while the driver relaxed. By the 1950s, the ability to self-drive on automated highways was being tested on ordinary General Motors cars.

Congress recognized the importance of this technology and, in 1991, passed a bill directing the Department of Transportation to develop an automated vehicle and highway system by 1997.

Although Congress's goal was not met by that deadline, the quest for a self-driving car continued. Instead of creating a new highway system, efforts shifted to creating a car that could operate autonomously on existing roads.

The problem innovators faced was creating artificial intelligence able to process information and make the kinds of decisions drivers face on the road. Just as Congress had seen the appeal of self-driving vehicles, entrepreneurs began investing in broader applications of the technology in hopes of eventually reaching the consumer market.

Private research and testing of autonomous passenger vehicles started to ramp up in the 2010s. Google's first self-driving car, in 2012, was able to sense where it was using mapping technology, recognize each type of feature it encountered, such as another car or a human, and adjust its behavior accordingly.

This breakthrough allowed the car to “process” information in the same way a driver does, but without the chance for human error. Google’s cars drove more than 1.4 million miles before being involved in an accident that was the driverless car’s fault.

Automobile manufacturers sought to enter the market too. In 2013, a Mercedes S-class drove completely autonomously for 100 kilometers in Germany. In 2014, Tesla announced a car that was able to autonomously steer, brake and park. In 2015, a car designed by Delphi Automotive became the first automated vehicle to drive from coast to coast in the United States. The first wave of autonomous vehicles is expected to enter the market for consumers to purchase by 2020.

Human control

While not fully driverless, regular cars have actually had some of these features for years, such as adaptive cruise control, automatic braking, lane keeping assist and blind spot assistance. This progression towards decreased human control and increased vehicle responsibility has led to some disagreement among manufacturers.

Google believes the safest approach is no human involvement at all. Google has developed a self-driving pod car without a steering wheel or pedals. The idea is that human error causes the overwhelming majority of car accidents, so removing the human element entirely will be safer.

Tesla has taken the opposite approach. The driver must stay fully engaged even when the car has taken over control. When Tesla launched its AutoPilot system, it stressed that the feature was not fully autonomous and the driver still must be in control and responsible for the vehicle.

For example, during the Tesla car's test trip across the United States, there was a moment when the vehicle was traveling fast on a curving road. Had the driver not taken control, the car would have gone off the road.

In Tesla vehicles, the driver must touch the wheel every few seconds, otherwise the car will beep and eventually come to a stop. In fact, failing to periodically place hands on the wheel violates the terms drivers agree to when enabling the feature. Mercedes’s Intelligent Drive System takes it further, requiring hands on the wheel at all times.

But even with these policies in place, there will inevitably be accidents. In May 2016, a driver using Tesla's Autopilot was killed in the first known fatality involving the system.

The driver of a Tesla Model S drove into the trailer of a semi-truck on a highway. The car's sensors apparently failed to distinguish the white side of the trailer against the bright sky. The driver was allegedly watching a movie when the crash occurred.

Under Tesla’s approach to self-driving cars, the driver should have been able to override the system when it became clear that the vehicle was not going to stop before colliding with the trailer.

Even though Tesla characterized the accident as a statistical inevitability and noted that conventional cars produce fatalities more frequently per mile driven, the question remains: who is responsible when these systems inevitably fail?

Legal framework

Currently there is no special framework for assessing liability with self-driving cars. Because new law is slow to develop as cases work their way through the trial and appellate courts, self-driving cars must try to fit into the existing system for the time being.

With simple car accidents, drivers and vehicle owners are held liable for the accidents they cause. Insurers are behind the scenes paying the claims.

A driver has a duty to safely operate his or her vehicle. If the driver breaches this duty, he or she will be held liable for the damage caused. But with self-driving cars, numerous other parties will be thrown into the mix, including computer programmers, mapping companies, and automobile manufacturers.

For example, a driver may be liable for improperly using a vehicle’s features, a manufacturer may be liable for failure to warn, and a mapping company may be liable for providing incorrect roadway data. Assessing who is liable in that scenario causes problems not currently involved in simple car accidents.

One problem is that assigning blame to anyone other than drivers and owners can convert the claim from one of simple negligence to product liability. Strict product liability would make it easier to hold the manufacturer liable because the plaintiff would not need to prove that the manufacturer was negligent. Unfortunately, a product liability case is also expensive because of its complicated nature.

Under Federal Rule of Evidence 702, an expert may give an opinion based on scientific, technical, or other specialized knowledge to help the trier of fact determine a fact in issue.

Utilizing expert testimony is costly, so only the most catastrophic cases would be worth bringing. Ultimately this system would deny access to the civil justice system for smaller cases where the costs of litigating exceed the damages.

Even without the cost considerations of potential product liability claims, the additional defendants will make cases more complex. Assigning blame becomes particularly challenging with cars that are not fully automated.

For example, in a Tesla vehicle, depending on the state's comparative-fault rules, juries will be forced to decide what percentage of fault to assign to a driver who fails to keep his or her hands on the wheel, the designer of the computer program that failed, and the manufacturer of the vehicle itself.

In the few remaining states that maintain contributory negligence, the distinction of fault is even more important. Any action or inaction the driver may have taken that contributed to the accident could potentially bar him or her from recovery.

In a car that is able to operate completely on its own, it is very likely drivers will become distracted and unable to refocus to override the system on the rare occasion it becomes necessary.

Weighing the evidence in these cases becomes even more crucial as the car will be able to provide its own “testimony.” In simple negligence cases, the drivers present their story and juries can determine the credibility of each witness. But juries may be inclined to believe the computer data more than human memory, putting the drivers involved in a difficult situation.

Plaintiffs would not be able to attack the credibility of the car in the same way they could attack a human witness's under Federal Rule of Evidence 608, for example.

There is also the issue of spoliation. Federal Rule of Civil Procedure 34 has some provisions for the production of electronically stored information, but the data kept by the cars will be essential to these cases and there will be little room for error.

Courts will need to decide how to handle these evidentiary issues and litigants may spend years without guidance as these rules develop.

Legal strategy and potential solutions

The simplest solution to these challenges would be to hold the manufacturers strictly liable for any damages caused by their vehicles. Strict liability is not based on a warranty and the manufacturer would be liable for any defects, even if its quality control efforts satisfy the standards of reasonableness.

There would also likely not be an issue of privity, as there is in warranty claims. Any foreseeable user could recover, not just the owner.

Surprisingly, some manufacturers support strict liability. Volvo, Google, and Daimler AG’s Mercedes-Benz have all pledged to accept liability if their vehicles cause an accident.

Volvo has declared that it would pay for any injuries or property damage caused by its fully autonomous IntelliSafe Autopilot system. Volvo's position is that the system will contain so many redundant and backup systems that human intervention should never be needed. As such, a human driver could never be at fault.

This would allow the civil justice system to continue as is, only with the manufacturer taking the place of the driver and insurance company in litigation. However, not all manufacturers support this idea, particularly those whose vehicles depend on some form of human intervention.

If product liability is too expensive and strict liability does not apply, litigants may be able to seek relief under contract law. But manufacturers may try to circumvent any contract liability with the use of disclaimers. Tesla has announced that a failure to keep hands on the steering wheel violates its terms and conditions.

Another approach could be the breach of an implied warranty of fitness for a particular purpose. Under Uniform Commercial Code § 2-315, there is an implied warranty that goods be fit for the particular purpose for which they are required. The seller must have reason to know this purpose and that the buyer is relying on the seller's skill or judgment.

However, under this system a manufacturer is liable for harm caused by software flaws that are foreseeable as a class but neither preventable nor reasonably discoverable in their individual instance. This is a higher burden for the plaintiff to meet.

Additionally, manufacturers could still disclaim this warranty. Using contract law adds new legal theories to what is currently simple negligence, further complicating the litigation process.

Another approach to liability would be regulations at either the state or federal level. Many manufacturers are calling for legislators to create uniform regulations so they, and their customers, will know where they stand in terms of liability. But determining what those regulations will look like is not so simple.

Some states have proposed requiring that all self-driving cars have a licensed driver behind a physical steering wheel at all times. California has proposed rules that would require drivers to always be ready to take the wheel.

But with the different types of driverless cars, this regulation would be difficult to apply broadly, particularly with cars that do not even have a steering wheel for drivers to take.

The National Highway Traffic Safety Administration (NHTSA) has recognized that in Google's car, the software, not the human, is the driver. This has wide-ranging implications for Google's engineers.

NHTSA is actively involved in the development and adoption of safe vehicle automation and plans to propose guidance establishing principles for the safe operation of fully autonomous vehicles in mid-2016.

The shift of liability will also have an impact on who is ultimately paying the bill. Insurance companies are already addressing how to consider driverless cars in drafting new policies.

Insurance will face many of the same issues, including what proportion of blame to assign to the driver and to the car. In assessing the responsibility of manufacturers, consideration must also be given to not driving manufacturers and suppliers out of business.

While damages caps exist in some jurisdictions, they typically apply only to negligence claims, not strict liability. These caps could be extended to cover self-driving cars.

Some type of no-fault automobile insurance system has also been proposed to protect manufacturers as liability is transferred from drivers to manufacturers.

On the other hand, as technology improves, the hope is that self-driving cars will decrease the number of accidents and, as a result, the costs insurance companies have to pay.

Safety

Even with new regulations and a legal framework in place, consumers will still be hesitant about driverless cars. Surveys have found that 88% of adults worry about traveling in driverless cars and 52% fear hackers could gain control.

Manufacturers will have to convince consumers that their cars are safe and secure if self-driving cars are going to thrive. Ultimately, self-driving cars will be beneficial to society.

The technology can make recalls and safety improvement campaigns more effective. It could improve traffic conditions and provide better mobility to those who are otherwise unable to drive. It can even change the way people purchase vehicles. Multiple vehicles per family might not be necessary if family members could summon the car when needed without a driver.

Although self-driving cars may be novel and intimidating at first, they will likely be safer because their failure rate is much lower than the rate of human error. Virginia Tech Transportation Institute researchers determined that the national crash rate of 4.2 crashes per million miles is higher than the 3.2 crashes per million miles for self-driving cars.

While the initial assumption might be that autonomous cars would have a higher incident rate, the data show the opposite. The researchers also found that self-driving vehicles have lower rates of the most severe crashes.

An ideal system will allow cars to drive automatically but also have the backup of a focused human driver. The problem with a hybrid system is that consumers may become distracted. People spend enormous amounts of time commuting each day and may be tempted to use that time to multitask in hopes of boosting productivity. Once a driver is disengaged from the act of driving, it will be difficult for him or her to react quickly when necessary.

But this challenge is not insurmountable. Just as airplanes rely on autopilot systems while pilots remain ready to take over, drivers need to assume the same type of responsibility as pilots. Passengers feel much better with a pilot in the cockpit ready for an emergency.

A focused driver paired with a self-driving car is likely the best way to use this new technology. It would keep the number of accidents low and give consumers the confidence to enter this uncharted territory.

Conclusion

As cars continue to become more autonomous, drivers need to be aware that vehicles are still dangerous and that they must remain alert at all times. The push towards driverless cars will only continue, and as more of these vehicles reach the road, accidents involving them are inevitable.

The civil justice system needs to anticipate these future needs and make sure the costs of pursuing recovery are not prohibitive for all but the most catastrophic accidents.

With an appropriate legal and regulatory framework in place, consumers and manufacturers will be more comfortable with the new technology, while injured victims retain access to the courts.