The legal and ethical implications of self-driving cars

Sinu
Becoming Human: Artificial Intelligence Magazine
Sep 21, 2017


After years of serious investment and innovation, news from Tesla in July 2016 was sobering: headlines reported the first fatality of a driver in a self-driving vehicle. Elon Musk, the CEO of Tesla, tweeted his condolences regarding the “tragic loss.” In his statement, he noted that this was Tesla’s first known Autopilot death in roughly 130 million miles driven by customers and that “among all vehicles in the US, there is a fatality every 94 million miles.”
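Taking those quoted figures at face value, a quick back-of-the-envelope comparison shows what Tesla was implying. The sketch below is only an illustration of the numbers cited above, normalized to fatalities per 100 million miles; a single fatality is far too small a sample to support any statistical claim about relative safety.

```python
# Rough illustration of the rates quoted above. Not a statistical argument:
# one fatality is far too small a sample to compare safety meaningfully.

autopilot_miles = 130_000_000       # miles driven on Autopilot, per Tesla's statement
autopilot_fatalities = 1            # the single fatality reported in 2016

us_miles_per_fatality = 94_000_000  # "a fatality every 94 million miles" (all US vehicles)

# Normalize both figures to fatalities per 100 million miles
autopilot_rate = autopilot_fatalities / autopilot_miles * 100_000_000
us_rate = 1 / us_miles_per_fatality * 100_000_000

print(f"Autopilot:       {autopilot_rate:.2f} fatalities per 100M miles")  # ~0.77
print(f"All US vehicles: {us_rate:.2f} fatalities per 100M miles")         # ~1.06
```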

While Musk’s statement about death rates in human-driven cars is supported by a National Highway Traffic Safety Administration report, the company is clearly deflecting blame for the crash. Musk goes on to say that the car’s autonomous software is designed to alert drivers to keep their hands on the wheel. In fact, the terms of using Autopilot mode require that drivers keep their hands on the wheel.

“Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert,” the company said.

An instrument panel in a Tesla Model S P90D vehicle. Photo Credit: Christopher Goodney, Bloomberg, as reported by Miami Herald, 4/28/17.

But who is ultimately responsible when the car crashes?

According to the Miami Herald, the automotive class action firm Hagens Berman Sobol Shapiro, which sued Volkswagen AG in the Dieselgate case, filed the first lawsuit against Tesla to focus on self-driving car technology. The suit states: “Contrary to what Tesla represented to them, buyers of affected vehicles have become beta testers of half-baked software that renders Tesla vehicles dangerous if engaged.”

Tesla responded in a statement of its own: “This lawsuit is a disingenuous attempt to secure attorney’s fees posing as a legitimate legal action, which is evidenced by the fact that the suit misrepresents many facts.”

Bryant Walker Smith, a University of South Carolina law professor, wrote an exhaustive report earlier this year, “Automated Driving and Product Liability.” In it, he outlines the complicated nature of existing liability cases, writing: “More broadly, the plaintiff in a product liability case must typically demonstrate that she was harmed by a product defect — that is, a dangerous characteristic of a product. If this defect is the result of an imperfect production process, then the plaintiff may be able to recover from the manufacturer even if the production process was reasonable. However, if this defect is an aspect of the product’s design, then the plaintiff generally must show that the design itself was unreasonable, often by reference to a reasonable alternative design. Similarly, if the defect consists of incorrect or incomplete instructions for or warnings about the product, then the plaintiff must show that the information actually provided was unreasonable.”

The Washington Post distilled his report into seven key takeaways, including a shift from driver liability to product liability, which would make the automotive industry the primary liability stakeholder, and the possibility that manufacturers who imply their automated systems are at least as safe as a human driver may face misrepresentation suits when crashes contradict that expectation.

Currently, accident liability remains with the owner and/or driver of the car, who is (aside from factory-level mechanical malfunctions) ultimately responsible for driving safely. As the technology advances toward fully automated driving, the debates will escalate, and we will look to the courts, lawmakers, and a number of organizations that are just now diving into the issues emerging around the ethics and liability of driverless cars, and around Artificial Intelligence (AI) overall.

For example, just this past July, the Berkman Klein Center and the MIT Media Lab, as academic anchor institutions of the Ethics and Governance of Artificial Intelligence Fund, announced funding for “nine organizations to amplify the voice of civil society in shaping the evolution of AI, bolstering efforts to promote the development of ethical, accountable systems that advance the public interest.” The Fund was launched earlier this year with $27 million in contributions from the John S. and James L. Knight Foundation, Omidyar Network, LinkedIn founder Reid Hoffman, the William and Flora Hewlett Foundation, and Jim Pallotta. The Berkman Klein Center and the MIT Media Lab will work in three initial core areas: media and information quality; social and criminal justice; and autonomous vehicles.

Two New York City organizations received funding to address these emerging issues. AI Now of New York will “undertake interdisciplinary, empirical research examining the integration of artificial intelligence into existing critical infrastructures, looking specifically at bias, data collection, and healthcare.” Data & Society will “conduct a series of ethnographically-informed studies of intelligent systems in which human labor plays an integral part, and will explore how and why the constitutive human elements of artificial intelligence are often obscured or rendered invisible. The research will produce empirical work examining these dynamics in order to facilitate the creation of effective regulation and ethical design considerations across domains.”

As with most technology, innovation often outpaces state and federal legislation, meaning you’d better buckle up for the ride through the courts. For now, the automated road ahead remains somewhat murky as lawmakers, courts, and initiatives such as the Ethics and Governance of Artificial Intelligence Fund try to address the questions around liability and ethics posed by AI, including self-driving cars.


Sinu is a technology managed service provider with offices in New York City and Washington, DC. www.sinu.com