Elon Musk is angry at the media for focusing on the Autopilot-related crash that occurred in March in Mountain View, California—while ignoring what he views as the clear safety benefits of autonomous vehicles.
“They should be writing a story about how autonomous cars are really safe,” Musk said in his Wednesday earnings call.
Musk believes that negative media coverage of Autopilot puts lives in danger.
“People are reading things in the press that cause them to use Autopilot less; that makes it dangerous for our customers,” he said on Wednesday. “That’s not cool.”
So what’s Tesla’s evidence that “autonomous cars are really safe”?
A few days after the Mountain View crash, Tesla published a blog post acknowledging that Autopilot was active at the time of the crash. But the company argued that the technology improved safety overall, pointing to a 2017 report by the National Highway Traffic Safety Administration (NHTSA).
“Over a year ago, our first iteration of Autopilot was found by the US government to reduce crash rates by as much as 40 percent,” the company wrote. It was the second time Tesla had cited that study in the context of the Mountain View crash—another blog post three days earlier had made the same point.
Unfortunately, there are some big problems with that finding. Indeed, the flaws are so significant that NHTSA put out a remarkable statement this week distancing itself from its own finding.
“NHTSA’s safety defect investigation of MY2014-2016 Tesla Model S and Model X did not assess the effectiveness of this technology,” the agency said in an email to Ars on Wednesday afternoon. “NHTSA performed this cursory comparison of the rates before and after installation of the feature to determine whether models equipped with Autosteer were associated with higher crash rates, which could have indicated that further investigation was necessary.”
Tesla has also claimed that its cars have a crash rate 3.7 times lower than average, but as we’ll see, there’s little reason to think that has anything to do with Autopilot.
This week, we’ve talked to several automotive safety experts, and none has been able to point us to clear evidence that Autopilot’s semi-autonomous features improve safety. And that’s why news sites like ours haven’t written stories “about how autonomous cars are really safe.” Maybe that will prove true in the future. But right now the data just isn’t there.
One thing we know works: emergency braking
Talking about the safety of Autopilot is tricky because the term gets used in two different ways. Technically, Autopilot is an umbrella term for a variety of safety and driver-assistance technologies in Tesla’s cars. That includes front collision warning (FCW) and automatic emergency braking (AEB) features that come standard with every new Tesla. Autopilot also includes optional lane-keeping and adaptive cruise control features that cost an extra $5,000.
In practice, however, people who talk about “Autopilot” are usually talking specifically about the second set of features—technology that lets a car drive itself (albeit with human supervision). That’s clearly what Musk was referring to when he talked about the risk that drivers would “use Autopilot less.” NHTSA’s report specifically focused on Autosteer, the official name for Autopilot’s lane-keeping function, but Tesla’s blog post summarized it as finding that “Autopilot” had reduced crash rates.
So in this article, we’ll use “Autopilot” the same way Musk and Tesla have: to describe Tesla’s optional lane-keeping and adaptive cruise control technologies—but not the automatic emergency braking and front collision warning features included with every car.
One of the big problems with NHTSA’s finding of a 40-percent reduction in crashes is that most—perhaps even all—of that reduction may actually be thanks to AEB and FCW rather than Autopilot. Tesla cars were shipped with Autopilot hardware starting in October 2014, but the Autosteer functionality wasn’t activated until October 2015. NHTSA compared crash rates before and after this activation date to see how the crash rate changed.
But Tesla activated automatic emergency braking and front collision warnings a few months earlier, in March 2015. And we know that these technologies have significant safety benefits. A good source here is the Insurance Institute for Highway Safety, a research organization sponsored by the insurance industry that has unparalleled access to insurance claim data. IIHS data (PDF) shows that, across the car industry, the combination of AEB and FCW has reduced claims for damage to other vehicles by 13 percent and claims for injuries to people in other vehicles by 21 percent. The technologies reduced front-to-rear crashes by 50 percent.
Unfortunately, NHTSA doesn’t seem to have tried to disentangle the effects of Autopilot from AEB and FCW. Hence, it’s possible that most—even all—of the safety benefits NHTSA observed were actually the result of AEB and FCW rather than Autopilot. True, 40 percent is larger than 21 percent or 13 percent. But we don’t know the confidence interval for that 40-percent measure. And NHTSA used a measure—airbag deployments—that might not be directly comparable to the ones IIHS used.
And, interestingly, IIHS told Ars this week that it has observed a 13-percent decline in Model S claim rates since Autopilot features went into effect in October 2015. Unfortunately, Tesla hasn’t provided IIHS with data on which vehicles have paid for the Autopilot upgrade, so this statistic is lumping together cars with and without Autopilot. That means it’s not a direct measurement of Autopilot’s safety benefits.
Still, 13 percent is roughly how much we’d expect claims to decline if Tesla had only activated FCW and AEB—suggesting that Autopilot’s added safety could be small or even nonexistent.
Tesla could settle the debate by releasing more data
Tesla has a ton of data about its vehicles. It knows how many miles each car has driven, whether or not a customer has paid to activate Autopilot, and how many crashes each vehicle has had. If Elon Musk truly wanted to settle the debate over Autopilot safety, Tesla could release the full dataset in anonymized form, allowing independent experts to analyze the data.
Musk could also work with the IIHS—as a number of other automakers do—allowing IIHS researchers to look up which vehicle identification numbers have Autopilot enabled. That would allow IIHS to determine the Autopilot status of each car involved in an insurance claim, enabling rigorous analysis of whether Tesla’s Autopilot-enabled vehicles generate insurance claims at a lower rate than non-Autopilot vehicles.
Instead of providing this kind of data to researchers, Tesla has gone out of its way to block access to it.
NHTSA has broad authority to obtain data from carmakers, a power it used in its 2017 investigation of Autopilot safety. Tesla provided NHTSA with the data but asserted that it was confidential. When an independent consulting company called Quality Control Systems sought the data in a freedom-of-information request, NHTSA denied the request, citing Tesla’s confidentiality request.
QCS has sued NHTSA for access to the data, which the group argues should not be considered confidential. The issue is currently being litigated.
“I don’t see that Tesla is helping themselves by keeping these data secret,” said Randy Whitfield, the director of QCS. “I hope it’s true that this kind of technology can be beneficial. But if it is, you’re going to have to get people to use it; they’re going to want to trust it first.”
To sum up: Tesla’s main piece of evidence for Autopilot’s safety is a NHTSA report that NHTSA itself now downplays as merely a “cursory comparison.” That study didn’t make a serious effort to distinguish the safety benefits of Autopilot from other safety technologies on Tesla vehicles. And when an independent researcher tried to obtain NHTSA’s data to do his own analysis, NHTSA blocked access, citing Tesla’s confidentiality request.
It’s worth noting that Tesla hasn’t been shy about releasing data when doing so helps its case. Indeed, Tesla has been so aggressive about releasing information about the Mountain View crash that it led to a high-profile fight with the National Transportation Safety Board, an investigative agency that requires companies to maintain confidentiality during the course of an investigation.
When Tesla’s Model S received an unfavorable review from John Broder of The New York Times in 2013, Musk pulled data about Broder’s trip to dispute Broder’s account (Broder published a rebuttal of his own). In short, Tesla is more than happy to use data from Tesla cars when doing so bolsters Tesla’s case. So it’s curious that the company has been so reticent about publishing data to substantiate its claim that Autopilot dramatically improves safety.