When a Tesla enthusiast was killed recently in an accident associated with the company’s Autopilot technology, the criticism came fast and furious. The Guardian, for one, asked, “Should Tesla be ‘beta testing’ autopilot if there is a chance someone might die?” But just as quickly, supporters of Tesla’s efforts emerged to tout the outstanding safety record of Autopilot. One notable example came from Peter Diamandis, the chairman of the XPrize Foundation, who penned “Upside Of Tesla’s Autopilot”. What’s troubling about that post, and other defenses of Tesla by smart people, is that they rely on bits of data that seem like facts upon which to build a case. They aren’t.
Here’s Diamandis: “I was pissed at the media. This is … about the facts, and the Media’s ‘negativity bias.’” He then goes on to cite some statistics that have been widely bandied about, thanks to a Tesla blog post that correctly called the death of Joshua Brown tragic. But to paraphrase Inigo Montoya from The Princess Bride: You keep using that data; I do not think it means what you think it means.
Let’s start with the basics. Tesla says that its cars have logged 130 million miles under Autopilot, with just the one fatality linked to the technology. (A full investigation of exactly what happened has yet to be completed; NHTSA is involved, and more may be learned over time. But let’s just use that data point for now.) Now, pop quiz: Approximately how many miles will Autopilot-ed Teslas go between fatalities? If you’re inclined to say 130 million miles, great. That’s not an awful way to start on a Bayesian inference of the true frequency of deaths. Unfortunately, it’s also nowhere near enough information to expect to find the true probability. Essentially, what you have is a “prior” based on that single data point.
But what if the next death occurs next week (heaven forbid)? Well, with just a few more miles we’d now say, “From what we know, death occurs about every 65 million miles.” By contrast, if it’s 170 million miles until the next incident, we’d be inclined to say, “Actually, it’s about every 150 million miles.” What we lack here is sufficient data to draw even a slightly reasonable conclusion. Once we have more, either in the form of more miles or more deaths (unfortunately, likely both), we’ll not only be able to calculate a truer “mean time between Autopilot-related fatalities” but also a reasonable standard deviation that gives us a sense of how those might be distributed over time.
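To see just how unstable a one-event estimate is, here’s a quick back-of-the-envelope sketch in Python. The mileage figures are the ones discussed above; the two follow-on scenarios are hypothetical:

```python
# The naive estimate of "miles between fatalities" is just miles / deaths,
# and it swings wildly as the next data points arrive.

def miles_between_fatalities(total_miles, deaths):
    """Naive point estimate of mean miles between fatalities."""
    return total_miles / deaths

# Today: 130 million miles, 1 death.
today = miles_between_fatalities(130e6, 1)       # 130 million

# Hypothetical scenario A: a second death after just a few more miles.
scenario_a = miles_between_fatalities(131e6, 2)  # ~65 million

# Hypothetical scenario B: the second death comes 170M miles later.
scenario_b = miles_between_fatalities(300e6, 2)  # 150 million

print(f"today: {today / 1e6:.0f}M miles/death, "
      f"A: {scenario_a / 1e6:.1f}M, B: {scenario_b / 1e6:.0f}M")
```

One additional event, in other words, can move the estimate by a factor of two in either direction, which is exactly why a single data point makes for a weak prior.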
Until then, the use of the 130 million-mile figure tells us little and borders on irresponsible. But Tesla also mentioned: “Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles.” This has been taken by many to mean that (a) Teslas under Autopilot are twice as safe as the typical vehicle around the globe and (b) safer than the typical car on U.S. roads.
Again, they might be. Unfortunately, no one can say that. The “typical” vehicle on U.S. roads reached 11.5 years old in 2015. That makes it far older than the typical Model S, whose sales began in earnest three years ago. Autopilot-equipped cars are newer still, as the feature was only rolled out at the tail end of 2014. So when comparing Autopilot-equipped Teslas to other cars, for starters, one should limit the comparable vehicles to those produced and sold in the past two years. The demographics of luxury-car owners don’t mirror those of the population at large, either; we’re not even looking at the same pool of drivers when comparing these numbers.
But there’s another, more substantial problem. Let’s go back to that statistic about worldwide auto fatalities: The Model S under Autopilot is “twice as safe” at 130 million miles vs. 60 million, right? Now we’re running the risk of a far worse comparison than the “typical American car.” Anyone who has spent time in the developing world knows that niceties we take for granted here, like airbags, ABS and modern crumple zones, are often missing elsewhere. It’s not even a given you’ll find three-point seatbelts, or sometimes any belts at all. If a Tesla with Autopilot were only twice as good at protecting occupants as the typical car driven by a human around the world, it would be hard to get excited about the future of computerized-driving technology.
Fortunately, again, the lack of data is operative. As Autopilot matures, it will likely become far superior to human drivers: It won’t fall asleep, it won’t drive under the influence, and it won’t be chatting on a phone and miss someone running a red light in front of it. It’s probable that a future vehicle will make more cautious turns than humans do, tailgate much less often, merge more safely, and so on. Right now, though, it’s fair to judge the state of the technology against other cars like Teslas.
The Insurance Institute for Highway Safety, it turns out, has studied which cars are the safest on the road for a while. Its findings are illuminating. A report last year showed that there are nine vehicles from the 2011 model year with a driver death rate of zero. Three of the nine are luxury vehicles: the Audi A4 AWD, the Lexus RX350 AWD and the Mercedes GL AWD. The Mercedes M AWD comes in 11th on the list with two deaths per million registered vehicle years. (Count the number of cars sold, multiply each by the number of years it was registered, add it all up and you get “registered vehicle years.”)
Now you may be wondering how one of these vehicles compares to the Autopilot stats from Tesla. I’ll leave that work to Andrew Hires, a professor of neurobiology at USC. He tweeted: “It’s a tiny sample. Compare Audi A4. 120,394 vehicle years @ 10k mi/yr = 1.2B miles. Deaths = 0.” Estimating 10,000 miles per year is likely low given the average is typically estimated around 12,000 or more. But irrespective of whether that estimate was low, it’s approximately 10 times the miles Teslas have logged to date in Autopilot mode. Further, Hires added: “Statistically, its too early to say Autopilot is safer than average luxury vehicle. Hard to say for any lux vehicle!”
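Hires’ arithmetic is easy to check. A quick sketch, using the figures from his tweet and his (likely conservative) 10,000 miles-per-year assumption:

```python
# Figures from Hires' tweet and Tesla's blog post; the 10,000 miles/year
# assumption is his, and is probably on the low side.
audi_vehicle_years = 120_394
miles_per_year = 10_000

audi_a4_miles = audi_vehicle_years * miles_per_year  # ~1.2 billion miles
tesla_autopilot_miles = 130e6                        # Tesla's reported total

ratio = audi_a4_miles / tesla_autopilot_miles
print(f"Audi A4: {audi_a4_miles / 1e9:.2f}B miles, roughly {ratio:.1f}x "
      f"Tesla's Autopilot mileage to date, with zero driver deaths")
```

Even at that conservative mileage assumption, the Audi A4’s zero-death record covers roughly nine times the miles of Tesla’s entire Autopilot fleet.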
Hires was discussing the matter with Balaji Srinivasan, the CEO of 21.co and one of the smarter folks you’ll encounter. Srinivasan’s counter was: “But that premise also cuts the other way: too early to claim Autopilot is more dangerous.” And here’s the thing: He’s also right. Tesla CEO Elon Musk, in a tweet of his own (one that may come back to haunt him, incidentally), wrote: “Misunderstanding of what ‘beta’ means to Tesla for Autopilot: any system w less than 1B miles of real world driving.” He was addressing the use of the beta label, which some took to mean Tesla was risking people’s lives with unproven technology.
But Musk said something else important: At one billion miles, Tesla will have refined the software a lot more (at minimum, the machine-learning engine powering it will have gleaned roughly seven times as much data) and we will have a much bigger sample size than before. At that point, every rational person looking at this will hope the death toll is still at one, but whatever it is, we’ll be much better able to calculate a mean time between fatal accidents. If the current 130 million figure is even remotely close to the real number, sometime in the low single-digit billions of miles we’ll even have a data set that statisticians might trust to say with perhaps 95% confidence what that mean looks like.
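For a rough sense of why the wait is measured in billions of miles, here’s a sketch using the standard Poisson rule of thumb: the relative uncertainty in a rate estimated from k events is about 1/√k. The 130-million-mile figure is Tesla’s; the target margins are purely illustrative:

```python
import math

# With k fatalities observed, the Poisson standard error on the count is
# sqrt(k), so the relative uncertainty in the rate is ~1/sqrt(k). To pin
# the mean within a chosen relative margin at ~95% confidence (z = 1.96),
# you need roughly k = (z / margin)^2 events.

def deaths_needed(margin, z=1.96):
    """Events needed so the ~95% CI half-width equals `margin` (relative)."""
    return math.ceil((z / margin) ** 2)

def miles_needed(margin, miles_per_death=130e6):
    """Miles implied, if fatalities really do occur every 130M miles."""
    return deaths_needed(margin) * miles_per_death

for margin in (0.5, 0.2, 0.1):
    print(f"within ±{margin:.0%}: ~{deaths_needed(margin)} deaths, "
          f"~{miles_needed(margin) / 1e9:.1f}B miles")
```

At a loose ±50% margin this works out to roughly 2 billion miles, consistent with the low-single-digit-billions figure; pinning the mean within ±10% would take something like 50 billion miles at the current rate.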
That day, however, is somewhat far away. Even with Tesla’s continued sales growth and the (likely) continued popularity of Autopilot, it could be some time before all that data is in. If the government’s investigation leads the company to restrict the use of Autopilot at all, we’ll be waiting even longer. Timothy Carone, a teaching professor in the Department of IT, Analytics, and Operations in the University of Notre Dame’s Mendoza College of Business, had some sobering thoughts on the subject:
“This is the first known fatality and as our society transitions to using more systems like driverless cars, pilotless airplanes, driverless trucks and trains, and weapons, we will start to see more and more of these … deaths and the destruction of property that to us appears unfair and arbitrary. The number of fatalities associated with the use of these autonomous systems will start to rise as more and more are used. Eventually, the number of fatalities and injuries will flatten out and decrease as these systems … begin to mature and become capable of handling unusual situations that are difficult to simulate in test environments,” he said.
Carone even foresees a time when deteriorating human driving skills might make things seem worse. “There will be a time when the autonomous system is not dealing correctly with a problem … yet the humans will have lost their expertise and will not be able to take over from the autonomous system to prevent a tragedy,” he said.
Still, there is ultimately reason for optimism over the long haul. For all the mistakes humans make behind the wheel, there is barely one death every hundred million miles. That’s data we know and should feel pretty good about. That this doesn’t prevent more than 30,000 Americans from dying in auto crashes each year, however, means there is so much more tragedy, heartache and cost that can be avoided if and when Autopilot and similar technologies are perfected. It’s for that reason that I called for continued development of the technology in the previous post on this. But let’s not mistake what we know now for an indication of just how ready Autopilot is to make us safer. We’re lacking information on that, and the more we keep repeating partial facts, the less we understand just how much we still have to learn.
By: Mark Rogowsky