Tesla has an Autopilot problem, and it goes far beyond the fallout from last month’s deadly crash in Mountain View, California.
Tesla charges $5,000 for Autopilot’s lane-keeping and advanced cruise control features. On top of that, customers can pay $3,000 for what Tesla describes as “Full Self-Driving Capability.”
“All you will need to do is get in and tell your car where to go,” Tesla’s ordering page says.
“Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed.”
None of these “full self-driving” capabilities are available yet. “Self-Driving functionality is dependent upon extensive software validation and regulatory approval, which may vary widely by jurisdiction,” the page says. “It is not possible to know exactly when each element of the functionality described above will be available, as this is highly dependent on local regulatory approval.”
But the big reason full self-driving isn’t available yet has nothing to do with “regulatory approval.” The problem is that Tesla hasn’t created the technology yet. Indeed, the company could be years away from completing work on it, and some experts doubt it will ever be possible to achieve full self-driving capabilities with the hardware installed on today’s Tesla vehicles.
“It’s a vastly more difficult problem than most people realize,” said Sam Abuelsamid, an analyst at Navigant Research and a former auto industry engineer.
Tesla has a history of pre-selling products based on optimistic delivery schedules. This approach has served the company pretty well in the past, as customers ultimately loved their cars once they showed up. But that strategy could backfire hugely when it comes to Autopilot.
Some experts doubt it’s possible to achieve full self-driving using Tesla’s hardware
The most obvious thing missing from Tesla’s cars, from an autonomy perspective, is lidar. The companies that have made the most progress toward fully self-driving cars—including Waymo, Uber, and GM’s Cruise—all have lidar on their cars.
Defying the industry consensus, Tesla CEO Elon Musk has repeatedly insisted that lidar is merely a “crutch” and that it’s possible to build fully autonomous vehicles using only cameras and radar.
But most industry insiders believe lidar plays an important—and probably essential—role. Cameras offer long range and high resolution, but they're not very good at estimating distances, and they don't work as well in low-light conditions. Radar provides precise distance and velocity measurements but at very low resolution.
Lidar occupies a kind of sweet spot between these two: it offers precise distance measurements not available from cameras while producing a much higher-resolution map of the surrounding area than can be provided with radar. And unlike cameras, lidar works as well at night as it does in the daytime.
Of course, humans drive cars using the cameras we call our eyes, so there’s no doubt it’s possible to do in principle. But it took millions of years for evolution to develop our own, far from perfect, spatial navigation skills. The question is whether Tesla—or anyone else—will figure out how to develop better-than-human driving software within the next decade or two. Many experts believe that lidar sensors provide an extra margin of safety that will allow driverless cars to be introduced years earlier than would otherwise be possible.
And that’s not the only deficiency of the “full self-driving” hardware on Tesla vehicles, according to Abuelsamid. “You have to have levels of redundancy that simply aren’t there in those vehicles,” Abuelsamid told Ars in an interview last week.
Cars from Waymo and Cruise have redundant main computers, redundant braking and steering systems, and redundant power supplies. This ensures that if any single component fails, a backup system will be ready to take over, allowing the car to gracefully come to a stop at the side of the road without causing a crash.
In contrast, a Model S teardown last August found a single Nvidia Drive PX 2 board with one SoC and one GPU. If one of those components fails, the car might not be able to recover gracefully.
A final issue is raw computing power. When Tesla introduced the “full self-driving” feature in late 2016, the Nvidia Drive PX 2 was cutting-edge technology. But last year Nvidia introduced a next-generation PX platform, codenamed Pegasus, that Nvidia claims has 13 times the computing power of the PX 2. Since no one has built a fully self-driving car yet, it’s not known how much computing power such a system will ultimately require. Those old Nvidia chips might not be enough.
The five-level framework may be leading Tesla astray
The Society of Automotive Engineers (SAE) has developed a five-level conceptual framework for thinking about autonomous vehicles, ranging from level 1 for basic cruise control to level 5 for a car that can operate autonomously in all situations. (There’s also “level 0” to denote a car with no autonomous features.)
This framework encourages people to view driver-assistance systems and fully self-driving cars as two points on a continuous spectrum. That’s also the implicit assumption of Tesla’s Autopilot strategy. The company is selling a driver-assistance system today, but it plans to use software updates to gradually turn it into a fully self-driving system. But there’s growing reason to think this is a mistake—that it’s better to think of driver-assistance technologies and full self-driving cars as distinct systems.
In driver-assistance systems, a human driver is expected to pay attention 100 percent of the time and correct any mistakes the driver-assistance system makes. In contrast, a fully self-driving system is built on the assumption that a human driver will never need to take over.
Systems in the middle—with human driver and software both sharing some responsibility—are a safety hazard. Once a self-driving system gets pretty good, humans start to trust it and stop paying attention to the road. This can happen long before the system is actually safer than a human driver, leading to more fatalities rather than fewer.
That’s the conclusion Google reached several years ago when it shifted from building driver-assistance technology to building cars that are designed to be fully autonomous from the start. The company’s current plan is to offer a driverless taxi service that won’t even allow passengers to take the wheel.
The decision to design cars to be self-driving from the ground up has had a profound effect on the way Google—now Waymo—has approached the driverless car problem. Waymo realized that the key to making this work was a different kind of gradualism: fielding a taxi service that initially would operate only on a limited set of streets and in limited weather conditions. Waymo is planning to launch its service in the Phoenix area, which has some of the nation’s best-maintained roads, lightest traffic, and least-challenging weather. Once Waymo is confident that the technology is working well in this forgiving environment, it will gradually expand service to other parts of the country.
A big advantage of this business model is that it gives Waymo a lot of flexibility to replace and upgrade the sensors and other equipment on its cars over time. If it discovers that its initial set of sensors isn’t sufficient for fully self-driving operation, it can replace them with more powerful sensors—without worrying about charging customers for the upgrade. If it finds different kinds of sensors are needed to operate in snowy weather, it can install those on cars operating in snowy parts of the country. If it can’t figure out how to get fully automated driving to work in a particular region, it can focus on expanding its taxi service to other regions first.
Waymo’s technology depends heavily on gathering high-resolution maps of the areas where it’s operating. The company can do this one city at a time, using revenues from early cities to finance later expansions.
Tesla’s business model gives it much less flexibility. Launching a driverless car feature that initially only works in Phoenix would make every Tesla customer not located in Phoenix angry. But offering full self-driving capabilities nationwide might require a massively expensive effort to collect or purchase map data for the whole country.
If Tesla finds that its current hardware isn’t sufficient for full self-driving operation, it will face an awkward choice between charging customers for an upgrade (after promising that the old hardware would be adequate) or paying those costs itself. If a sensor fails, Tesla will have to choose between disabling self-driving capability until the customer repairs it or allowing the car to continue operating with a higher risk of a crash.