Dashcam video shows Tesla steering toward lane divider—again

The afternoon commute of Reddit user Beastpilot—who requested that we not use his real name—takes him past a stretch of Seattle-area freeway with a carpool lane exit on the left. Last year, in early April, the Tesla driver noticed that Autopilot on his Model X would sometimes pull to the left as the car approached the lane divider—seemingly treating the space between the diverging lanes as a lane of its own.

This was particularly alarming, because just days earlier, Tesla owner Walter Huang had died in a fiery crash after Autopilot steered his Model X into a concrete lane divider at a very similar junction in Mountain View, California.

Beastpilot made several attempts to notify Tesla of the problem but says he never got a response. Weeks later, Tesla pushed out an update that seemed to fix the problem.

Then in October, it happened again. After months of working correctly, Beastpilot’s Tesla started swerving toward the lane divider in the same spot. Again, he reported the problem but says he never got a response. Weeks later, the problem resolved itself.

This week, he posted dashcam footage showing the same thing happening a third time—this time with a recently acquired Model 3. After working correctly for months, his new Tesla shows a tendency to swerve toward the lane divider as it passes this particular exit.

“The behavior of the system changes dramatically between software updates,” Beastpilot told Ars in a Thursday phone interview. And he argues that makes the system dangerous.

“Human nature is, ‘if something’s worked 100 times before, it’s gonna work the 101st time,'” he said. That can lull people into a false sense of security, with potentially deadly consequences.

Autopilot swerved to the left just before Walter Huang died

Beastpilot’s experience may help to shed light on the circumstances of Huang’s fatal crash a year earlier. Federal investigators determined that Huang’s Model X “began a left steering movement” seven seconds before his fatal crash. Apparently believing it was now in an empty lane, the vehicle sped up from 62 to 70 miles per hour in the final seconds before it struck the concrete lane divider.

After Huang’s death, his family told local media that Huang had taken his car to the local Tesla dealership to complain that it had swerved toward that exact barrier on multiple occasions.

Tesla says it has no record of Huang raising these concerns, and the family’s claim did raise an obvious question: why would Huang have let down his guard in this spot if he’d noticed his car malfunctioning previously?

But Beastpilot’s experience suggests one possible explanation: Huang’s car’s behavior may have changed over time with successive software updates. Maybe after Huang raised his initial concerns, Tesla pushed out a software update that fixed the problem, only for the problem to reappear with a later update. We may never know for certain, since Huang isn’t here to tell us.

Tesla’s rivals rely heavily on simulation

Tesla has built Autopilot by training complex neural networks. That makes the system difficult to test, because no one can fully explain how these networks make their decisions or predict how their behavior will change from one version to the next.

But this is hardly a problem unique to Tesla, or to self-driving technology for that matter. Companies that ship complex software have long built sophisticated infrastructure for regression testing: each new release is run through a battery of thousands of automated tests to make sure it doesn’t re-introduce bugs that were fixed by previous updates.
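To make the idea concrete, here is a minimal, self-contained sketch in Python of what such a regression gate might look like. The scenario names, the replay stub, and the thresholds are all hypothetical illustrations, not Tesla’s or Waymo’s actual tooling.

```python
# Minimal, self-contained sketch of regression testing for driving software.
# Everything here (scenario names, the replay stub, the thresholds) is a
# hypothetical illustration, not any company's actual tooling.
from dataclasses import dataclass


@dataclass
class ReplayResult:
    max_lateral_deviation_m: float  # worst drift from lane center during replay
    entered_gore_area: bool         # did the planned path cross into the gore area?


def replay_scenario(scenario_file: str, software_build: str) -> ReplayResult:
    """Stand-in for a real simulator: replay recorded sensor data through the
    candidate software build and measure how the planner behaves."""
    # A real implementation would load logged sensor data and run the planner;
    # this stub just returns a passing result so the sketch executes.
    return ReplayResult(max_lateral_deviation_m=0.1, entered_gore_area=False)


# Each entry is a scenario that once exposed a bug; it stays in the suite forever.
REGRESSION_SCENARIOS = [
    "seattle_carpool_exit_left_divider.json",
    "mountain_view_us101_gore_area.json",
]


def test_no_lane_departure():
    """A new build must not re-introduce any previously fixed swerving bug."""
    for scenario in REGRESSION_SCENARIOS:
        result = replay_scenario(scenario, software_build="candidate")
        assert result.max_lateral_deviation_m < 0.3, scenario
        assert not result.entered_gore_area, scenario


if __name__ == "__main__":
    test_no_lane_departure()
    print("All regression scenarios passed.")
```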

Self-driving companies like Waymo make heavy use of simulation for this purpose. When a Waymo car encounters a tricky or unusual situation, Waymo uses the sensor data to build a simulated version of that same scenario. Waymo’s simulation software can not only rerun that exact scenario but also “drive” the car through modified versions of it: what if the car were moving 5 mph faster? What if a parked car were blocking the right-hand lane? What if the other car ran the red light five seconds earlier or later?
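A toy version of that kind of scenario variation might look like the sketch below. The Scenario fields, the perturbation ranges, and the simulate() stub are hypothetical stand-ins; the point is only that one logged event can be expanded into many “what if” variants and replayed against each new software build.

```python
# Toy sketch of perturbing a single logged event into many simulated variants.
# The Scenario fields and the simulate() stub are hypothetical illustrations.
from dataclasses import dataclass, replace
from itertools import product


@dataclass(frozen=True)
class Scenario:
    ego_speed_mph: float               # our car's speed when the event begins
    parked_car_in_right_lane: bool     # is an obstacle blocking the right lane?
    red_light_runner_offset_s: float   # when the other car enters the junction


def simulate(scenario: Scenario) -> bool:
    """Stand-in for a real simulator: return True if the drive is collision-free
    and stays in its lane. A real system would run the full planning stack."""
    return True  # stub so the sketch executes


# One logged event, as actually recorded by the car.
base = Scenario(ego_speed_mph=62.0,
                parked_car_in_right_lane=False,
                red_light_runner_offset_s=0.0)

# Expand it into many variants: different speeds, an added obstacle,
# the other driver arriving earlier or later.
variants = [
    replace(base,
            ego_speed_mph=base.ego_speed_mph + dv,
            parked_car_in_right_lane=blocked,
            red_light_runner_offset_s=dt)
    for dv, blocked, dt in product([-5, 0, 5], [False, True], [-5.0, 0.0, 5.0])
]

failures = [v for v in variants if not simulate(v)]
print(f"{len(variants)} variants simulated, {len(failures)} failures")
```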

Waymo has a library of thousands of situations like this and the ability to automatically check that each new software version behaves as expected in each scenario. This helps to ensure that new software versions never re-introduce bugs that were squashed by previous versions.

Does Tesla have a system like this? The company didn’t respond to multiple emails seeking comment for this story. It would be surprising if Tesla didn’t have at least some simulation and regression-testing systems. But as far as I know, the company has never publicly described such a system, and Beastpilot’s videos suggest that whatever testing infrastructure Tesla does have leaves room for improvement.

Indeed, the nature of Tesla’s fleet might preclude the company from doing Waymo-level simulation and testing of its software. There are two key differences between Tesla’s fleet and the test fleets of Waymo and other companies working on full autonomy.

One is that Tesla cars lack lidar. Lidar produces three-dimensional point clouds that are helpful for building a highly accurate model of the world around a vehicle. Tesla, by contrast, relies mainly on cameras and radar, neither of which offers precise three-dimensional measurements. It’s not impossible to derive a three-dimensional model based on camera and radar data, but it’s certainly more difficult.

The other difference is that Waymo’s cars return to a Waymo-owned garage at the end of each day. While we don’t know exactly what happens inside the garage, it seems like a reasonable guess that Waymo’s garages are equipped to offload gigabytes of accumulated data from each vehicle, each night.

Tesla, by contrast, must sip data from its customers’ cars over cellular networks or the customers’ Wi-Fi connections. Beastpilot says he tracks his Tesla’s data use, and it’s fairly modest: tens or sometimes hundreds of megabytes per day, even after many miles of driving. So Tesla may simply be unable to harvest enough data to produce fully realistic three-dimensional simulations of the situations that give its vehicles trouble.
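For a rough sense of the mismatch, consider the back-of-the-envelope calculation below. The camera count, bitrate, driving time, and upload figures are illustrative assumptions rather than measured values, but they show why a cellular-sized upload budget can capture only a sliver of what the sensors record.

```python
# Back-of-the-envelope comparison of raw sensor output vs. a daily upload budget.
# All numbers are illustrative assumptions, not measurements of any real vehicle.
CAMERA_COUNT = 8                   # assumed number of cameras on the car
BITRATE_PER_CAMERA_MBPS = 4.0      # assumed compressed video bitrate, megabits/s
DRIVE_HOURS_PER_DAY = 1.0          # assumed daily driving time
UPLOAD_BUDGET_MB_PER_DAY = 100.0   # "tens to hundreds of megabytes," per Beastpilot

raw_megabits = CAMERA_COUNT * BITRATE_PER_CAMERA_MBPS * DRIVE_HOURS_PER_DAY * 3600
raw_megabytes = raw_megabits / 8

print(f"Compressed camera data generated: ~{raw_megabytes:,.0f} MB/day")
print(f"Upload budget:                    ~{UPLOAD_BUDGET_MB_PER_DAY:,.0f} MB/day")
print(f"Fraction that could be uploaded:  {UPLOAD_BUDGET_MB_PER_DAY / raw_megabytes:.1%}")
# With these assumptions the car generates on the order of 14 GB of compressed
# video per day but could upload well under 1 percent of it.
```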

Tesla could do a better job warning users about Autopilot’s limitations

Not every Tesla owner is likely to experience the bug Beastpilot identified. Tesla rolls out different versions of Autopilot to different people, with some experimental versions going out only to a small fraction of users. Some Redditors report that a newer version, one Beastpilot says he hasn’t received yet, fixes the swerving problem.

Tesla’s position has long been that Autopilot is merely a driver assistance system, not a technology for full autonomy. Drivers are expected to keep their hands on the wheel and their eyes on the road at all times. If drivers are paying attention, Autopilot defects shouldn’t lead to crashes.

But Beastpilot argues there’s a lot of room for improvement in Tesla’s approach to user education.

Each new software release includes release notes explaining how it differs from the preceding version. But Beastpilot says the notes for recent releases have been “super thin.” He shared with us a photo of the current release notes on his vehicle, which explain Tesla’s new “Dog Mode” and “Auto-folding mirrors based on location.” A note at the bottom states that “this release contains minor improvements and bug fixes.” Beastpilot says he doesn’t remember any notifications that the update would change the behavior of Autopilot.

When new Tesla owners enable Autopilot for the first time, they are required to read a lengthy warning that Autopilot is an experimental technology requiring constant supervision by the human driver. Beastpilot argues that Tesla should re-present this warning to drivers every time a software update changes Autopilot’s behavior.
