Self-driving in the dark

Paul van Gerven is an editor at Bits&Chips.


According to a US government report, the truck should have been visible for at least seven seconds, yet Joshua Brown did nothing to avoid a collision. Crucially, neither did his Tesla, though Brown apparently trusted it to do so. The car had intermittently urged its occupant to take back control of the vehicle with visual and audio signals. Brown complied every time by putting his hands on the steering wheel, but only briefly, just long enough to silence the alarm. It cost him his life.

The 2016 crash kicked off a debate, still ongoing, about the safety of Tesla’s self-driving features and the way the company deploys them. While most car manufacturers now downplay expectations for autonomous driving, saying it will take at least another decade, Tesla is taking a much more aggressive approach. Elon Musk often makes rather bold claims about the capabilities of Tesla’s current self-driving technology and even bolder promises about future features. Earlier this year, for example, he predicted that his company would be able to start operating a fleet of driverless ‘robo-taxis’ by the end of 2020.

Tesla’s ‘release-it-now-fix-it-later’ approach to rolling out self-driving features has even prompted accusations that it’s using customers such as Joshua Brown as crash test dummies. By making the system seem more capable than it is and by not enforcing drivers’ constant alertness, the company is putting lives at risk, some argue. (Currently, there are four confirmed Tesla Autopilot deaths in the US.)

