Paul van Gerven

Paul van Gerven is an editor at Bits&Chips.

27 November 2019

According to a US government report, the truck should have been visible for at least seven seconds, yet Joshua Brown did nothing to avoid a collision. And, crucially, neither did his Tesla, though Brown apparently trusted it to do so. The car had intermittently urged its occupant to take back control of the vehicle with visual and audio signals. Brown complied every time by putting his hands on the steering wheel, but only briefly, just long enough to silence the alarm. It cost him his life.

The crash in 2016 kicked off a still-ongoing debate about the safety of Tesla’s self-driving features and the way the company deploys them. While most car manufacturers now try to downplay expectations for autonomous driving, saying it will take at least another decade, Tesla is taking a much more aggressive approach. Elon Musk often makes rather bold claims about the capabilities of Tesla’s current self-driving technologies and even bolder promises about future features. Earlier this year, for example, he predicted that his company would be able to start operating a fleet of driverless ‘robo-taxis’ by the end of 2020.

Tesla’s ‘release-it-now-fix-it-later’ approach to rolling out self-driving has even prompted accusations that it’s using customers, such as Joshua Brown, as crash test dummies. By making the system seem more capable than it is and by not enforcing constant alertness of ‘drivers’, it’s putting lives at risk, some argue. (Currently, there are four confirmed Tesla Autopilot deaths in the US.)

Brown’s own family will have none of it, however. “We heard numerous times that the car killed our son. That’s simply not the case. There was a small window of time when neither Joshua nor the Tesla features noticed the truck making the left-hand turn in front of the car. People die every day in car accidents. Many of those are caused by a lack of attention or the inability to see the danger. Joshua believed, and our family continues to believe, that the new technology going into cars and the move to autonomous driving has already saved many lives. Change always comes with risks, and zero tolerance for deaths would totally stop innovation and improvements,” read a statement released well over a year after the accident, once all official inquiries had been completed.

Personally, I feel the blame rests primarily with Brown himself, but Tesla does share in it. The company should have been more cautious and responsible, at the very least by not allowing drivers to take their hands off the wheel at this stage of technological development. Yet, as the Brown family argues, if autonomous driving is ever going to be a reality, we’ll probably have to accept some deaths caused by machine error in order to save many more lives in the long run.

The key question is this: at what performance level should we allow (partially) autonomous driving systems on the road? These systems will eventually reduce the number of road accidents drastically, but first they need to rack up the miles to be perfected. If we proceed too cautiously, we might prevent accidents like Brown’s, but by delaying the advent of the autonomous driving era, we would cause many more fatalities overall. If we proceed too brashly, the public backlash will probably result in more delays, and hence more deaths as well.

So, there’s a fine balance to strike here, complicated by the fact that we simply don’t know how good the systems are at any point in time. That would take many (semi-)autonomous miles to assess. Basically, we’re self-driving in the dark. We’ll have to deal with it like approaching a blind junction: no point in coming to a full stop, but stepping on the gas would be foolish.