Technology Risk

Overreacting to the Uber automation fatality would be a bad decision

  • March 20, 2018

Volvo XC90

“Was a matter of time. Can we agree this is a stupid idea? #Uber @Potus”

…cried the tweet from Bill (and doubtless the thousands like him, soon to follow). It didn’t take long for the indignation to arrive, the “I told you so” attitudes that had been waiting for just this moment.

Around 10pm on Sunday 18th March 2018 in Tempe, Arizona, in the Phoenix metropolitan area, Elaine Herzberg was pushing her bicycle when she stepped out into the road. In terms of physics, she was no match for the large Volvo XC90 SUV travelling at 40mph (64km/h). As soon as she committed to crossing the road, her death was assured. The software and sensors driving the autonomous vehicle (AV) never saw her, detected her change of direction, or calculated her path. This was the failure that companies like Uber, Waymo, Lyft and all the traditional auto-manufacturers had been dreading: the first pedestrian death caused by this new technology.

No other information about the incident is available yet, so we won’t speculate about the incident itself. Instead, let’s consider the decision Uber subsequently made: to suspend all AV testing in Tempe and at its other test locations in Pittsburgh, San Francisco and Toronto. As a precautionary reflex in the light of sudden failure, as a public and social acknowledgement of the serious ramifications of the incident, and as a sign of care and respect for all pedestrians, it is of course the correct move to make.

Risk in the numbers.

But it can’t be allowed to solidify in the minds of regulators or of the campaigning public. Removing such technology experiments from the roads feels like sensible risk aversion, but that feeling is a fallacy. On a purely rational level, Uber has so far completed over 3 million miles of AV testing. By comparison, the US National Safety Council reports a 2016 rate of 1.25 deaths per 100 million vehicle miles. One death in 3 million miles works out to roughly 33 deaths per 100 million miles, which seems to underline the grievance against Uber: human driving appears about 26 times safer. However, we must be careful with numbers. In 2016, the death rate for motorcyclists (not even counting the other people they affected) was 21 times worse than the all-vehicle baseline, so Uber’s AVs are already in the ball-park of a societally-accepted level of risk.
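The comparison above is easy to check. A minimal back-of-envelope sketch, assuming the figures quoted in the text (one death over roughly 3 million Uber test miles, against the National Safety Council’s 2016 figure of 1.25 deaths per 100 million vehicle miles):

```python
# Crude fatality-rate comparison using the figures quoted in the text.
# These inputs are assumptions drawn from the article, not audited data.
uber_deaths = 1
uber_miles = 3_000_000          # ~3 million AV test miles
baseline_rate = 1.25            # deaths per 100M vehicle miles (NSC, 2016)

# Normalise Uber's record to the same deaths-per-100M-miles unit.
uber_rate = uber_deaths / uber_miles * 100_000_000

# How many times worse Uber's rate looks than the human-driving baseline.
ratio = uber_rate / baseline_rate

print(f"Uber AV rate: {uber_rate:.1f} deaths per 100M miles")
print(f"Ratio vs baseline: {ratio:.1f}x")
```

The sample size caveat matters: one event over 3 million miles is far too little data to treat this ratio as a stable estimate, which is exactly the article’s point about being careful with numbers.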

The key reason why any decision to permanently suspend AV testing would be a mistake has to do with learning. As Matthew Syed sets out in his excellent book, “Black Box Thinking”, progress is underwritten by continual exposure to failure. He points out that we can travel safely in an airplane because tens of thousands of people died before us in air incidents, and because the aviation industry has a structured and systematic dedication to learning from its mistakes and weaknesses. Black boxes are fitted to aircraft to ensure there is a reliable source of evidence for analysing and understanding how errors occurred. Black boxes don’t prevent tragedies, but they do ensure that the resulting deaths carry significance and value.

Killing one risk can make another one worse.

We can be certain that Uber’s AV was full of recording equipment, constantly monitoring both the multiple sensors looking out into the external operating environment and the software interactions within. The first great risk arising from the tragic death of Elaine Herzberg is that a knee-jerk reaction takes place and we end up with a portfolio of risks far greater than if we had persisted with this learning experiment.

Human beings are frequently terrible drivers: we get distracted, intoxicated, aggressive and tired. We make selfish or stupid decisions; we often have poor observation and lousy machine-handling skills. We kill when we drive. Not only that, but we drive in a way that pollutes, through harsh use of the throttle and poor anticipation of other road-users’ actions. AVs are likely to produce far safer, more consistent and more reliable driving, reducing accidents, congestion and atmospheric pollutants. We must argue against the fallacy that a single death caused by an AV means we should stop trying to reduce the much bigger risk of all vehicle-related deaths.

The second big risk goes beyond Arizona.

The second great risk is that regulators stand back and scapegoat the technology companies. The State of Arizona consciously invited Uber to test there, selling itself as a lower-regulation environment than neighbouring California, with ideal weather conditions (little rain, no snow) for AVs to perform in. Regulators must strongly defend the taking of the first risk, of doing this testing at all. But they must also ensure that the risk is worthwhile and that deep, genuine, systematic learning takes place. Learning points must not be absorbed into one company’s intellectual property; they should be documented and shared for the benefit of the whole AV industry and wider society. Regulators must work to guarantee that the learning outcomes are not distorted or ignored, and that they do not carry such enormous reputational costs for the tech companies that an open and transparent learning culture cannot thrive.

This is not a time for knee-jerk reactions, but for ‘black box thinking’. There is a duty to make Elaine Herzberg’s death count, so that we all become smarter and safer.