19 December 2016

The Guardian: “Crash: how computers are setting us up for disaster”

The source of the problem was the system that had done so much to keep A330s safe for 15 years, across millions of miles of flying: the fly-by-wire. Or more precisely, the problem was not fly-by-wire, but the fact that the pilots had grown to rely on it. Bonin was suffering from a problem called mode confusion. Perhaps he did not realise that the plane had switched to the alternate mode that would provide him with far less assistance. Perhaps he knew the plane had switched modes, but did not fully understand the implication: that his plane would now let him stall. That is the most plausible reason Bonin and Robert ignored the alarm – they assumed this was the plane’s way of telling them that it was intervening to prevent a stall. In short, Bonin stalled the aircraft because in his gut he felt it was impossible to stall the aircraft.
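The mode-confusion trap can be made concrete with a toy sketch. Everything here is invented for illustration (the control-law names are real Airbus terminology, but the numbers and function are hypothetical): the point is that the same stick input produces different outcomes depending on which mode is active, and nothing forces the pilot to notice the switch.

```python
# Toy illustration of mode confusion (thresholds invented for the example).
STALL_AOA_DEG = 15.0  # hypothetical critical angle of attack

def apply_pitch(requested_aoa_deg: float, mode: str) -> float:
    """Return the angle of attack the aircraft actually adopts."""
    if mode == "normal":
        # Normal law: envelope protection silently caps the command,
        # so the pilot can pull full back and never stall.
        return min(requested_aoa_deg, STALL_AOA_DEG - 1.0)
    # Alternate law: the command passes through unchecked, and the
    # same habitual input can now take the aircraft past the stall.
    return requested_aoa_deg

print(apply_pitch(20.0, "normal"))     # capped at 14.0
print(apply_pitch(20.0, "alternate"))  # 20.0, above the stall threshold
```

A pilot whose intuition was trained entirely in the first branch has no gut-level model of the second, which is exactly the trap the article describes.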


Gary Klein, a psychologist who specialises in the study of expert and intuitive decision-making, summarises the problem: “When the algorithms are making the decisions, people often stop working to get better. The algorithms can make it hard to diagnose reasons for failures. As people become more dependent on algorithms, their judgment may erode, making them depend even more on the algorithms. That process sets up a vicious cycle. People get passive and less vigilant when algorithms make the decisions.”

Tim Harford

A problem that will only be exacerbated by the migration to self-driving cars on the roads and software assistants all around us: as more and more tasks are delegated to algorithms, people lose the ability to make fast and accurate decisions. Without practical experience, drivers of self-driving cars will not know how to react in the fringe situations where the AI fails: crashes may become fewer, but more severe and harder to prevent.

An alternative solution is to reverse the role of computer and human. Rather than letting the computer fly the plane with the human poised to take over when the computer cannot cope, perhaps it would be better to have the human fly the plane with the computer monitoring the situation, ready to intervene. Computers, after all, are tireless, patient and do not need practice. Why, then, do we ask people to monitor machines and not the other way round?
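The reversed arrangement can be sketched the same way. This is a minimal, hypothetical guardian-monitor loop, not any real avionics design: the human's command is passed through untouched in the normal case, and the computer only clips inputs that would cross a hard safety envelope.

```python
# Sketch of "human flies, computer monitors" (all values invented).
STALL_AOA_DEG = 15.0   # hypothetical critical angle of attack
MARGIN_DEG = 2.0       # intervene before the stall, not at it

def monitored_pitch_command(pilot_aoa_request_deg: float) -> float:
    """Pass the pilot's command through unless it approaches a stall."""
    limit = STALL_AOA_DEG - MARGIN_DEG
    if pilot_aoa_request_deg > limit:
        return limit  # the computer intervenes: cap the request
    return pilot_aoa_request_deg  # ordinary case: the human stays in control

print(monitored_pitch_command(5.0))   # passed through: 5.0
print(monitored_pitch_command(20.0))  # capped at 13.0
```

Under this design the human accumulates hand-flying practice on every flight, while the tireless part of the system does the one job computers are unambiguously good at: watching.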

Update: Another, older example from the same airline industry: The Hazards of Going on Autopilot.
