Two crashes involving the same type of aeroplane have sent shockwaves through an industry that is still, famously, the safest way to travel. Following the Boeing 737 Max disaster in Indonesia last October and the second crash in Ethiopia on 10th March, the entire fleet of what had until then been the company’s fastest-selling aeroplane has been grounded worldwide. The deaths of 346 people will tend to have that effect.
Air crash investigations are still under way to determine the causes, but cockpit recordings from the first crash and leaks from the second have all pointed to the same potential explanation: the Manoeuvring Characteristics Augmentation System (MCAS), designed to prevent the plane from stalling when making steep turns under manual control.
Sensors on the nose of the plane measure its angle of attack. If the nose pitches up too steeply, the wings risk losing lift and stalling, so the system trims the nose down to prevent this. It is the type of automated safety system you would expect to find in one of the most complex pieces of machinery in everyday use.
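In outline, the logic is simple. The sketch below is purely illustrative; every name, threshold and activation condition here is hypothetical, not a description of Boeing's actual implementation:

```python
# Illustrative sketch of an MCAS-style control loop.
# All names, thresholds and units are hypothetical; the real system's
# activation conditions, gains and limits are far more involved.

AOA_LIMIT_DEG = 15.0       # hypothetical stall-risk threshold
NOSE_DOWN_TRIM_DEG = 2.5   # hypothetical corrective trim increment

def read_aoa_sensor() -> float:
    """Stub: angle-of-attack reading from a nose sensor, in degrees."""
    raise NotImplementedError

def apply_nose_down_trim(degrees: float) -> None:
    """Stub: command nose-down trim on the horizontal stabiliser."""
    raise NotImplementedError

def control_step(under_manual_control: bool) -> None:
    # The system as described acts only when the pilots are flying manually.
    if not under_manual_control:
        return
    if read_aoa_sensor() > AOA_LIMIT_DEG:
        # Too steep a climb risks a stall, so push the nose down.
        apply_nose_down_trim(NOSE_DOWN_TRIM_DEG)
```

Everything in a loop like this hinges on one input: the sensor reading. That is where the first problem appears.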
But two problems seem to have led to the catastrophes, both of them worrying for the future of automation and artificial intelligence. The first relates to how data from those nose sensors is collated and arbitrated. It seems there was a conflict between the readings being reported. Boeing is evidently aware of this possibility, as it offers an indicator to alert pilots when the sensors disagree. Since the crashes, that indicator is being installed free of charge rather than being left as an option for individual airlines to choose and pay for.
The second is the automated response to that information, which puts the plane’s nose down. During the first crash, this reportedly happened some 20 times in a row. The flight crew were wrestling against the automated system, and even resorted to reading the training manual to try to discover how to overrule it.
In the wake of the final reports on these failures, there will undoubtedly be new rules applied and potentially legislation to back them up. But this should not be allowed to rest as a single issue relevant only to air travel, since there are fundamental principles in play that need to be considered and acted on across every domain of automation and AI.
The first issue is the reliability of data. It tends to be assumed that data from sensors in connected devices and the internet of things is accurate, reliable and trustworthy. Yet sensors can fail or be incorrectly calibrated, and analytics at the edge can misinterpret data or be running the wrong model.
Boeing would not need to create an indicator that this is happening unless it had discovered during the testing phase that sensors can disagree with each other. A flashing light in the cockpit now looks like a scant response to what could be an underlying engineering flaw.
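To make the problem concrete, here is a minimal sketch of the kind of cross-checking a consumer of redundant sensor data might perform. The thresholds, ranges and names are hypothetical and deliberately simplified, not a description of Boeing's system:

```python
from typing import Optional

# Illustrative cross-check of two redundant angle-of-attack readings.
# Thresholds, plausibility ranges and names are hypothetical.

DISAGREE_LIMIT_DEG = 5.0   # hypothetical maximum tolerated spread

def validate_aoa(left_deg: float, right_deg: float) -> Optional[float]:
    """Return a usable angle-of-attack value, or None if the pair
    cannot be trusted."""
    # Plausibility: reject physically implausible readings outright.
    for reading in (left_deg, right_deg):
        if not -20.0 <= reading <= 40.0:
            return None
    # Agreement: if the two sensors diverge, alert rather than act.
    if abs(left_deg - right_deg) > DISAGREE_LIMIT_DEG:
        return None   # raise the disagree indicator, suspend automation
    return (left_deg + right_deg) / 2.0
```

Notice that with only two sources there is no way to vote on which one is wrong; a third sensor would allow majority voting. That is exactly the sort of redundancy that cost pressure puts under strain.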
With the rise of connected cars and even automated vehicles, a serious debate is needed about acceptable levels of data quality and failure rates. This is especially critical as manufacturers seek to use the lowest-cost technology available in their machines. So while test vehicles may currently use cutting-edge Lidar to detect the road and other traffic, production vehicles may end up with far less capable sensors: major car companies are already reported to be baulking at the cost of the best radar in all but their most luxurious models.
During the coming mixed-traffic phase, how comfortable are we allowing lowest-cost automation to determine whether an accident is about to happen? And who shoulders the responsibility when insurers are already telling manufacturers they will be totally liable, yet those same car makers depend on downstream OEM equipment to support their connected vehicles?
The second issue is the automation of risk: deploying artificial intelligence to make decisions that could have fatal consequences. Inside those two Boeing 737 Maxes, flight crews found themselves fighting to save their passengers and themselves, battling a machine that appears to have decided it understood the risk to the flight better than they did.
Autopilots have long been used during relatively straightforward phases of flight, and many planes are even capable of automated take-offs and landings. Humans are still required to be present, however, presumably as the ultimate arbiters of the situation. Yet in Indonesia and Ethiopia they were overruled.
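One way to frame the design question is whether a human command merely pauses the automation or latches it off. The sketch below illustrates the latter principle in deliberately simplified form; the class and its behaviour are hypothetical, not any certified avionics design:

```python
from dataclasses import dataclass

# Illustrative "human as ultimate arbiter" rule: an explicit pilot
# input latches the automation off until the crew deliberately
# re-arms it. Hypothetical, for illustration only.

@dataclass
class TrimAuthority:
    automation_armed: bool = True

    def pilot_override(self) -> None:
        # One pilot input disengages the automation; it must not
        # silently re-engage and resume fighting the crew.
        self.automation_armed = False

    def automation_trim(self, nose_down_deg: float) -> float:
        # Automation commands pass through only while armed.
        return nose_down_deg if self.automation_armed else 0.0

    def rearm(self) -> None:
        # Re-engagement requires a deliberate crew action.
        self.automation_armed = True
```

Whether the human or the machine holds final authority, and under what conditions that authority transfers, is precisely what needs to be settled in the open.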
To give the public confidence in connected and automated vehicles, there will need to be clear resolution of this problem and a commonly-agreed set of rules. If you are stepping into a mode of transport and trusting your life to the machine itself, then you need to understand how it will interpret and arbitrate risk.
That is clearly going to require the sort of public conversation - and leadership - that is only just starting. If it sounds challenging, then just ask yourself this: the next time you are taking a flight, if the plane in use is a Boeing 737 Max, what will it take for you to feel safe?