Why being rational about AI is not rational

David Reed, director of research and editor-in-chief, DataIQ

Imagine the scene. It is the final of the Man-Machine Mixed Martial Arts (© David Reed) tournament. On the floor lies an unconscious human combatant. Standing over him, the humanoid robot turns to the crowd and bellows, “I will crush you all, puny humans!”

That concept sums up most discussions about artificial intelligence (AI) because it contains the core elements of what the industry is currently doing: trying to build machines that operate alongside humans; making those machines as human as possible; and fearing that the machines will be better than us.

Automation is nearly always the purpose of technology development, with increased productivity the desired result. From repetitive tasks, such as building cars in a factory, through to filtering customer service messages and automatically answering a password-change query with a link to the right page on a website, machines are increasingly doing jobs that used to be done by humans.

Machines run for longer, make fewer mistakes and, with the advent of AI, can take on an increasingly complex set of tasks. Deep learning means even processes that humans assume require their involvement, such as surgical procedures, could soon be carried out robotically. A human will likely still be present, ready to intervene, just as autonomous cars currently need a person on board to act as the fail-safe. But as the hours of operation without incident stretch into the millions, that requirement could disappear.

Humanising robots seems to be a parallel development path, not least because it will help to overcome resistance to adoption. Making a robotic carer that has empathy and is capable of smiling - quite a complex muscular exercise - will help to disguise the fact that human carers are being replaced.

But is this human face of machines really desirable? The problem with most attempts is that they fall into the “uncanny valley” - a term coined in 1970 by Japanese robotics professor Masahiro Mori and translated into English in 1978 by Jasia Reichardt. It describes the way that something which is nearly human becomes revolting to us precisely because it is nearly, but not quite, right. It is the same sensation that seeing a ghost might elicit - it looks human, but isn’t.

To reach the true potential of AI might require letting go of this pursuit and allowing machines to be like machines. In this scenario, however, two challenges arise. The first is our willingness to adopt a technology that might seem completely alien - a machine designed by a self-learning machine could end up resembling a snake, a spider, a jellyfish or something never seen before. Allowing it access to core human activities would require getting over our innate human fears.

The other challenge is data. All AI relies on the training data set that machine learning explores in order to identify optimal models. Any data drawn from a human source will reflect our biases and unconscious preferences. Just look at any social media channel and you will discover how irrational people really are.

According to psychologists, around seven out of ten of the decisions we regularly make have no rational explanation. That is why we buy junk food or designer handbags - a complex mix of impulses overwhelms our rational objections that they are not good for our health or bank balance. 

So is the answer to allow AI to be more human by becoming less rational? It is those flaws that we always say make a person human, after all. But that is not what the technology developers have in mind at all - they are just as obsessed with making AI the realm of pure reason as economists are with the idea that humans behave out of rational self-interest.

As a result, AI is heading towards a “cliff of correctness”, demonstrating a purity of behaviour and single-mindedness of purpose that will make it as hard for humans to accept as those nearly-but-not-quite-right robots currently feel. Until controllable algorithms are developed for irrational human traits, such as a sense of comic timing or boasting, it seems likely that machines will be feared not just for their super-human strength, but also for their super-human willpower.


An expert commentator on all things data, David has been editor of DataIQ since its inception in 2011.
