Have you spoken to a chatbot recently? Would you have even realised that your customer service query was being handled automatically? Or, looked at the other way round, have you wanted to interact with a human only to find there are none available?
In case you haven’t noticed, artificial intelligence (AI) is the flavour of the year. You can find it everywhere already - from those chatbots, without which men on dating sites would have nobody to talk to, through to new concept retail outlets where human interaction is rapidly becoming a thing of the past.
Except that many of the places where AI should be deployed have yet to be tackled. And much of what is being called AI isn’t anything of the sort. At a lunch this week with members of the DataIQ 100 and its sponsor, autoGraph, AI was the subject of discussion. In particular, whether it will bring brands closer to their customers or risk pushing them apart. Here are three of the main talking points which emerged:
Not everything automated is intelligent - it is easy to get drawn into the hype around a new technology or technique and start to use its language, often wrongly. Many of the examples of AI that have broken cover are actually something much simpler - algorithms. Over lunch, even the apparent complexity of these was punctured by one seasoned practitioner who noted that, underneath the high-tech gleam, they are often not much different from macros written in Excel.
To understand where they diverge, consider the example of Crisis Text Line, a US-based support service (much like Samaritans in the UK) for individuals who are in a very dark place and are considering suicide. To ensure high-risk texts got rapid intervention, the organisation wrote an algorithm based on what its workers assumed were the 50 trigger words indicating that someone might be about to attempt to end their life.
When the organisation turned AI loose on its database of 22 million texts, however, a completely different picture emerged, one which had not been coloured by human assumptions. The machine identified that use of the word “ibuprofen” was a much stronger indicator. That is machine-driven intelligence, rather than just automated human knowledge.
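The difference between the two approaches can be sketched in a few lines of code. This is a hypothetical illustration only: the trigger words, messages and risk labels below are invented, and the scoring is a crude frequency ratio, not anything Crisis Text Line has published. The point is the contrast: the rule encodes what humans assumed in advance, while the second function lets the data rank indicators on its own.

```python
# Hypothetical sketch: a hand-written trigger-word rule versus a simple
# data-driven ranking of risk indicators. All words, messages and labels
# here are invented for illustration.
from collections import Counter

# Rule-based approach: flag any text containing an assumed trigger word.
TRIGGER_WORDS = {"die", "suicide", "hopeless"}  # assumed list, not a real one

def rule_flag(text):
    return bool(TRIGGER_WORDS & set(text.lower().split()))

# Data-driven approach: rank words by how much more often they appear in
# high-risk texts than in low-risk ones (a crude lift score with smoothing).
def learn_indicators(labelled_texts):
    high, low = Counter(), Counter()
    for text, high_risk in labelled_texts:
        (high if high_risk else low).update(text.lower().split())
    return sorted(high, key=lambda w: high[w] / (low[w] + 1), reverse=True)

data = [
    ("i feel hopeless", True),
    ("took ibuprofen again tonight", True),
    ("where can you buy ibuprofen", True),
    ("great game tonight", False),
    ("feel great today", False),
]

print(rule_flag("i feel hopeless"))             # True - rule catches it
print(rule_flag("took ibuprofen again tonight"))  # False - rule misses it
print(learn_indicators(data)[0])                # "ibuprofen" - learned from data
```

On this toy data, the rule misses the second message entirely, while the learned ranking surfaces “ibuprofen” because it appears only in the high-risk texts - the same kind of signal a human-curated word list would never have included.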
Where have all the humans gone? - new concept branches of Barclays have dispensed with the layout familiar to generations of banking customers. Gone are the queues and in their place are dedicated zones and staff equipped with iPads. Members of the 100 noted that this risks creating a culture shock for Brits who expect to join a queue (and for one Italian it would have created confusion about which queue to jump…)
Online, there are no queues, of course, but also precious few humans. Chatbots are the first frontier of the AI-human interface as the machine edges ever closer to passing the Turing test (where a human is unable to determine during a conversation if the other party is a computer or not). They are also cheaper than customer service agents (even those based offshore), never need breaks or sick leave and don’t get grumpy.
Where a high proportion of interactions with an organisation are straightforward and transactional in nature, this looks like an obvious development. Whether it is a broadcaster’s recommendation engine or a retailer’s cross-promotion, the machine can sift the information, see patterns and make connections better than a human could.
But there is one major downside - frictionless relationships don’t stick. Without the emotional dimension of engaging with another human, however briefly, there is no difference between how one brand behaves compared to its rival, other than the slickness of its interface. As everybody levels up with automation and AI, not least by leveraging the growing number of shared engines out there, differentiation will become harder to achieve in the absence of a human face.
You break it, you own it - the moment of true AI breakthrough will come when machines are teaching machines. At that point, what is currently constrained to optimisation will start to drive innovation. The challenge is how to get human-powered organisations to allow that to happen when individuals no longer understand how the machine reached a decision.
Consider IBM Watson, which demonstrated the potential of AI by winning the US quiz show Jeopardy back in 2011. Since then, one of the strongest emerging use cases for that cognitive platform has been around healthcare, allowing Watson to digest all of the medical data available and come up with suggestions for treatment. As the 100 members noted over lunch, it still needs a human doctor to make the final choice.
One of the reasons for that is ethics. Humans absorb the differences between right and wrong from birth. Yet six years on from its quiz show triumph, it was only last week that IBM published an article based on interviews with 30 philosophers considering whether it needs to teach Watson ethics. Any six-year-old knows that if what you are doing is hurting somebody, you should stop (and that is still the basis of the Hippocratic oath).
The challenge AI raises is that of responsibility without understanding. If a driverless car has to be taught how to make life or death decisions about its passengers and passers-by, who is responsible for that teaching? In an era when the way those decisions get made by machines can no longer be explained to the humans who own them, one outcome might be a refusal to accept any responsibility.
The outcome will be either a post-ethics world in which machine actions no longer have human consequences, or the failure of AI to break out of its human-supporting role and realise its innovative potential. At the DataIQ 100 lunch, nobody felt up to resolving that dilemma. But somebody will have to - and soon.