Attending any event with a focus on artificial intelligence these days involves keeping two emotions in balance - excitement about the potential of the AI-powered future and a more primal dread that it will make you redundant by taking over your job. Until recently, journalism hadn’t figured on the AI hit list - a bigger concern has been how publishers can find sustainable revenues in the face of Google’s increasing dominance.
But the moment finally came on 1st June at the first European summit organised by the Language Big Data Alliance (LBDA), a global non-profit organisation working to ensure AI and big data have a positive impact on education and research, with the support of GTCOM. During a closing panel session, Stuart Petersen, chief strategy officer at Arria NLG, revealed that his company’s platform is already producing algorithmically-driven news stories for local and national media. The only solace was that journalists are involved in creating and tuning those algorithms (at least initially).
“We have moved from the idea of true or false to an era of news, alternative news and mirror news.”
This followed from the already-troubling insight provided by Eric Yu, CEO of GTCOM, a big data and artificial intelligence company whose slogan is "language connects the world, data inspires the future”. The company has developed machine translation, speech recognition, image recognition, semantic search and knowledge graph technologies, as well as big data analysis and visualisation, for its YeeSight big data eco-system, on which the vertical application JoveEye has been built for education and research purposes.
According to Yu, “we have moved from the idea of true or false to an era of news, alternative news and mirror news.” This is concerning, especially to a professional journalist, because of the impact this post-truth mindset can have.
Yu gave a prime example: “Tencent saw a RMB100 billion drop in its share price on the basis of two articles criticising its top-selling game, Honour of Kings, because too many young people were playing it.” Shares fell by 5.1%, even though Tencent had already announced plans to curb access to the game among younger consumers.
“We could draw a new hype cycle. Historical fact will tell us the future.”
What is notable, however, is that GTCOM’s YeeSight platform, which ingests vast amounts of data including social media feeds, predicted the event two days before it happened. To Yu, this reveals an important opportunity for researchers, decision-makers and investors. “What if we could expand the data sources being considered for the hype cycle? The shape of that curve has been the same for 20 years. If we look at who is talking about technology, new products and how consumers are engaging with them, if we aggregate that, we could draw a new hype cycle. Historical fact will tell us the future.”
That is a good example of why the ability to translate, interpret and analyse sources across multiple language groups is so important, which is the core mission of the LBDA. AI and machine learning are likely to play a useful and value-adding role here, as Dr Khalid Choukri, general secretary of ELRA, a not-for-profit language resource organisation, explained. “Human transcription of one hour of video takes between three and 50 hours. There are 400,000 translators globally, 150,000 of them in Europe, and the need is growing by 30% year-on-year, yet only 10% of data gets translated.”
Right at the start of the conference, Prof Hannelore Lee-Jahnke, co-chairman of the LBDA, gave another positive example. “Applied linguists at the University of Zurich are working to detect the differences between Austrian and German accents, which is very important in security.”
Her own job fear relates to what she described as “the loss of theory”, a concern first raised in 2008 by Chris Anderson, then editor of Wired. This reverses the classic theory-hypothesis-proof cycle of scientific research by starting with data and then seeing what stable models might shake out.
Lee-Jahnke argued that this fear is over-stated and gave a strong example of why. “Einstein in 1905 published his Theory of Relativity from relatively little data about the universe. Now we know a lot more and have validated his theory from big data,” she said.
It is easy to assume that concerns about AI are more rife among older workers, for whom age plays as much of a factor in whether an employee is valued as technology and the skills to use it do. But according to Ben Page, CEO of Ipsos MORI, that is not necessarily true. “Fear that technological progress is changing things for the worse is at 53% among Millennials compared to 46% among Baby Boomers. Paradoxically, both perceive that only technology can solve problems and make things better.”
This is a good example of what he termed “cognitive polyphasia”, where individuals draw on multiple sources of knowledge which may even be in conflict (or what George Orwell called “double-think”). He gave the example that 72% of consumers globally are anxious about their privacy and say they do not trust organisations with their personal data, yet their adoption and usage of social media seems unaffected by this concern.
Of the 1,000 leading companies globally in 1912, only four still exist.
This may be one reason for the “rise and rise of tradition, with institutions proving to be more resilient than we might think”. Technology certainly threatens jobs and entire companies - of the 1,000 leading companies globally in 1912, only four still exist, for example - but it also makes people yearn for the tried-and-tested, even when that is potentially dangerous: half of European consumers, for instance, yearn for a strong leader.
Yet this does not extend to experts. If you thought Michael Gove was an outlier for saying during the Brexit referendum campaign that, “Britons have had enough of experts”, Page offers the finding that 69% of consumers say experts don’t understand their lives, leading to what he described as “the crisis of the elites”. Elsewhere, Peter Turchin has drawn a link between “elite overproduction” - in which there are too many hyper-rich individuals competing for too few positions of power - and political turmoil.
"With AI-based, gamified learning, if somebody fails a level, they are motivated to start over.”
For a burst of Millennial positivity, the event did offer up Farhia Khan, data and AI technology specialist at Microsoft, who led with the view that, “AI and education have the power to elevate”, provided traditional models adapt. “They had a negative impact on the mindset of students, for example, if they got a bad grade they would feel bad. With AI-based, gamified learning, if somebody fails a level, they are motivated to start over and go up a level.”
That may in turn have consequences for human evolution. According to Khan, “the new generation is ‘phygital’ - it is unable to draw a distinction between the physical and digital worlds.” Necessary as such an outlook might be for the AI-powered future, it is hard not to worry that there could be negative consequences, whatever age you are.
For now, we all need to practise acceptance that technology is developing fast and having an impact on our world which may well be as much negative as it is positive. To ensure it stays on the right side of the balance sheet for humanity will require strong leadership and government regulation, even if we no longer believe that requires specialist skills. The more you are able to think like Donald Trump, in other words, the more likely you are to thrive. Just losing your job doesn’t seem such a big deal from that perspective…