Recently, I attended an ‘Introduction to AI’ presentation by Dr John Holder of Realities Centre in a bid to expand my knowledge on the subject. Though familiar with terms such as neural networks and generative adversarial networks, I wanted to fill any gaps in my basic understanding.
Dr John Holder is the chief technologist at Realities Centre, an augmented and virtual reality consultancy, and has spent many years studying and teaching virtual reality. He has a close interest in futurology, or future studies, and decided to share some of what he knows.
This is what I learnt from his presentation, which covered key moments from the history of artificial intelligence and ended with thoughts for the future.
It began with a timeline, which was useful as I wasn’t able to take away the comprehensive one on display at the AI: More than Human exhibition at the Barbican Centre.
The first entry was the Mechanical Turk from 1770 – an elaborate hoax of a chess-playing machine which in fact hid a person in a compartment under the chessboard. The next notable date is 1950, when Alan Turing devised the Turing Test, designed to see whether a machine could replicate intelligent human behaviour. Five years later, computer scientist John McCarthy was credited with coining the term ‘artificial intelligence.’
In 1961, the first industrial robot was put to work on a General Motors assembly line, sparing human workers from dangerous tasks, and four years after that, ELIZA, the first chatbot, began getting talkative. Twenty-two years ago, in 1997, chess champion Garry Kasparov lost to the IBM chess-playing computer Deep Blue.
Just before the turn of the millennium, in 1999, the robot Kismet, developed at the Massachusetts Institute of Technology, attempted to show emotion. It had eyes, eyebrows, ears and lips to help it do so. A decade later, Google began a project to develop self-driving technology. In 2011, IBM’s Watson beat human champions at the game show Jeopardy! and, the same year, Apple released its voice assistant Siri.
Five years after that, AlphaGo from Google’s DeepMind beat the world champion Lee Sedol at the ancient and intensely complex Chinese board game, Go. A year later, in 2017, AlphaGo Zero started from a blank neural network and used reinforcement learning to teach itself Go by playing matches against itself.
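Self-play reinforcement learning of the kind AlphaGo Zero used can be illustrated in miniature. The sketch below is a deliberately toy version, not DeepMind’s actual method: in place of a deep network and Monte Carlo tree search it uses a plain Q-table, and in place of Go it uses the far simpler game of Nim (take one or two stones from a pile; whoever takes the last stone wins). The principle is the same, though: a single agent improves by playing both sides of the game against itself.

```python
import random

def train(pile=10, episodes=20000, alpha=0.5, eps=0.2):
    """Learn Nim by self-play Q-learning (toy stand-in for AlphaGo Zero).

    Q[s][a] estimates the outcome (+1 win, -1 loss) for the player
    about to move in state s (stones remaining) after taking a stones.
    """
    Q = {s: {a: 0.0 for a in (1, 2) if a <= s} for s in range(1, pile + 1)}
    for _ in range(episodes):
        s = pile
        while s > 0:
            actions = list(Q[s])
            # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=Q[s].get)
            s2 = s - a
            if s2 == 0:
                target = 1.0  # taking the last stone wins for the mover
            else:
                # Negamax update: the opponent's best outcome is our worst.
                target = -max(Q[s2].values())
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2  # the "opponent" is the same learner, one move later
    return Q
```

After training, the agent discovers the classic Nim strategy of always leaving the opponent a multiple of three stones, purely from self-play feedback.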
In addition, medical breakthroughs have been made that use AI to identify irregular heartbeats and cancerous tumours that are difficult for humans to detect.
Turning to developments in autonomous vehicles and where we stand in 2019, Holder explained that there are levels of vehicle autonomy, from level 0 (no automation) to level 5 (complete automation). He said that right now we are at level 4, high automation. But this raises the complex problems of relinquishing control and overriding: if an autonomous vehicle is about to crash, should it hand control back to the human driver?
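The zero-to-five scale Holder described corresponds to the commonly cited SAE J3016 levels of driving automation. The lookup below is purely illustrative: the one-line descriptions are paraphrases, not the standard’s official wording.

```python
# SAE J3016 driving-automation levels (descriptions are informal paraphrases).
SAE_LEVELS = {
    0: "No automation: the human driver does everything.",
    1: "Driver assistance: one assist feature, e.g. adaptive cruise control.",
    2: "Partial automation: steering plus speed, driver must supervise.",
    3: "Conditional automation: car drives itself, driver takes over on request.",
    4: "High automation: no driver needed within a defined operating domain.",
    5: "Full automation: no driver needed anywhere, under any conditions.",
}

def describe(level: int) -> str:
    """Return the short description for a driving-automation level."""
    if level not in SAE_LEVELS:
        raise ValueError(f"automation level must be 0-5, got {level}")
    return SAE_LEVELS[level]
```

The key jump is between levels 2 and 3: below it the human is always responsible, above it the car is, which is exactly where the handover dilemma Holder described arises.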
"Technology with AI at its heart has the power to change the world."
Holder then showed a video from the Royal Society, titled ‘What is AI?’ which explained some of its use cases and potential pitfalls for society at large. It ended with the cautionary message: “Technology with AI at its heart has the power to change the world. The more of us that engage with shaping its development, the more chance we have to ensure a better fairer future with AI.”
www.youtube.com/watch?v=nASDYRkbQIY
This was very similar to Holder’s message as he said that the AI future we are heading towards is both a utopia and a dystopia.
He gave several examples of use cases for VR headsets that can be used for meditation, wellness, health and simulations of dangerous situations. Holder also explained a new term ‘XR,’ a blanket name that encapsulates mixed or merged reality, 360 video, AR and VR.
Previously, I was aware of two types of AI: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). The former allows a machine to perform one task extremely well, but the knowledge gained cannot be transferred to other problems. AGI, however, is more of a simulation of human intelligence and our ability to look at the bigger picture and apply the solution to one problem to an entirely different area. Holder added that there is a third type, artificial superintelligence (ASI), referring to a machine whose cognitive abilities far surpass those of a human.
"[The AI future] is scary and has a lot of great possibilities."
One particularly concerning issue that Holder highlighted was deepfakes: the superimposition of images onto source video, producing footage of people appearing to say things they never said.
Comedian and scriptwriter Jordan Peele showed an example of this by putting words into the mouth of former President Barack Obama.
www.youtube.com/watch?v=cQ54GDm1eL0
Earlier this year, OpenAI developed an AI model that can generate text. The creators of the model decided to delay the public release of the research for fear of misuse, such as the publication of fake news.
"Ethics has to be at the core."
As the world moves into the uncharted territory of AI and its limits are pushed and tested, Holder called for ethical standards to guide AI developers to do the right thing. Elon Musk’s Neuralink start-up launched last month to create implants that connect human brains to computer interfaces via artificial intelligence. One day they could allow paralysed people to control phones and computers.
There are now robots that can reduce the need for weed killer spray by microtargeting the pesky plants. Both of these applications could be great for humanity and the planet. But arguably they could both also be applied in nefarious ways.
Holder said that ultimately: “It’s scary and has a lot of great possibilities and ethics has to be at the core.”
DataIQ is a trading name of IQ Data Group Limited