AI pioneer turned "AI evangelist" Inma Martinez is palpably excited about the future of artificial intelligence. Nevertheless, she is also aware of its shortcomings when it comes to bias. “When you use deep learning, you are teaching a machine to achieve a task, not via a mathematical algorithm, but by thinking like a human brain,” she said.
Martinez said that this is considered "deep artificial intelligence" because it comprises layer upon layer of nodes that behave like brain neurons, trying to learn how to resolve something.
She also said that, because this type of biological thinking is more abstract than binary logic, how the machine understands that it has succeeded in the task assigned will depend on who codes the actual learning process for the machine. That is, who codes the “reward function” that confirms “a highly desirable state as an end goal”.
"You code how you understand the world to be. Your good may be my bad."
“You code how you understand the world to be. Maybe the way you understand things is completely polarised to someone else. Your good may be my bad,” she said. “Or your understanding of successful achievement may be of a lower quality control than mine.” This, she explained, is where bias can creep in.
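Martinez's point about reward functions can be made concrete with a toy sketch. The scenario, function names, and thresholds below are invented for illustration and are not from the article: two coders judge the same model output, but each has encoded a different idea of "success", so the same outcome earns a different reward.

```python
# Illustrative sketch (not from the article): two coders encode different
# "reward functions" for the same task - labelling a model output acceptable.
# The outcome dictionary and the criteria are invented for illustration.

def reward_strict(outcome):
    """Coder A: only fluent AND non-offensive output counts as success."""
    return 1.0 if outcome["fluent"] and not outcome["offensive"] else 0.0

def reward_lenient(outcome):
    """Coder B: any fluent output counts as success, ignoring offensiveness."""
    return 1.0 if outcome["fluent"] else 0.0

# A fluent but offensive output: B's "good" is A's "bad".
output = {"fluent": True, "offensive": True}

print(reward_strict(output))   # 0.0
print(reward_lenient(output))  # 1.0
```

The machine optimises whichever definition of success it is given, so a gap between the two functions above is exactly the kind of place Martinez suggests bias can enter.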
Dr Joanna Bryson, senior research fellow at the University of Bath whose research interests include AI development methodologies, gave her own definition of the word bias in a recent presentation at The Chartered Institute for IT.
“Bias is not a bad word. It just means regularities that you often detect.”
Bryson said: “Bias is not a bad word. It just means regularities that you often detect.” This was rephrased from a definition in one of her papers which stated, “biases are expectations derived from experienced regularities.” She illustrated her point with the example of knowing what "programmer" means, including that most are male.
She also outlined at least three sources of bias in AI. The first is implicit, whereby bias is absorbed automatically by machine learning from ordinary culture. The second is accidental, where it is introduced through ignorance by insufficiently diverse development teams. The third is deliberate, where bias is intentionally introduced as part of the development process, whether through planning or implementation.
The source of bias that Martinez referred to, introduced by the coder’s view of the world, would count as accidental. She also gave two real-life examples of bias appearing in AI. In October 2017, an African-American woman living in China realised that social media app WeChat had translated a neutral term meaning "black foreigner" to the N-word in negative contexts. WeChat quickly rectified the error and apologised.
Martinez also referred to Google teaching its machine about human garments. “It was only shown men’s shoes and then, the first time the machine saw a high heel, it didn’t know what the hell it was,” she said.
Machines built with machine learning will automatically contain our implicit biases
Both these instances could be examples of bias introduced implicitly, picked up automatically by machine learning, or accidentally, introduced by an unaware human. For one of her research papers, Bryson and her colleagues ran a test demonstrating that machines built with machine learning will automatically contain our implicit biases.
By analysing word associations across billions of words from the web, Bryson and her colleagues showed that the machine picked up a bias correlating words about flowers with pleasant terms and words about insects with unpleasant terms. That is a fairly innocuous association. However, the researchers also found that the machine more often associated a set of African-American names with unpleasant terms than a set of European-American names.
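The kind of association test described above can be sketched with toy numbers. The vectors below are invented 3-dimensional stand-ins, not real word embeddings (a real test would use embeddings trained on billions of words of web text); the point is only to show the core quantity: mean similarity to pleasant terms minus mean similarity to unpleasant terms.

```python
# Toy sketch of a word-association bias test. All vectors are invented
# for illustration; real tests use trained word embeddings.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to pleasant terms minus mean similarity to
    unpleasant terms; positive means a 'pleasant' lean."""
    return (np.mean([cosine(word_vec, p) for p in pleasant])
            - np.mean([cosine(word_vec, u) for u in unpleasant]))

# Hand-made 3-d vectors placing "flower" near "pleasant" and
# "insect" near "unpleasant".
vecs = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([1.0, 0.0, 0.1]),
    "unpleasant": np.array([0.0, 1.0, 0.1]),
}

print(association(vecs["flower"], [vecs["pleasant"]], [vecs["unpleasant"]]))  # positive
print(association(vecs["insect"], [vecs["pleasant"]], [vecs["unpleasant"]]))  # negative
```

Run over real embeddings with real word lists, the same arithmetic surfaces the flower/insect pattern the researchers describe, and, more troublingly, the name-based associations.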
But what can be done to avoid these biases, whatever the source, creeping into AI? Martinez said that, because deep learning is in its infancy, there is no proven methodology as to what should be avoided when we notice we are going down a biased path.
Martinez, however, also predicted that there will have to be companies that specialise in auditing artificial intelligence architectures, deep learning functions and algorithms, just as firms such as EY audit company accounts.
For Bryson, the problem lies in the difference between bias and stereotype, which is easy for a human to understand, but less so for a machine. In her paper, Bryson suggested that, “stereotypes are a subset of biases that our society has agreed should no longer exist,” and said there is no algorithmic way to discriminate stereotype from bias. However, she did come up with some solutions.
"We need tests, we need logging, and we need to iterate and improve.”
The lecturer said: “Maybe diversifying the workforce will help, but we’re all victims of our culture, so we need to work a little harder. We need tests, we need logging, and we need to iterate and improve.”
Bryson ended on a positive note by pointing out a very tangible benefit of AI in the HR and recruitment sector, where it is serving as a way to level the playing field. She said: “The HR people in large companies are able to find better people with AI that had been getting overlooked by their previous processes. They have said they are really happy about it.”
Inma Martinez spoke to DataIQ at the everywoman in Tech Forum. Dr Joanna Bryson was speaking at the Chartered Institute for IT.