Kasia Borowska is the co-founder of Brainpool, an artificial intelligence services consultancy that bridges the gap between corporations and cutting-edge academic research on AI. With a background in mathematics and cognitive science, she is now in close contact with private entities that use AI.
She believes that employing academically trained data scientists with a thorough understanding of automation technology will help companies build and deploy artificial intelligence with minimal negative bias.
According to Borowska, there are several problems associated with bias in artificial intelligence. The first is that bias can be so pervasive because it creeps in very early in the process. She said: “Bias often comes out from the pure fact that the data set that you are analysing does not represent the actual audience about whom you are trying to draw conclusions.”
She gave the example of artificial intelligence used in facial recognition being much more accurate at detecting white faces. This is because much of the training data contains far fewer images of faces of people of colour than of white faces.
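The kind of skew Borowska describes can be surfaced by disaggregating a model's accuracy by demographic group rather than reporting a single overall figure. A minimal sketch, using made-up group labels and toy predictions (none of which come from the article):

```python
def accuracy_by_group(records):
    """Compute per-group accuracy from (group, true_label, predicted_label) tuples.

    A single overall accuracy can hide large gaps between groups;
    disaggregating makes representation bias visible.
    """
    totals, correct = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

# Toy predictions: the hypothetical model is right 9/10 times on the
# well-represented group "A" but only 6/10 times on group "B".
records = (
    [("A", 1, 1)] * 9 + [("A", 1, 0)] * 1 +
    [("B", 1, 1)] * 6 + [("B", 1, 0)] * 4
)
print(accuracy_by_group(records))  # {'A': 0.9, 'B': 0.6}
```

Here the aggregate accuracy (75%) would look acceptable, while the per-group breakdown exposes the disparity the quote warns about.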
Another problem with bias in artificial intelligence is the blind faith often placed in AI and the expectation that algorithms will reach neutral, objective decisions. Furthermore, it is hard to deconstruct those algorithms to see exactly how problematic decisions are being made.
She said: “You can’t just trust algorithms especially when you don’t fully understand how they work. A lot of algorithms are acting like black box solutions and no one actually fully understands what is happening inside.”
It is likely that this blind faith comes from a lack of understanding of the mechanics of machine learning, with many companies deploying ready-made solutions with the help of self-taught data scientists.
“There are so many off-the-shelf, open source solutions at the moment that any of us could use just following some simple instructions after a couple of days’ training. But unless you really understand the background of what you are trying to build, you will not be able to build a sustainable AI solution,” Borowska said.
A further concerning issue is the concentration of so much understanding about AI among very few actors, which could result in the exclusion of the ‘human in the loop’. She explained this by saying: “It is really dangerous when one country or bigger companies take ownership of the whole AI landscape because once you get advantage in AI development, that advantage becomes exponential over time, to the level that humans won’t be able to interfere with it at some point.”