Kasia Borowska is the co-founder of Brainpool, an artificial intelligence services consultancy that bridges the gap between corporations and cutting-edge academic research on AI. With a background in mathematics and cognitive science, she now works closely with private-sector organisations that use AI.
She believes that employing academically trained data scientists with a thorough understanding of automation technology will help increase the creation and deployment of artificial intelligence with minimal harmful bias.
According to Borowska, there are several problems associated with bias in artificial intelligence. The first is that bias is so pervasive because it creeps in very early in the process. She said: “Bias often comes out from the pure fact that the data set that you are analysing does not represent the actual audience about whom you are trying to draw conclusions.”
She gave the example of artificial intelligence used in facial recognition being much more accurate at detecting white faces. This is because the training data typically contains far fewer images of the faces of people of colour than of white faces.
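To make that point concrete, here is a minimal, hypothetical Python sketch of the kind of per-group evaluation that surfaces such a disparity; the predictions, labels and group tags are invented placeholders, not anything Brainpool has published.

```python
# Hypothetical sketch: measuring the accuracy of a face-recognition
# classifier separately for each demographic group, so that a gap like
# the one Borowska describes cannot hide behind a single overall score.
# The predictions, labels and group tags below are illustrative only.

from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return accuracy computed separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data: the model is perfect on the well-represented group A and
# wrong every time on the under-represented group B, a disparity a
# headline accuracy figure of 50% would completely obscure.
predictions = ["match", "match", "no_match", "match", "no_match", "no_match"]
labels      = ["match", "match", "no_match", "no_match", "match", "match"]
groups      = ["A", "A", "A", "B", "B", "B"]

print(accuracy_by_group(predictions, labels, groups))
# {'A': 1.0, 'B': 0.0}
```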
Another problem with bias in artificial intelligence is the blind faith often placed in AI and the expectation that algorithms will reach neutral, objective decisions. Furthermore, it is hard to deconstruct those algorithms to see exactly how problematic decisions are being made.
She said: “You can’t just trust algorithms, especially when you don’t fully understand how they work. A lot of algorithms are acting like black box solutions and no one actually fully understands what is happening inside.”
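One widely used way to probe such a black-box model, offered here purely as an illustration rather than as Borowska’s or Brainpool’s method, is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops. A short sketch using scikit-learn on synthetic data:

```python
# Illustrative sketch: peering inside a "black box" model with
# permutation importance. Synthetic data stands in for a real
# decision-making problem; nothing here is specific to Brainpool.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Generate a synthetic classification problem.
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A random forest is a typical example of a model that is hard to
# inspect directly.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and record how much held-out accuracy drops:
# the features whose shuffling hurts most are the ones the model
# actually relies on when making decisions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```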
It is likely that this blind faith comes from a lack of understanding of the mechanics of machine learning, with many companies deploying ready-made solutions with the help of self-taught data scientists.
“There are so many off-the-shelf, open source solutions at the moment that any of us could use just following some simple instructions after a couple of days’ training. But unless you really understand the background of what you are trying to build, you will not be able to build a sustainable AI solution,” Borowska said.
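To illustrate how low that barrier has become, the following sketch trains and scores an off-the-shelf scikit-learn model on a bundled toy dataset in roughly a dozen lines. It is an illustrative example, not a workflow Borowska recommends; the point is how little understanding the code itself demands.

```python
# Illustrative example: a complete "off-the-shelf" machine learning
# workflow really can be this short, which is exactly why it is easy
# to deploy one without understanding what the model is doing.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A bundled toy dataset standing in for real business data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the features and fit a logistic regression in one pipeline.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(f"test accuracy: {model.score(X_test, y_test):.2f}")
# Note: a single headline accuracy figure says nothing about how the
# model behaves on groups under-represented in the training data.
```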
A further concern is the concentration of AI expertise among very few actors, which could result in the exclusion of the ‘human in the loop’. She explained: “It is really dangerous when one country or bigger companies take ownership of the whole AI landscape because once you get advantage in AI development, that advantage becomes exponential over time, to the level that humans won’t be able to interfere with it at some point.”
Brainpool, the organisation she co-founded, aims to tackle this lack of understanding by sending teams of AI researchers and industry professionals into companies and, through those teams, passing on the latest research on artificial intelligence.
In her words: “We’ve built Brainpool with the idea that AI solutions, especially cutting edge research, will be available to everybody. We are a network of 300 AI experts in machine learning, data science and computer science, giving access to top-level expertise and solutions to corporations in the form of consulting or access to our experts on a project basis.”
Borowska said that she and her colleagues always encourage clients to allocate sufficient time and resources to ensure that the AI they build is “ethically correct”. This involves pre-processing the data by looking at it from as many angles as possible, and then employing data scientists and researchers with an academic background who “can understand the mathematics behind the machine learning code”. As she said: “It is really important you understand what the machine learning is doing.”
Additionally, the people creating AI need to make sure that an algorithm is trained on data that contains a representative sample of the people or things it will be making real-life decisions about. Furthermore, there needs to be an understanding of our own unconscious biases and less reliance on machine-made decisions.
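A minimal, hypothetical sketch of such a representativeness check is below; the group labels and population proportions are invented for illustration, not drawn from any real deployment.

```python
# Hypothetical sketch: comparing the demographic make-up of a training
# set against the population the model will actually make decisions
# about, and flagging groups that are clearly under-represented.

from collections import Counter

def group_proportions(groups):
    """Return the share of each group in a list of group labels."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

# Invented group labels for each record in a training set.
training_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50

# Assumed make-up of the population the model will serve.
population = {"A": 0.60, "B": 0.25, "C": 0.15}

train_props = group_proportions(training_groups)
for group, target in population.items():
    actual = train_props.get(group, 0.0)
    # Flag a group if its training share falls well below its
    # population share (threshold chosen arbitrarily for illustration).
    flag = "UNDER-REPRESENTED" if actual < target * 0.8 else "ok"
    print(f"group {group}: training {actual:.0%} vs population {target:.0%} -> {flag}")
```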
Borowska said: “Every one of us is biased and everything we look at, we look at differently from everyone else. So knowing and remembering that and making sure that your biases don’t come across when building a machine learning system is really important. Every decision has to be made with a combination of data-driven decisions and human intuition.”
Essentially, with artificial intelligence, we need to keep an expert human in the loop.
Kasia Borowska was speaking on a panel at WBIDinners: Investment and algorithms: Tackling diversity and bias challenges in business.