Elif Tutuk believes that as the British public develops a greater understanding of artificial intelligence and its implications for society, those who are creating and programming AI need to have both a technology and human perspective to avoid blind spots.
Elif Tutuk, senior director of research, and her team at Qlik incubate new technologies including artificial intelligence and machine learning. As part of that, they have been looking at understanding human behaviour and the overall bias within the human brain that we introduce into every decision we make.
In her view, humans carry biases from the very start: when we ask a question, we do so based on a hypothesis and the experiences we have, which makes the question itself biased.
"It is important for AI to have the capability to use a data structure."
She said: “That is why it is really important for the AI to have the capability to use a data structure that has the context built in to analyse all of the data on behalf of the human, so that it can provide more objective outcomes.”
As well as developers introducing it implicitly, bias can also arise from incomplete or inaccurate training data. For this reason, Tutuk feels it is important to have diverse and inclusive teams when building machine learning algorithms, so that many eyes with many perspectives are looking out for blind spots.
She gave the example of the Amazon hiring algorithm as a case in which this did not happen. The AI was trained on data from historical hires, and given that tech is a male-dominated field, the hiring algorithm saw male candidates as more likely to be successful purely because there were more men working in the company than women. There was a blind spot because the algorithm wasn’t trained to recognise female success.
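The mechanism behind this kind of blind spot can be sketched in a few lines. The figures and records below are entirely synthetic and hypothetical, invented for illustration; they show how a naive model estimating "likelihood of success" from skewed historical hires ends up ranking candidates by gender rather than ability:

```python
# Synthetic, hypothetical historical records of (gender, hired).
# Past hires skew male because the workforce skewed male,
# not because male candidates performed better.
history = (
    [("male", True)] * 80 + [("male", False)] * 20
    + [("female", True)] * 5 + [("female", False)] * 15
)

def hire_rate(gender):
    """Naive model: estimate P(hired | gender) from historical data."""
    outcomes = [hired for g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

male_score = hire_rate("male")      # 0.8
female_score = hire_rate("female")  # 0.25
# The model scores male candidates higher purely because of the
# historical imbalance in the training data -- the blind spot
# Tutuk describes.
```

Nothing in the data says men are better candidates; the disparity in scores comes entirely from how few women appear in the historical record, which is exactly why Tutuk argues for diverse teams reviewing both the data and the algorithm.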
"You must have diversity and inclusion across the teams."
“We have to approach it both from the human and technology perspective. On the human side, you need to make sure you have diversity and inclusion across the teams who are building machine learning algorithms so that everyone can take a look at the blind spots and make sure that the algorithms do not have an agenda,” she said.
This is an important issue for technologists to consider as the public gains greater awareness of artificial intelligence and its potential ramifications for society. In a survey commissioned by Qlik, 64% of British adults said they know that AI is a machine or computer that simulates human intelligence, not a physical robot. One quarter of the 2,005 people surveyed said they think AI is fundamentally a force for good, while 13% said they think it is a force for evil.
The British public also has some understanding of bias within AI, with 41% thinking that AI in its current state is biased and 38% blaming this bias on inaccurate data. In addition to having diverse and inclusive teams in the room when the algorithms are being created, Tutuk feels that technology has a part to play.
"To avoid biases, the data layer used by algorithms needs to change."
She said: “As part of the technology incubation we are doing, one of the things we are spinning out is how we can build the technology to avoid biases in the results that are being produced by AI. For that, I think the data layer that those algorithms are using needs to change. We think that the associative data technology that Qlik has brings a huge opportunity. Because for AI to learn it needs to have understanding and visibility of all of the data.”