“We are staring at a very, very exciting future. This is going to augment us as a species. This is our next big leap.” So says Lee Baker in relation to the potential enhancements automated decision-making can bring to the human race.
Baker, commercial director of open source machine learning platform Seldon, was speaking as part of a panel discussing big data and artificial intelligence in financial services at Barclays Rise innovation space. He was joined by Justin Lyon, CEO of Simudyne, and Mo Haghighi, head of developer ecosystems for UK and Ireland at IBM.
Lyon went a step further, claiming that, by relying on models to make our decisions, humans could rid themselves of biases such as racism in as little as 50 years. This will be possible because models can easily be trained not to produce racist or sexist results, he argued, whereas it is not so easy with human beings. In his view, humans can hide their racism, and it is very hard to change the way someone thinks. Code, on the other hand, is easy to change.
“If you’ve trained the model on data that has implied racism in it, we will eventually be able to uncover it, address it and fix it. Over time, the decisions from that system will become better and better. With AI, we can potentially get rid of racism completely in 50 years. That’s why I can’t wait until almost all the decisions are made by the machines,” Lyon said.
He went on to add that, “simulation, artificial intelligence and machine learning is the way forward for our species to expose, identify and then remove [racism and sexism].” Haghighi agreed, saying that, “AI can help us to get rid of all those biases. That is definitely going to happen.”
The moderator broached the issue of implicit bias, asking how that problem could be solved. We live in a biased society, the data we collect is biased, and so we build models that reinforce that bias. “It is a major, major problem. The inherent data is biased and so reflects societal biases. How do we solve this?” he asked. Lyon denied this was the case: “I don’t agree that everyone is unintentionally doing this.”
"The inherent data is biased and so reflects societal biases. How do we solve this?”
While this may be true, there are several well-documented cases of implicit bias in training data causing algorithms to make bad decisions. Famously, the COMPAS recidivism prediction algorithm used in Florida was analysed for accuracy by ProPublica, which found that it correctly predicted reoffending only 61% of the time.
Black defendants were frequently predicted to be at higher risk of reoffending and white defendants at a lower risk than they actually were. The scores allocated by the algorithm were used to determine whether the defendant should be detained or released before their trial and were often taken into account for sentencing decisions.
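ProPublica's core finding was about error rates differing by group. As a rough illustration of the kind of check involved (using invented data, not the COMPAS dataset), the false positive rate per group can be computed like this:

```python
# Illustrative sketch only: hypothetical data, not the COMPAS dataset.
# ProPublica compared error rates across groups; a minimal version of
# that check computes the false positive rate for each group.

def false_positive_rate(predictions, outcomes):
    """Share of people who did not reoffend but were flagged high-risk."""
    fp = sum(1 for p, o in zip(predictions, outcomes) if p and not o)
    negatives = sum(1 for o in outcomes if not o)
    return fp / negatives if negatives else 0.0

# Hypothetical risk predictions (True = flagged high-risk) and
# observed outcomes (True = reoffended), split by group.
groups = {
    "group_a": ([True, True, False, True], [False, True, False, False]),
    "group_b": ([False, True, False, False], [False, True, False, False]),
}

for name, (preds, actual) in groups.items():
    print(name, false_positive_rate(preds, actual))
```

A large gap between the groups' false positive rates, as in this toy example, is exactly the pattern ProPublica reported: one group disproportionately flagged as high-risk despite not reoffending.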
Former US attorney general Eric Holder said in 2014 of algorithmic risk assessments: “Although these measures were crafted with the best of intentions, I am concerned that they may inadvertently undermine our efforts to ensure individualised and equal justice.”
A study by Carnegie Mellon University found that Google’s online advertising systems displayed ads for high-paying jobs more often if the user was thought to be a man. Surely there was no malicious intent: Google is merely reflecting a status quo in which women in the US take home 82% of what men earn.
Image recognition software is notorious for struggling to recognise people of colour. A Taiwanese-American woman was repeatedly asked by her camera if she was blinking. An HP computer tracked the face of a white woman, but not that of her black colleague. And the Google Photos app tagged black people as gorillas.
I would hazard a guess that no-one hoped for these negative results, but the models did adversely affect particular groups of people, simply because the algorithms were not trained to recognise a diverse range of skin tones and eye shapes.
Baker did not agree with Lyon that removing humans from the decision-making process will make the world a more just place. He said: “The optimistic view of ‘I am just going to remove the human in the loop and things are going to get better’ is naïve.”
He gave the example of a model trained on behalf of an insurance provider that wanted to identify its best policy holders. The model concluded that people called Maureen were better policy holders than people called Dave. “Basically, the model used name as a proxy for gender. That’s problematic for a couple of reasons.” The first is that it is illegal to sell a woman a cheaper insurance policy than a man; the second is that it reinforces the bias in the training data, which needs to be interrogated instead.
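Baker's point is that dropping a protected attribute does not remove it if another column encodes it. A minimal sketch of a proxy check, using made-up records and a crude, hypothetical `proxy_strength` measure:

```python
# Illustrative sketch with invented data: even if "gender" is dropped
# before training, the "name" column can act as a proxy for it.

records = [
    {"name": "Maureen", "gender": "F", "good_policy_holder": True},
    {"name": "Maureen", "gender": "F", "good_policy_holder": True},
    {"name": "Dave",    "gender": "M", "good_policy_holder": False},
    {"name": "Dave",    "gender": "M", "good_policy_holder": False},
]

def proxy_strength(rows, feature, protected):
    """Fraction of rows where the feature value consistently maps to one
    protected-attribute value -- a crude check for proxy variables."""
    mapping = {}
    consistent = 0
    for row in rows:
        key = row[feature]
        if key not in mapping:
            mapping[key] = row[protected]
        if mapping[key] == row[protected]:
            consistent += 1
    return consistent / len(rows)

print(proxy_strength(records, "name", "gender"))  # 1.0: name fully encodes gender here
```

A score near 1.0 means the model can recover the protected attribute from the supposedly neutral feature, which is exactly how name ended up standing in for gender in Baker's example.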
A “slipshod, nefarious actor” would be less worried about servicing and correcting for that bias. “My concern is that machine learning gets applied in a slipshod fashion that actually impacts the customer or the end actor in a negative fashion,” said Baker.
He said that bias in algorithms is one of the issues he is keen to see raised and, for this reason, he participates in the All Party Parliamentary Group on AI. He said: “We are ambassadors for AI and sometimes we need to call out our own community.”
Baker repeated the unofficial tenet of computer and data science: “garbage in, garbage out”. If bias goes in, then bias comes out, and so, “if bias is inherent in your training data, then the model will only reinforce that. So we need now to establish checks and balances to make sure that is not happening.”
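One simple "check and balance" of the kind Baker describes can run before any model is trained: compare positive-label rates across groups in the training data itself, since skewed base rates are one way societal bias gets baked in. A minimal sketch, with invented data and group names:

```python
# Illustrative pre-training audit on invented data: if the positive-label
# rate differs sharply by group, a model trained on this data is likely
# to reinforce that gap ("garbage in, garbage out").

from collections import defaultdict

training_data = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def label_rates(rows):
    """Positive-label rate per group in (group, label) training rows."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in rows:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

print(label_rates(training_data))  # group_a: 0.75, group_b: 0.25 -- a gap worth interrogating
```

A check like this does not prove discrimination, but a large gap flags the training data for exactly the kind of human interrogation Baker is calling for.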
DataIQ is a trading name of IQ Data Group Limited