Symptoms and solutions to machine learning bias

Toni Sekinah, knowledge-based content manager, DataIQ

Professor Allison Gardner, head of foundation year science at Keele University, thinks we need to press the pause button on the implementation of algorithms, especially within the public sector, because of the grave potential consequences. “We need to take a breath. If we do not direct and regulate AI properly it has the potential to further embed and enhance inequality and discrimination within society,” she said.

Symptoms of algorithmic bias

Whilst Gardner understands the good that AI can do in fields such as medicine, with image recognition to detect cancerous growths, she also pointed out a number of instances in which machine learning can be very harmful.

She mentioned Amazon scrapping its AI recruitment tool when it was found to be biased against women, and Microsoft’s AI chatbot Tay, which began to spew hate speech and had its plug pulled in under 24 hours. Gardner also referenced Joy Buolamwini, who realised that facial recognition software is often not taught to recognise black faces. The Compas recidivism algorithm, used by some US judges to determine the likelihood of a defendant reoffending and to influence bail decisions, was found to be biased against people of colour.

These are well-known examples, but Gardner also mentioned that black university applicants in the UK were 21 times more likely to have their UCAS applications investigated for fraud.

“We need to be careful when we talk to the public about algorithms because there is evidence that they believe that algorithms are 100% accurate and trust them more than humans. Often they aren't any better. They are just cheaper and quicker.”

As well as bias based on race, Gardner said that the use of an algorithm can be biased against poor people. She spoke of an open-source algorithm developed by academics to determine whether a home visit should take place after an instance of child abuse has been reported, a visit which could lead to a child being removed from the home.

“Rich people don't get reported. The evidence is, there is no class or ethnic difference as to who neglects and abuses their children,” she said. Because richer children are much less likely to be reported to the authorities as being at risk of abuse or neglect, the algorithm would disproportionately remove poorer children from their homes.

“From these biased algorithms you do then get the discrimination coming out. When there is human decision making involved, that can inject bias at the start,” stated Gardner.

She said that this problem is down to a lack of diversity both in the datasets and in the people who develop them, because a lack of diversity among developers can lead to unconscious bias.

To solve this problem, the professor is calling for a change in the way that computer science students are recruited to university courses, and in the way that computer scientists are recruited by software companies. This is to ensure that the student bodies and workforces learning about and programming algorithms are representative of society. She also believes there is a strong case for regulation.

“We also need to introduce regulation, algorithmic impact assessments to ensure that we have diversity, citizens’ panels and algorithmic audits to make sure that we do not have these errors that we are seeing and do not embed inequality in our systems going forward.”

Professor Allison Gardner was speaking at The Ethics of Artificial Intelligence at The Microsoft Reactor London.

Knowledge-based content manager, DataIQ
Toni is the senior features editor responsible for the origination of DataIQ's interviews, articles and blogs.