Alejandro Saucedo, chief scientist at the Institute for Ethical AI and Machine Learning, believes that standards need to be brought in to ensure the responsible development of machine learning. He gives his view as to why bias cannot be completely eradicated and what we should try to do instead.
The Institute for Ethical AI and Machine Learning is a UK-based research centre, undertaking research into responsible machine learning systems. The need for more responsible AI is pressing as new examples of machine learning systems and algorithms unfairly disadvantaging certain groups of society are appearing regularly.
In 2016, a beauty pageant in which the contestants were judged by algorithms resulted in almost only people with lighter skin tones being deemed beautiful. More recently, it emerged that full-body scanning machines used by the Transportation Security Administration in the US are more likely to trigger false alarms when scanning black women.
According to Saucedo, following best practice can be a way to avoid unwittingly introducing harmful bias to algorithms. He said: “If you don’t follow best practice, you are going to end up having these biases, not only hurting your business but also potentially causing disadvantage to society.”
He feels there are several reasons why AI-based decision making can lead to negative outcomes for certain sections of society. One concern is that the term ‘artificial intelligence’ is overused, meaning that the people deploying it may not have a comprehensive understanding of what they are doing.
He said: “We’re in an age where people want to introduce machine learning because it is a bit of a buzzword. I think most of them are just digitising their systems, whether that is putting it in the cloud or introducing some kind of automation.”
As a result, some companies may be bringing in automation simply to ‘keep up with the Joneses’ in their industry, without fully knowing what they are doing or how they should do it.
This lack of comprehension, and the failure to recognise the need for it, must be tackled, said Saucedo. He said: “We need to convince people that they need to understand what they are doing. It’s crazy that people are literally just pushing stuff because they read a tutorial on Medium.”
There is also the problem of profits coming before people, with businesses that do understand AI deploying it with the sole purpose of making or saving more money.
And there is the unavoidable issue of bias being introduced to algorithms and machine learning processes from the bias present all around us. He said: “It is impossible to remove bias from algorithms [although] it is possible to remove a reasonable level of undesired biases.”
He said that while bias is inevitable, it can be exacerbated by “class imbalance” in the training data where there are more examples of one thing than another. This class imbalance can be caused by ‘sample bias’ where the data used to train the model doesn’t reflect the environment it will operate in.
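One common mitigation for class imbalance of this kind is to resample the training data so that under-represented classes appear more often. The sketch below is illustrative only, not a method attributed to Saucedo or the Institute: it randomly duplicates minority-class examples until the classes are balanced (all names and the toy data are hypothetical).

```python
import random
from collections import Counter

def oversample_minority(examples, labels, seed=0):
    """Duplicate minority-class examples at random until all classes
    contain as many examples as the largest class."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    balanced_x, balanced_y = list(examples), list(labels)
    for cls, count in counts.items():
        members = [x for x, y in zip(examples, labels) if y == cls]
        for _ in range(target - count):
            balanced_x.append(rng.choice(members))
            balanced_y.append(cls)
    return balanced_x, balanced_y

# Toy dataset: four "approve" decisions but only one "deny" --
# a model trained on this would rarely learn to predict "deny".
xs = [[0.9], [0.8], [0.7], [0.6], [0.1]]
ys = ["approve", "approve", "approve", "approve", "deny"]
bx, by = oversample_minority(xs, ys)
print(Counter(by))  # both classes now have 4 examples
```

Note that oversampling only rebalances the classes the data already contains; it cannot correct sample bias, where whole groups or situations are missing from the data in the first place.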
To mitigate these issues, Saucedo and his team at the Institute are working towards building industry standards. He said: “We’re currently contributing to a project with the IEEE on an algorithmic bias consideration standard.” He went on to say: “One line of code can have a significant effect. Standards and best practices are unsexy and don’t sound cool but that’s what we need and what we need to introduce.” He did, however, acknowledge that nobody is going to use standards unless they are beneficial to them.
Research centres, businesses and society at large will have to think about how to incentivise the use of standards for fairer AI all round.
Alejandro Saucedo was speaking on a panel at WBIDinners: Investment and algorithms: Tackling diversity and bias challenges in business.