Alejandro Saucedo, chief scientist at the Institute for Ethical AI and Machine Learning, believes that standards need to be brought in to ensure the responsible development of machine learning. He gives his view as to why bias cannot be completely eradicated and what we should try to do instead.
The Institute for Ethical AI and Machine Learning is a UK-based research centre focused on responsible machine learning systems. The need for more responsible AI is pressing, as new examples of machine learning systems and algorithms unfairly disadvantaging certain groups in society appear regularly.
In 2016, a beauty pageant in which the contestants were judged by algorithms resulted in almost only people with lighter skin tones being deemed beautiful. More recently, it was found that full-body scanning machines used by the Transportation Security Administration in the US are more likely to trigger false alarms for Black women.
According to Saucedo, following best practice can be a way to avoid unwittingly introducing harmful bias to algorithms. He said: “If you don’t follow best practice, you are going to end up having these biases, not only hurting your business but also potentially causing disadvantage to society.”
He feels there are several reasons why AI-based decision making can lead to negative outcomes for certain sections of society. One concern he has is that the term ‘artificial intelligence’ is overused. This is a problem because it means that the people deploying the technology may not have a comprehensive understanding of what they are doing.
He said: “We’re in an age where people want to introduce machine learning because it is a bit of a buzzword. I think most of them are just digitising their systems, whether that is putting it in the cloud or introducing some kind of automation.”
As a result, some companies may be bringing in automation simply to ‘keep up with the Joneses’ in business terms, without fully knowing what they are doing or how they should do it.
This lack of comprehension, and the lack of recognition of the need for it, must be tackled, said Saucedo. He said: “We need to convince people that they need to understand what they are doing. It’s crazy that people are literally just pushing stuff because they read a tutorial on Medium.”
DataIQ is a trading name of IQ Data Group Limited
Copyright © IQ Data Group Limited 2024