Dr Bertie Müller, senior lecturer in computer science at Swansea University, has a keen interest in the propagation of bias in machine learning systems as well as technology ethics and artificial intelligence. His view is that because AI systems can be proactive, autonomous and can evolve in ways that haven’t been thought of, there need to be international initiatives to govern the data that goes into the systems.
There are several examples of how vulnerabilities in autonomous systems can lead to problems. In one instance, a stop sign defaced with stickers caused an autonomous vehicle’s image recognition software to misread it as a 40mph speed limit sign. Another group of researchers added background noise to a command given to a smart home assistant’s voice recognition software, dramatically altering what the software understood the instruction to be.
These examples could be seen as innocuous until we think of the potential consequences if these systems were deployed and attacked on a large scale. Misidentification could lead to the wrong person being admitted to restricted premises if facial recognition is used as a security measure. In addition, autonomous cars could speed up when they are not supposed to, putting their passengers, other road users and pedestrians at risk of injury or death.
Müller spoke of the need to make the public aware of these kinds of vulnerabilities. “It is no good just having experts know about these vulnerabilities. Everyone needs to know them because everyone uses these devices. They are ubiquitous and we need to know about this otherwise we can’t mitigate it,” he said.
One form of mitigation, called differential privacy, actually turns the concept of introduced noise or interference mentioned above to a protective purpose. Müller said: “This is a way to deal with privacy by introducing some random data and thereby making it less likely for individuals to be reidentified through the data that is collected.”
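The idea behind differential privacy can be illustrated with the classic Laplace mechanism applied to a count query. The sketch below is a toy example, not code from any particular product; the function names are illustrative only.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    """Answer 'how many records satisfy the predicate?' with noise added.

    A count query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise of scale 1/epsilon gives
    epsilon-differential privacy: the released number stays useful in
    aggregate, but no individual's presence can be confidently inferred.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

A smaller epsilon means more noise and stronger privacy; a larger epsilon means a more accurate answer but weaker protection, which is exactly the trade-off Müller describes.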
The lecturer also referred to an approach developed at Cornell University called federated learning, which allows AI researchers to gather data and store it locally rather than centrally. Only certain anonymised parts of the data required for a specific task are then transmitted. “We need to address privacy and this is the way to do that,” he said.
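The core loop of federated learning can be sketched in a few lines: each client computes a model update on data that never leaves the device, and only those updates are combined centrally. This is a minimal FedAvg-style toy using 1-D linear regression; the function names and learning rate are illustrative assumptions, not any specific framework's API.

```python
def local_update(weights, data, lr=0.1):
    """One gradient step of 1-D linear regression (y ≈ w*x + b) on
    data held locally by a single client. Raw data stays on-device."""
    w, b = weights
    n = len(data)
    grad_w = sum((w * x + b - y) * x for x, y in data) / n
    grad_b = sum((w * x + b - y) for x, y in data) / n
    return w - lr * grad_w, b - lr * grad_b

def federated_round(weights, clients):
    """Average the clients' locally computed weights (FedAvg-style).
    Only model parameters, never the clients' data, reach the server."""
    updates = [local_update(weights, data) for data in clients]
    w = sum(u[0] for u in updates) / len(updates)
    b = sum(u[1] for u in updates) / len(updates)
    return w, b
```

The server sees only averaged parameters, so a global model can be trained without centralising anyone's raw records, which is the privacy property Müller highlights.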
He also called for some sort of certification to let users know if their data is being collected, what kind and for what purpose when they use a particular product or service.
For Müller, going forward, artificial intelligence and autonomous systems should be reliable, responsible and resilient. He said: “We need to have systems that we know are reliable, that are built to withstand cyber threats, and we need to have ways to respond to cyber-attacks, even on a smaller scale in our homes.”
An idea that Müller has been working on is that of dynamic governance, or dynamic ethical approval of systems on a continuous basis, because he feels that an ethics committee ticking a box is an outdated form of governance for evolving autonomous AI systems.
He said: “We need to have something like an MOT of governance, of ethical approval that is repeated maybe once a year, or more often depending on how critical a system is, how sensitive the data is, how vulnerable the system is.”
Dr Bertie Müller was speaking at Digital Transformation Expo Europe.