“Architects learn laws, policy and how to work with governments when they are undergraduates. I think we need to be doing this.” So said Dr Joanna Bryson, a reader in artificial intelligence at the University of Bath, to a room of computer scientists. She sees clear parallels between AI and physical architecture: AI systems, too, are deliberately designed and built. Artificial intelligence, she argued, is simply intelligence that has been deliberately built, much as a building is a structure that has been deliberately built, as opposed to a cave, a structure created by nature.
“We need to maybe license undergraduates like the architects get licensed and we maybe need to get some of our products inspected like buildings get inspected,” said Bryson. She stressed the importance of this by illustrating the calamitous approach to building construction before regulation was mandatory. Back then, anyone with enough money and land could erect a building, but eventually society decided that it wasn’t right that buildings were falling down on people. “AI products are now falling down on people, too, and affecting everyone,” she said.
Despite Facebook’s dubious record on privacy, Bryson suggested that computer scientists can build safe AI by following one of the ubiquitous social network’s strategies. “They no longer have releases,” she said. “Basically, anyone in the company can edit any of the code base.” This works, Bryson explained, because the company runs constantly operating computational systems that watch for the behaviours it doesn’t want to see, something AI practitioners have also done in the past.
“Back in the 1990s in systems AI, we used to call this ‘cognisant errors’. You can’t always prevent errors, but since you know you are going to make errors, you can be checking for errors. Facebook can do it, so can we,” she said.
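The “cognisant errors” idea Bryson describes can be made concrete with a small sketch: instead of assuming a deployed system is correct, wrap it in a monitor that checks every output against known invariants and records violations for human review. All names here (`Monitor`, `scorer`, the invariant) are illustrative, not from any real framework.

```python
# A minimal sketch of the "cognisant errors" pattern: you can't always
# prevent errors, but you can continuously check for them.
from typing import Any, Callable

class Monitor:
    def __init__(self, system: Callable[[Any], Any],
                 invariants: dict[str, Callable[[Any, Any], bool]]):
        self.system = system
        self.invariants = invariants          # name -> check(input, output)
        self.violations: list[tuple[str, Any, Any]] = []

    def __call__(self, x):
        y = self.system(x)
        for name, check in self.invariants.items():
            if not check(x, y):
                # Don't crash: record the error so it can be audited later.
                self.violations.append((name, x, y))
        return y

# Toy example: a "confidence score" system whose output should stay in [0, 1].
def scorer(x: float) -> float:
    return x * 0.7        # silently buggy for large inputs

monitored = Monitor(scorer, {"score_in_range": lambda x, y: 0.0 <= y <= 1.0})
monitored(0.5)            # in range, no violation recorded
monitored(2.0)            # out of range: logged, not fatal
print(len(monitored.violations))  # -> 1
```

The design choice mirrors the quote: the monitor never pretends errors won’t happen; it makes them observable.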
AI or data may come to be seen as similar to transport or electricity - a public good to which everybody should be connected. As such, small groups and minorities have to be adequately represented in the databases. When this doesn’t happen, bias can enter the systems. Recent research found that facial recognition technology is markedly less accurate for women and for people with darker skin than it is for white men.
Bryson also suggested that data could be seen as being like the environmental eco-system. “There is only one world and we all have to share it, so maybe we need to think about data from a more international perspective,” she suggested. One consequence of that view is trans-national interdependence, and certain countries have already taken bold steps on AI.
Bryson also referenced the European Parliament’s recommendation to the European Commission that it consider a special legal status of “electronic person”, which might be useful for dealing with problems of accountability for learned systems.
Bryson doesn’t think that should happen and is of the view that AI itself should never be held responsible. Assigning responsibility or personhood to an artefact - something that has been built - would enable powerful individuals or organisations to create the ultimate fall guy and avoid legal and tax liabilities. “Try suing a bankrupt robot,” said Bryson.
“When we put a human on the witness stand, we’re guessing whether or not they were diligent because we can tell if we think they are lying. AI doesn’t allow us to do that.” However, she said that AI does facilitate mandating transparency in a very honest accounting mechanism.
“We can go and look at the code, any logs, and the reliability. We can also just look at the outcomes, just as we do with humans when we audit them.”
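The outcome auditing Bryson describes can be sketched simply: compute a system’s accuracy separately for each demographic group and flag large gaps, the kind of check that would surface the facial-recognition disparity mentioned earlier. The data and the 0.2 threshold below are invented for illustration.

```python
# A hedged sketch of auditing a system by its outcomes: per-group accuracy.
def per_group_accuracy(records):
    """records: iterable of (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == actual)
    return {g: correct[g] / totals[g] for g in totals}

# Toy audit log: the system is far more accurate for group "A" than "B".
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 1),
]
acc = per_group_accuracy(records)
gap = max(acc.values()) - min(acc.values())
print(acc)        # {'A': 1.0, 'B': 0.25}
print(gap > 0.2)  # True: a gap this large warrants investigation
```

Notably, this audit needs no access to the model’s internals, only its outcomes, which is exactly the point of the quote.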
Dr Joanna Bryson was speaking at BCS, The Chartered Institute for IT.