There is growing awareness of police use of facial recognition software. This is good. More awareness should lead to more debate and discussion about the appropriate use of this technology and its implications for privacy, data protection, civil liberties and, to a degree, personal autonomy.
Recently, an office worker from Cardiff took South Wales Police to court over its use of facial recognition technology, claiming his privacy was unlawfully violated. Ed Bridges told a three-day hearing at Cardiff Civil Justice and Family Centre that the use of the technology breaches data protection and equality laws and left him distressed. He explained in an opinion piece that, without his consent or knowledge, his facial features had been scanned on two occasions – once on a high street and again at a protest – despite his not being suspected of a crime or on a criminal watchlist.
Bridges' concerns are that there has been no public consultation or parliamentary debate about the rollout, no warning was given, automated facial recognition technology has been shown to be inaccurate for certain demographics, and the police are violating our right to privacy. The police, government and technology companies cannot brush these issues aside.
Also last week, a story resurfaced about a facial recognition trial conducted by the Metropolitan Police in Romford at the beginning of the year. Once he became aware of the cameras, a passer-by covered his face and was promptly issued a £90 fine for acting suspiciously. The story was initially covered only by one local newspaper, The Romford Recorder, and one national paper, but on 16th May other nationals ran it, probably prompted by the surfacing of a BBC video of the incident. The video sparked considerable debate on social media.
In the Romford case, the Metropolitan Police said that it had informed passers-by of the facial recognition trial with large posters.
In the US, Democratic Congresswoman Alexandria Ocasio-Cortez exposed the flaws of facial recognition technology, and how it can exacerbate racial bias in the criminal justice system, at a congressional hearing. With Joy Buolamwini, founder of the Algorithmic Justice League, as an expert witness, it was explained that facial recognition algorithms are least effective on women and people of colour, and exclude people of different gender expressions.
Ocasio-Cortez asked: “So we have a technology that was created and designed on one demographic that is only most effective on that demographic, and they’re trying to sell it and impose it on the entirety of the country?”
The hearing took place on the same day it was announced that Amazon shareholders had voted against banning the use of its facial recognition technology by government and law enforcement. The ramifications of this are alarming.
Public consultation around facial recognition technology is overdue. And trust is priceless. Trust is built by openness, honesty, transparency and keeping one’s word. Police forces, governments and big tech companies are not bastions of these principles, but they could and should be.
It looks like public and political awareness is growing, but development and rollout of this technology is racing ahead. Debate, discussion and regulation that covers the use of this technology will need to catch up.