Dr Nicolas Malleson is a professor of spatial science at the Centre for Spatial Analysis at the University of Leeds. He and his colleagues at the university have done extensive work on artificial intelligence and data science in policing.
He began his presentation by giving a simple explanation of machine learning. “It’s just about learning patterns from data. It doesn’t have any other idea about what it is doing or why it is doing it. Typically machine learning that we mostly use in policing is not intelligent, which is probably quite a good thing,” he said.
To set the scene, Malleson said that evidence-based policing is “pretty well embedded”. This is already a step beyond intuition-based policing, and taking it further by applying machine learning and data science to that evidence would deliver data-driven policing.
An example of data-driven policing is the prediction of crime locations. Malleson said that it has been well established that certain crimes cluster in space and time. This results in a phenomenon called ‘near-repeats’. “It’s when one crime happens and the risk for neighbours and the people surrounding that crime is even higher for a short time period and it decays over time,” he said.
Using the near-repeat hypothesis, police and researchers can try to predict where and when a subsequent crime is likely to happen. A program that models this can highlight high-risk areas, showing police where to focus their resources.
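As an illustration of the idea (and not the specific model Malleson described), a toy near-repeat scorer might sum contributions from recent nearby crimes, with each contribution decaying exponentially in both distance and time. The events, bandwidths and grid in the sketch below are invented purely for illustration.

```python
# A minimal sketch of near-repeat risk scoring, assuming exponential decay
# in both space and time. All numbers below are illustrative assumptions.
import math

# Hypothetical recorded crimes: (x, y) location in metres, t in days.
events = [(100.0, 200.0, 0.0), (120.0, 210.0, 2.0), (400.0, 50.0, 5.0)]

DIST_BANDWIDTH = 100.0  # metres: spatial reach of the elevated risk (assumed)
TIME_BANDWIDTH = 7.0    # days: how quickly the elevated risk fades (assumed)

def near_repeat_risk(x, y, t, events):
    """Sum of exponentially decaying contributions from past events."""
    risk = 0.0
    for ex, ey, et in events:
        dt = t - et
        if dt < 0:  # ignore events that haven't happened yet
            continue
        dist = math.hypot(x - ex, y - ey)
        risk += math.exp(-dist / DIST_BANDWIDTH) * math.exp(-dt / TIME_BANDWIDTH)
    return risk

# Score a coarse grid at day 6 and report the highest-risk cell.
cells = [(cx, cy) for cx in range(0, 500, 100) for cy in range(0, 300, 100)]
scores = {cell: near_repeat_risk(cell[0], cell[1], 6.0, events) for cell in cells}
hotspot = max(scores, key=scores.get)
print(f"Highest-risk cell at day 6: {hotspot}, score {scores[hotspot]:.3f}")
```

Real systems calibrate such kernels against historical data; the point of the sketch is only the decaying space-time contribution that the near-repeat hypothesis implies.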
Malleson pointed to another opportunity offered by algorithms in policing: they can help officers make decisions by processing far more data than a human ever could, which can speed up decision-making. Another opportunity is the ability to assess the possible harm indicated by a phone call reporting a crime.
He did say, however, that there are several issues with the algorithms generating such predictions; these issues are solvable, but so far have not been dealt with well.
One issue is imperfect and skewed input data. He said: “We know that police records aren’t a representative sample especially of demographic groups for whom reporting rates vary. So, any algorithms that we build on this data are going to share the same kinds of biases.”
Another issue is the feedback loop. “If you’ve got an area that has got a lot of crime, probably all of that crime data is going back into the algorithm. The algorithm is going to tell the police to go there. If you have police in the area, they’re likely to find crime there.”
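A toy simulation makes the mechanism concrete. Assume, purely for illustration, two areas with identical true crime rates, where crime is only recorded when police are present and patrols are sent to whichever area has the most recorded crime:

```python
# A toy illustration of the feedback loop, under invented assumptions:
# two areas with the SAME underlying crime rate, crime only recorded
# where police patrol, and each round's records feeding the next decision.

TRUE_CRIMES = 50    # actual crimes per period in EACH area (equal by assumption)
records = [30, 20]  # historical records: area 0 slightly over-represented

for period in range(5):
    target = records.index(max(records))  # 'algorithm': patrol the high-record area
    detected = [0, 0]
    detected[target] = TRUE_CRIMES        # crime is only recorded where police are
    records = [records[i] + detected[i] for i in range(2)]
    print(f"period {period}: patrolled area {target}, records now {records}")
```

Area 0's records grow every period while area 1's never change, even though both areas have identical true crime: the data appears to confirm the algorithm's choice.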
Furthermore, algorithms used in policing are typically tuned to err towards false positives, because the feared error is declaring a person a low risk to society when they are actually high risk. There is also the problem of automation bias: people assume that a decision made by a machine is better than one made by a human, and defer to the algorithm rather than to their own judgement informed by experience.
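The trade-off can be seen with a minimal, invented example: lowering a risk threshold removes false negatives at the price of more false positives. The scores and outcomes below are hypothetical.

```python
# Hypothetical risk scores and true outcomes (1 = actually high risk).
scores = [0.2, 0.35, 0.4, 0.55, 0.6, 0.8, 0.9]
truth  = [0,   0,    1,   0,    1,   1,   1]

for threshold in (0.7, 0.5, 0.3):
    flagged = [s >= threshold for s in scores]
    false_neg = sum(1 for f, t in zip(flagged, truth) if t and not f)
    false_pos = sum(1 for f, t in zip(flagged, truth) if f and not t)
    print(f"threshold {threshold}: false negatives={false_neg}, false positives={false_pos}")
```

As the threshold drops from 0.7 to 0.3, false negatives fall from two to zero while false positives rise from zero to two: a system designed never to miss a high-risk individual necessarily flags some low-risk ones.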
Accountability is an issue that follows from the previous one. Malleson posed the question: can police officers be accountable for a decision if they are just doing what the computer tells them? This problem is exacerbated when algorithms are opaque black boxes.
Malleson said: “Even with an open source machine learning implementation, things like neural networks are very difficult to unpick. It is able to make predictions for you but it is very difficult to know why it is coming up with those predictions. Even if you can see the source code, you might not be able to explain to someone why it is telling you what it is.”
Linked to that is the issue of transparency. “Many public and private sector implementations are typically closed source because they belong to companies so they are commercial secrets. If you can’t unpick a lot of these algorithms, how can we scrutinise the decision making in the same way?”
In the midst of these concerns, Malleson thinks changes can be made to improve the situation. If biases are understood, they can be mitigated. If more policing algorithms were open source, police forces would be able to test them on their own data. Police also need a better understanding of the algorithms; this would enable them to recognise their weaknesses and uncertainties, and therefore to tell a good prediction from a bad one.
Dr Nicolas Malleson was speaking at ‘Big Data, AI and the future of crime and justice’, hosted by the University of East London in association with the British Society of Criminology, Crime and Justice Statistics Network.