Dr Nicolas Malleson is a professor of spatial science at the Centre for Spatial Analysis at the University of Leeds. He and his colleagues at the university have carried out extensive research into artificial intelligence and data science in policing.
He began his presentation by giving a simple explanation of machine learning. “It’s just about learning patterns from data. It doesn’t have any other idea about what it is doing or why it is doing it. Typically machine learning that we mostly use in policing is not intelligent, which is probably quite a good thing,” he said.
To set the scene regarding machine learning in policing, Malleson said that evidence-based policing is "pretty well embedded". Evidence-based policing is a step beyond intuition-based policing; applying machine learning and data science to that evidence takes it further still, delivering data-driven policing.
An example of data-driven policing is the prediction of crime locations. Malleson said that it has been well established that certain crimes cluster in space and time. This results in a phenomenon called ‘near-repeats’. “It’s when one crime happens and the risk for neighbours and the people surrounding that crime is even higher for a short time period and it decays over time,” he said.
Using the near-repeat hypothesis, police and researchers can try to predict when a subsequent crime will happen. A program that models this can highlight high-risk areas and show the police the areas in which to focus their resources.
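The near-repeat idea can be sketched as a simple risk score that is highest immediately after, and close to, a recorded crime, and that decays with distance and elapsed time. The following is a minimal illustration only; the function name, exponential decay form, and scale parameters are assumptions for the sketch, not Malleson's actual model:

```python
import math

def near_repeat_risk(events, x, y, t, space_scale=200.0, time_scale=7.0):
    """Estimate near-repeat risk at location (x, y) and time t (days).

    Each past event (ex, ey, et) contributes a risk that decays
    exponentially with distance (metres) and elapsed time (days).
    The decay scales here are illustrative, not calibrated values.
    """
    risk = 0.0
    for ex, ey, et in events:
        if et > t:
            continue  # only past events contribute risk
        dist = math.hypot(x - ex, y - ey)
        age = t - et
        risk += math.exp(-dist / space_scale) * math.exp(-age / time_scale)
    return risk

# A burglary at (0, 0) on day 0: the house next door the following day
# carries a much higher risk than the same spot a month later.
events = [(0.0, 0.0, 0.0)]
print(near_repeat_risk(events, 50, 0, 1))   # nearby, soon after
print(near_repeat_risk(events, 50, 0, 30))  # same place, a month later
```

Summing such scores over a grid of locations is one simple way a program could highlight the high-risk areas Malleson describes.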
Malleson pointed out another opportunity offered by algorithms in policing: they can help officers make decisions by processing far more data than a human ever could, which can speed up those decisions. Another opportunity is the ability to assess the possible harm involved in a phone call reporting a crime.
He noted, however, that the algorithms generating such predictions raise several issues that can be solved but have not been dealt with well so far.
One issue is imperfect and skewed input data. He said: “We know that police records aren’t a representative sample especially of demographic groups for whom reporting rates vary. So, any algorithms that we build on this data are going to share the same kinds of biases.”
Another issue is the feedback loop. “If you’ve got an area that has got a lot of crime, probably all of that crime data is going back into the algorithm. The algorithm is going to tell the police to go there. If you have police in the area, they’re likely to find crime there.”
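This feedback loop can be illustrated with a toy simulation (the numbers and detection rates below are entirely hypothetical assumptions, not real figures): two areas have identical true crime rates, but patrols follow recorded crime, and patrolled areas record more of the crime that occurs, so recorded crime piles up wherever the data first pointed.

```python
import random

random.seed(42)

TRUE_RATE = 0.5          # both areas have the same underlying daily crime rate
DETECT_PATROLLED = 0.9   # chance a crime is recorded where police patrol
DETECT_OTHER = 0.2       # chance a crime is recorded elsewhere

recorded = [1, 0]        # area 0 starts with one extra recorded crime

for day in range(1000):
    # "The algorithm is going to tell the police to go there":
    # patrol whichever area has the most recorded crime so far.
    patrolled = 0 if recorded[0] >= recorded[1] else 1
    for area in (0, 1):
        if random.random() < TRUE_RATE:  # a crime actually occurs
            detect = DETECT_PATROLLED if area == patrolled else DETECT_OTHER
            if random.random() < detect:
                recorded[area] += 1

print(recorded)  # recorded crime concentrates in area 0 despite equal true rates
```

A single extra recorded crime at the start is enough to lock the patrols, and therefore the data, onto one area.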
Furthermore, algorithms used in policing are often designed to err towards false positives, because of the fear of declaring a person a low risk to society when they are actually high risk. There is also the problem of automation bias: people assume that a decision made by a machine is better than one they would make themselves, and defer to the algorithm rather than their own experience-informed judgement.
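The trade-off behind that design choice can be seen in a toy risk classifier (the scores and labels below are made up for illustration): lowering the threshold for flagging someone as high risk cuts false negatives, but at the cost of more false positives.

```python
# Hypothetical risk scores (0-1) with true labels (1 = actually high risk).
cases = [(0.2, 0), (0.3, 0), (0.45, 1), (0.5, 0), (0.7, 1), (0.9, 1)]

def confusion(threshold):
    """Count false positives and false negatives at a given flag threshold."""
    fp = sum(1 for score, label in cases if score >= threshold and label == 0)
    fn = sum(1 for score, label in cases if score < threshold and label == 1)
    return fp, fn

for threshold in (0.6, 0.4):
    fp, fn = confusion(threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```

Dropping the threshold from 0.6 to 0.4 removes the missed high-risk case but flags a low-risk person instead, which is exactly the direction in which a false-negative-averse system is tuned.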