The Problems With Using Artificial Intelligence And Facial Recognition In Policing

Recently, I’ve been reading about the effectiveness of predictive policing, which can be used to prevent crime and terrorism.

It seems only a matter of time until we employ artificial intelligence in this way on a larger scale. Funding cuts in the UK have meant that over 7,000 neighborhood officers have lost their jobs in three years, putting the public at risk.

Some have considered alternatives such as private security or pooling resources for an organized neighborhood crime watch. Another option – already in use in the US – is predictive policing, which aims to reduce street crime.

Predictive policing uses historical crime data to forecast where crime is likely to happen, mapping these areas as ‘hot spots’.
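
To make the idea concrete, here is a minimal Python sketch of one common hot-spot approach: bin historical incident coordinates into a grid and flag any cell whose incident count crosses a threshold. The coordinates, cell size, and cut-off below are illustrative assumptions, not values from any real system.

```python
from collections import Counter

# Illustrative incident coordinates (latitude, longitude) -- not real data.
incidents = [
    (41.8810, -87.6230), (41.8811, -87.6229), (41.8809, -87.6231),
    (41.9002, -87.6500), (41.8810, -87.6230),
]

CELL = 0.001  # grid resolution in degrees (roughly 100 m); an assumed value

def cell_of(lat, lon):
    """Snap a coordinate to its grid cell."""
    return (round(lat / CELL), round(lon / CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

HOT_THRESHOLD = 3  # an assumed cut-off for flagging a cell as 'hot'
hot_spots = [cell for cell, n in counts.items() if n >= HOT_THRESHOLD]
print(hot_spots)  # the grid cells a department would treat as hot spots
```

Production systems layer far more on top – recency weighting, density smoothing, crime-type models – but this grid-and-count core is the basic idea behind a hot-spot map.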

More interestingly, it can also score and flag the people judged most likely to be involved in violence. In early evidence, David Robinson and Logan Koepke of Upturn studied ten vendors of predictive policing systems and found that their software fed social media activity, connections and relationships, social events and school schedules, and commercially available data from data brokers into crime-prediction models.

As well as mapping out possible criminal hot spots, the software could also assign a numerical threat score and a color-coded threat level (red, yellow, or green) to any person a police department searched for.
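
The vendors do not publish their scoring mechanics, but the banding itself is simple to picture. Here is a hypothetical sketch, assuming a 0–500 score range and arbitrary cut-offs (the real thresholds are not disclosed):

```python
def threat_level(score):
    """Map a numeric threat score to a color band (range and cut-offs assumed)."""
    if score >= 400:
        return "red"     # highest-risk band
    if score >= 200:
        return "yellow"  # elevated-risk band
    return "green"       # lowest-risk band

print(threat_level(480))  # -> red
print(threat_level(150))  # -> green
```

Note that everything of consequence lives in the hidden score, not the colors: two people one point apart can land in different bands.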

Neither the way these tools make predictions nor the way police departments actually use them is transparent.

The authors found predictive policing in use in 35 locations across the United States. The Chicago Police Department, for example, began using its subject list in 2013; it remains the most prominent example of a person-based policing system known to the public to date.

The police could also apply such systems to terrorism prevention. Existing human systems are so overburdened that errors can have grave consequences.

The country’s most senior counter-terrorism officer, Neil Basu, recently stated that police forces are no match for the threat of Islamist and extreme far-right terrorism: there are currently 700 live terrorism investigations.

The UK is also home to over 23,000 jihadists on a watch list. A review of the Manchester bombing by David Anderson QC found that intelligence about suicide bomber Salman Abedi was misinterpreted before he struck, closing off an opportunity to prevent the attack. Artificial intelligence systems may therefore provide much-needed assistance in monitoring terrorists.

In the context of white-collar crime, companies are already creating software to predict the ‘typical’ face of a white-collar financial criminal. In effect, such software applies machine learning techniques to quantify the ‘criminality’ of an individual’s face.
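
To see how thin such a ‘criminality’ score can be, here is a toy sketch: a logistic regression trained on flattened pixel values of labelled face images. Everything below is synthetic and hypothetical; it illustrates the kind of pipeline these claims rest on, not any real product.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for 64x64 grayscale face images, flattened to vectors.
X_train = rng.random((200, 64 * 64))
y_train = rng.integers(0, 2, 200)  # hypothetical 'criminal' / 'not' labels

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model emits a confident-looking probability for any face, even though
# the labels above are pure noise -- which is exactly the danger of reading
# such a number as 'criminality'.
new_face = rng.random((1, 64 * 64))
print(model.predict_proba(new_face)[0, 1])
```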

Applying the same approach in the terrorism space to aid arrests, however, would be problematic.

Many have voiced concerns that stop-and-search powers are already used unfairly against those who look visibly Muslim. Others have argued that artificial intelligence is likely to reduce bias, since police and judges tend to arrest and sentence according to preconceived notions.
