
Need for responsible AI in policing and crime detection

Any advances in AI for surveillance and predictive policing need to be accompanied by the use of responsible AI to maximise the benefits and minimise the harms of the technology

Published: Jul 25, 2023 04:46:15 PM IST


A few decades ago, artificial intelligence (AI) was little more than the subject of science fiction films and novels. With the technological advances of the last 20 years, AI is now part of sectors such as transportation, finance, energy, and healthcare. AI systems use algorithms to analyse vast amounts of data and inform decisions. By learning from human behaviour, the software can also mimic it and eventually forecast future actions. As the technology has grown in capability and accuracy, law enforcement agencies around the world have begun using AI technologies and solutions to combat crime.

AI has facilitated the creation and delivery of innovative police services, connected police forces to citizens, built trust, and strengthened relationships with communities. Smart solutions such as biometrics, facial recognition, smart cameras, and video surveillance systems are all growing uses of AI.

According to a recent study by Deloitte, smart technologies such as AI could help cities reduce crime by 30 to 40 percent and reduce response times for emergency services by 20 to 35 percent.

AI systems are frequently used to analyse images, text, and speech, assess risks and determine probabilities, generate content, optimise processes, and automate workflows.

Many countries worldwide use facial recognition and biometrics, in-car and body-worn cameras for police, drones and aerial surveillance, and crowdsourced crime-reporting and emergency applications to ensure public safety. License plate recognition systems are machine learning systems that read the numbers on plates and link them to vehicle owners. Several law enforcement agencies around the world use facial recognition technology to help identify suspects, victims, missing persons, and even witnesses.
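To make that pattern concrete, the sketch below shows the read-and-lookup logic a plate recognition system rests on. This is a minimal illustration, assuming OpenCV, pytesseract, and the underlying Tesseract engine are installed; the image file name and the owner registry are invented placeholders, and real deployments add plate detection, far stronger preprocessing, and validated database lookups.

```python
# Minimal sketch of OCR-plus-lookup plate recognition.
# Assumes OpenCV and pytesseract are installed; registry is hypothetical.
import cv2
import pytesseract

def read_plate(image_path: str) -> str:
    # Load the frame and convert to grayscale to improve OCR contrast
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Binarise with Otsu's method so characters stand out from the plate
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # --psm 7 tells Tesseract to treat the image as a single line of text
    text = pytesseract.image_to_string(thresh, config="--psm 7")
    # Keep only alphanumeric characters, normalised to upper case
    return "".join(ch for ch in text if ch.isalnum()).upper()

# Hypothetical registry mapping plate numbers to owner records
REGISTRY = {"KA01AB1234": "vehicle owner record #4821"}

plate = read_plate("checkpoint_frame.png")  # placeholder file name
print(plate, "->", REGISTRY.get(plate, "no match"))
```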

AI systems also enable large volumes of text and audio recordings to be analysed, so that information can be recognised, processed, tagged, and extracted from unstructured data sets. Law enforcement agencies are also exploring crime prediction using surveillance data to strengthen security. Machine learning and big data analysis make it possible to navigate huge amounts of data on crime and terrorism to identify patterns, correlations, and trends. Cameras at different junctions and checkpoints capture images and data; AI then analyses these images and predicts crime by examining relationships among different nodes in the data sets. In undercover operations, law enforcement agencies may use content generation systems to create a fake online persona to infiltrate a criminal network; this can involve using a deepfake image as a profile picture and a text generation system to create profile data.
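As a sketch of how such tagging and link analysis might look in practice, the example below pulls named entities out of free-text reports and joins co-mentioned entities into a graph whose well-connected nodes can then be inspected. It assumes the spaCy library with its small English model and networkx are available; the report snippets and entity names are invented for illustration, not drawn from any real case.

```python
# Minimal sketch: extract entities from unstructured reports, then link
# entities mentioned together into a graph. Assumes spaCy (with the
# en_core_web_sm model) and networkx are installed; reports are invented.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")

reports = [
    "John Smith was seen near Central Station on Monday.",
    "A vehicle linked to John Smith was parked outside Acme Pawn Shop.",
]

graph = nx.Graph()
for text in reports:
    doc = nlp(text)
    # Keep people, places, organisations, and facilities as graph nodes
    ents = [e.text for e in doc.ents if e.label_ in {"PERSON", "GPE", "ORG", "FAC"}]
    # Entities mentioned in the same report get a co-occurrence edge
    for i, a in enumerate(ents):
        for b in ents[i + 1:]:
            graph.add_edge(a, b)

# Nodes with many connections may mark entities worth closer review
print(sorted(graph.degree, key=lambda kv: kv[1], reverse=True))
```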


While their potential is undeniable, AI systems have limitations and can have negative consequences. For example, in a 2018 trial conducted by the London Metropolitan Police, facial recognition flagged 104 previously unknown people as suspected of committing crimes; only 2 of the 104 matches were accurate, a precision of under 2 percent. From the moment a police officer wrongly identifies a suspect until the officer realises the error, significant coercive action can occur: the suspect can be arrested, brought to a police station, and detained. This can be terrifying, with irreversible consequences, including human rights violations. Facial recognition systems have also demonstrated bias against people of colour; Facebook's image recognition algorithm, for example, labelled Black people 'primates'.

Law enforcement agencies must take a responsible approach to AI innovation throughout the AI life cycle, in every context where they interact with AI solutions and models, to maximise the benefits and minimise the risks of these systems. AI models that learn from vast amounts of data can absorb human values present in their training data, and these values may manifest in the models' outcomes as bias and subjectivity.

As AI systems rely more on machine learning and deep learning models, and potentially become more autonomous, the accountability gap for constitutional violations threatens to become broader and more profound. This raises the question of who should be held accountable and responsible for incorrect outcomes that affect people's lives.

Any advances in AI for surveillance and predictive policing need to be accompanied by responsible AI, i.e., adhering to principles of good policing, following ethics and regulatory processes, and respecting human rights law, so as to maximise the benefits of the technology and minimise its harms. Whether AI is used with IoT devices such as sensors and CCTVs or for digital contact tracing, it is essential to be sensitive to how people feel about data collection and data use. As the capabilities of artificial intelligence continue to grow, police services cannot utilise AI to its full potential in policing without a certain level of public trust and acceptance.

Dr Shruti Mantri, Associate Director, ISB Institute of Data Science (IIDS).

[This article has been reproduced with permission from the Indian School of Business, India]
