Recently, the Argentinian government announced the creation of the Unidad de Inteligencia Artificial Aplicada a la Seguridad (UIAAS), a unit designed to use artificial intelligence (AI) for the prevention, detection, investigation, and prosecution of crimes. The plan involves using machine-learning algorithms to analyze historical crime data and predict future felonies. It is also expected to deploy facial recognition software to identify “wanted persons”, monitor social media, and analyze real-time security camera footage to detect suspicious activities.
This development has sparked mixed reactions. Some worry it could pose a threat to citizens’ rights, while others applaud the potential to use technology to create a safer society. Even the most privacy-conscious individuals might be tempted by the promise of reducing violence and creating a safer environment. So, should we cheer for initiatives like this, or fear them?
This article will delve into whether AI can—and should—be used to predict crimes and, as a thought experiment, will explore how the European Union’s AI Act would address a body like the UIAAS.
Can AI predict crimes?
Artificial intelligence refers to systems capable of learning from data, reasoning, and making autonomous decisions. These systems can generate predictions or recommendations, adapt over time, and operate independently of direct human control. For instance, music streaming services use AI to learn your music preferences and offer personalized recommendations based on past behavior. Similarly, AI can be trained on millions of medical images to identify signs of cancer, potentially even before doctors can.
Following this logic, AI could analyze millions of crime records to identify patterns that might help predict where crimes could happen or who is likely to commit them, allowing authorities to allocate policing resources more effectively.
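To make this more concrete, here is a minimal sketch of what the core of such a system might look like: a classifier trained on historical records that scores each area by the likelihood of a future incident. Everything in it is hypothetical; the features and data are synthetic and chosen purely for illustration, not taken from any real deployment.

```python
# Minimal sketch of the idea behind predictive policing: a classifier trained on
# historical records to estimate where incidents are likely to occur.
# All data is synthetic and the features are hypothetical, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per (area, week): past incident count, average hour of
# reported activity, and number of calls for service.
n = 5_000
X = np.column_stack([
    rng.poisson(3, n),        # incidents recorded in the previous month
    rng.uniform(0, 24, n),    # average hour of reported activity
    rng.poisson(10, n),       # calls for service
])
# Synthetic label: whether at least one incident was recorded the following week.
y = (X[:, 0] + 0.2 * X[:, 2] + rng.normal(0, 2, n) > 6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The output is a probability per area, which an agency could rank to decide
# where to send patrols.
print("held-out accuracy:", model.score(X_test, y_test))
```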
But does this actually work in practice?
Even before the current AI boom, authorities worldwide were experimenting with big data and machine learning (the backbones of AI) to support policing efforts. However, real-world deployments of crime-predicting AI systems have revealed several limitations:
- Bias in Data: Crime prediction models rely heavily on historical crime data, which may overrepresent minor offenses like vagrancy or loitering. This leads to a feedback loop (illustrated in the sketch after this list) where increased policing in impoverished areas results in more arrests, further inflating crime statistics in disadvantaged communities. In this way, the models risk criminalizing poverty rather than addressing serious crimes.
- Limited Scope: These models often overlook white-collar crimes due to insufficient training data. Wealthy individuals committing financial fraud or corruption often go under-policed, leaving systemic issues unaddressed while AI models focus on street-level offenses.
- Racial and Economic Disparities: In many countries, geographic location correlates strongly with race or ethnicity (that is, members of the same racial or ethnic group tend to be overrepresented in certain neighborhoods). Even if AI is trained not to consider race directly to avoid discrimination, its reliance on geographic data may serve as a proxy for race, reinforcing racial inequalities in policing. This indirect bias further entrenches disparities in the criminal justice system.
- Flawed Objectives: Crime prediction models tend to be designed to maximize arrests, not to build trust or improve community relationships. This punitive approach undermines efforts to create more positive policing strategies that could strengthen ties between law enforcement and the communities they serve.
- Ineffectiveness of Existing Tools: One of the most widely used crime prediction software systems, PredPol, has been found to be surprisingly ineffective. A 2023 study revealed that PredPol’s predictions were accurate less than 1% of the time, casting serious doubt on the overall reliability of AI-driven crime forecasting.
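The feedback loop mentioned in the first bullet is easy to reproduce in a toy simulation. In the sketch below, two districts have exactly the same underlying incident rate, but one starts with slightly more recorded incidents; each day the available patrol is sent wherever the record looks worst, and only a patrolled district can add to that record. All numbers are invented and the model is deliberately simplistic.

```python
# Toy simulation of the feedback loop: two districts with the SAME true incident
# rate, but district B starts slightly over-represented in the records.
# Each day the patrol goes wherever recorded crime is highest, and only a
# patrolled district can add new records. All numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
true_rate = [0.3, 0.3]               # identical underlying incident probability
recorded = np.array([5, 8])          # district B begins with a few more records
patrol_days = np.zeros(2, dtype=int)

for day in range(1_000):
    target = int(np.argmax(recorded))     # "predict" using past records
    patrol_days[target] += 1
    if rng.random() < true_rate[target]:  # incidents are only recorded if observed
        recorded[target] += 1

print("recorded incidents:", recorded)    # district B dominates the statistics
print("patrol days:       ", patrol_days) # ...and receives virtually all patrols
```

Despite identical behavior, the district that started with more records ends up with nearly all the patrols and a far larger recorded crime count: the statistics confirm the prediction only because the prediction shaped where the statistics were collected.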
Moreover, for an AI model to be truly effective, it must learn from its mistakes. Ideally, if an individual is predicted to commit a crime but never does, the system would adjust its reasoning to avoid similar false positives in the future. However, law enforcement agencies typically do not have data on individuals who do not commit crimes. And even if such data were available, maintaining a dynamic, self-correcting AI model would be prohibitively complex and expensive.
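Some back-of-the-envelope arithmetic shows how unforgiving this becomes when the predicted event is rare. The figures below are assumptions chosen only for illustration, not measurements from any real system.

```python
# Why a low base rate makes "flagged" predictions unreliable, and why false
# positives are hard to correct without data on people who never offend.
# The three figures below are illustrative assumptions, not real measurements.
base_rate = 0.001            # 0.1% of monitored individuals will actually offend
sensitivity = 0.90           # the model flags 90% of true future offenders...
false_positive_rate = 0.05   # ...and 5% of everyone else

flagged_true = base_rate * sensitivity
flagged_false = (1 - base_rate) * false_positive_rate
precision = flagged_true / (flagged_true + flagged_false)
print(f"share of flagged people who would actually offend: {precision:.1%}")  # ~1.8%
```

Under these assumptions, fewer than two in a hundred flagged individuals would ever offend, and because the people who never offend generate no follow-up data, the system receives no signal telling it that the vast majority of its flags were wrong.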
For those interested in a deeper exploration of the societal risks posed by crime-predicting algorithms, Cathy O’Neil’s book Weapons of Math Destruction offers valuable insights into the unintended consequences of these technologies.
Thought experiment: Applying the EU AI Act
Although Argentina is not subject to the European Union’s regulations, let’s consider how the UIAAS might fare under the EU’s AI Act, a regulation designed to govern the ethical and safe deployment of AI systems.
According to Argentina’s Resolución 710/2024, the UIAAS will perform several key functions, including:
- Using facial recognition to identify wanted individuals in real-time
- Analyzing security camera footage to detect suspicious activities
- Using machine learning to predict future crimes based on historical data
- Processing large datasets to create suspect profiles and establish links between cases
- Deploying drones for aerial surveillance and emergency response
Under the EU AI Act, many of these practices could fall under the category of prohibited AI uses. Article 5 of the Act specifically prohibits the use of real-time biometric identification, such as facial recognition, in public spaces (with very limited exceptions). Predictive policing based on profiling is also tightly restricted due to concerns about privacy, discrimination, and civil liberties.
Furthermore, even the parts of the system that are not outright prohibited would face hurdles, as they would be classified as high-risk AI systems under Annex III of the AI Act. For example, using AI for law enforcement purposes or for remote biometric identification would be subject to strict rules requiring comprehensive risk management, meaningful human oversight, and transparency. Additionally, the system would need to be registered in the EU’s high-risk AI database.
In addition, depending on the specific use of the tool, other concerns could arise. For example, the UIAAS will be responsible for analyzing social media activity to identify potential threats. In a worst-case scenario, this could quickly evolve into the persecution of political opposition and violate democratic values.
In short, many of the UIAAS’s core functions would be deemed illegal, and the rest would require very strict compliance efforts under the EU AI Act.
Conclusion
While AI holds great potential to transform many sectors, its application in law enforcement—an area with high stakes and ethical complexities—comes in several shades of gray. The use of AI for crime prediction can be undermined by biased data, limited scope, and the risk of reinforcing existing racial and economic inequalities. Moreover, the effectiveness of these tools is far from guaranteed.
From a regulatory perspective, the fact that such a system would face significant limitations under the EU AI Act highlights the role this legislation plays in safeguarding civil liberties and preventing misuse, especially when compared with other legal systems around the world.
Ultimately, using AI to predict crimes raises questions about the balance between public safety and civil rights. While the prospect of safer communities is tempting, it is important to weigh carefully the potential benefits against the considerable risks.