America Is Handing Its Justice System to AI, and It's Going Wrong: Three Cases That Prove It
From facial recognition arrests to predictive policing and AI-generated sentencing reports, American courts are trusting algorithms over people. Here is the specific failure pattern and three cases that define it.
- From facial recognition arrests to predictive policing and AI-generated sentencing reports, American courts are trusting algorithms over people.
- The NBC News report on a woman jailed by an AI facial recognition error is one manifestation of a broader pattern that civil rights organizations, legal scholars, and affected communities have been documenting for years.
- Three distinct categories of AI deployment in American criminal justice have produced documented serious errors.
The Algorithm in the Courtroom
The NBC News report on a woman jailed after an AI facial recognition error is one manifestation of a broader pattern that civil rights organizations, legal scholars, and affected communities have been documenting for years: the systematic integration of AI decision-support tools into American criminal justice at a pace that has consistently outrun the accountability frameworks needed to make those tools safe to use.
Three distinct categories of AI deployment in American criminal justice have produced documented, serious errors. Each involves a different technology, a different point in the justice system, and a different population of affected individuals, but they share a common thread: algorithmic outputs are given excessive weight relative to human judgment in contexts where the consequences of error are measured in lost freedom.
Facial Recognition: The Technology That Jailed the Wrong Woman

The technology that put an innocent woman in jail for months operates by comparing query images against databases of known individuals, producing match scores whose interpretation is left to the investigator or prosecutor using the system. The error pattern at issue, high false positive rates for dark-skinned women, has been documented in multiple studies since Joy Buolamwini and Timnit Gebru's groundbreaking 2018 Gender Shades research. Despite that documentation, more than 1,800 law enforcement agencies currently use facial recognition technology, most without governance policies around its evidentiary use.
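To make concrete what a one-to-many facial recognition search actually produces, here is a minimal sketch assuming a generic embedding-plus-cosine-similarity pipeline; the `search_gallery` helper and the 0.6 threshold are hypothetical illustrations, not any vendor's actual implementation.

```python
# Illustrative sketch only: a one-to-many face search reduces to a
# similarity ranking over enrolled faces. Names and numbers are hypothetical.
import numpy as np

def search_gallery(query_embedding, gallery_embeddings, gallery_ids, threshold=0.6):
    """Return candidate IDs whose cosine similarity to the query exceeds a threshold.

    The threshold is a policy choice: lowering it returns more candidates
    and raises the false positive rate, which is where human review matters.
    """
    q = query_embedding / np.linalg.norm(query_embedding)
    g = gallery_embeddings / np.linalg.norm(gallery_embeddings, axis=1, keepdims=True)
    scores = g @ q                      # cosine similarity against every enrolled face
    order = np.argsort(scores)[::-1]    # best-scoring candidates first
    return [(gallery_ids[i], float(scores[i])) for i in order if scores[i] >= threshold]

# A "match" returned here is only a ranked score, not an identification;
# how to interpret it is left entirely to the investigator.
```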
Predictive Policing: The Technology That Determines Where Police Go
Predictive policing systems, AI tools that analyze historical crime data to generate risk scores for geographic areas or individuals, have been deployed by police departments in Los Angeles, Chicago, New Orleans, and dozens of other American cities. The LAPD's ShotSpotter and PredPol deployments produced exactly the outcome critics anticipated: a "feedback loop" in which directing more police resources to particular areas produces more arrests in those areas, which adds more data about those areas to the crime database, which the algorithm interprets as evidence of higher crime, which directs still more police resources there, regardless of whether actual crime rates are higher than in areas that are comparatively under-policed.
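A toy simulation can show why this dynamic locks in. The sketch below assumes a deliberately simplified model, two areas with identical true crime rates and recorded incidents that scale with patrol presence; all numbers are made up for illustration and do not model any real deployment.

```python
# Toy feedback-loop simulation: two areas with identical true crime rates,
# but area 0 starts slightly over-policed. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_crime_rate = np.array([10.0, 10.0])   # identical underlying rates
patrols = np.array([0.6, 0.4])             # initial allocation is already skewed
recorded = np.zeros(2)

for year in range(10):
    # Recorded incidents depend on crime AND on how much police look for it.
    observed = rng.poisson(true_crime_rate * patrols)
    recorded += observed
    # The "predictive" step: next year's patrols follow the recorded history,
    # i.e. the historical data, not the true underlying rates.
    patrols = recorded / recorded.sum()

print(patrols)  # the initial disparity persists; nothing corrects it toward 50/50
```

The point of the sketch is that the algorithm never sees the true rates, only its own past outputs reflected back as data, so an initial disparity is self-reinforcing rather than self-correcting.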
New Orleans' PredPol deployment was terminated after an investigation found that the system was generating predictions from crime data that systematically over-counted communities of color, a product of historically heavier police presence rather than genuinely higher crime rates. The wrongful-arrest pattern that followed from algorithm-driven over-policing, itself a product of bias in the historical data, is the feedback loop the New Orleans case made concrete.
Sentencing Reports: The Black Box That Affects Prison Time
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) and similar tools are used in Wisconsin, Florida, Colorado, and other states to generate risk scores that judges receive before sentencing. A 2016 ProPublica investigation found that COMPAS assigned higher risk scores to Black defendants than to white defendants with similar criminal histories and offense profiles. In State v. Loomis, the Wisconsin Supreme Court permitted COMPAS use in sentencing guidance but held that it must not be the determinative factor, a limitation that is difficult to enforce once risk scores appear in pre-sentence reports.
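The kind of disparity check ProPublica performed can be sketched in a few lines. The column names and the tiny dataset below are invented for illustration; the real analysis used COMPAS records from Broward County, Florida.

```python
# Minimal sketch of a group-level disparity audit on risk scores.
# Data and column names are made up; this is not the ProPublica dataset.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "risk_score": [ 8,    3,    7,    2,    6,    1 ],   # decile-style score
    "reoffended": [ 0,    0,    1,    0,    0,    0 ],   # observed outcome
})

df["flagged_high_risk"] = df["risk_score"] >= 5

# False positive rate per group: share of people who did NOT reoffend
# but were still labeled high risk.
fpr = (
    df[df["reoffended"] == 0]
      .groupby("group")["flagged_high_risk"]
      .mean()
)
print(fpr)  # a large gap between groups is the disparity at issue
```

Running an audit like this requires access to the scores, the outcomes, and the demographic labels, which is exactly the access most deployments do not grant.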
For the Americans affected, the combination of these three tools, deployed across identification, policing allocation, and sentencing, layers new technology onto a criminal justice system already documented to produce racially disparate outcomes. Whether AI tools are amplifying existing disparities or creating new ones is an empirical question, and answering it requires the audit access and algorithmic transparency that most current deployments do not provide.