
An AI Taxonomy for Criminal Justice

Principled Use of AI in the Criminal Justice System

May 2026

Executive Summary

Artificial intelligence (AI) tools are supporting a growing range of activities in policing, courts, corrections, and community supervision, from facial recognition and automated police report writing to case scheduling, classification, and violence prediction. This report, produced by RAND, finds that while these applications differ widely in their technical sophistication, intended use, and influence on human judgment, they are often discussed and governed as if they were a single category of technology.

As a result, there is limited shared understanding of how different AI tools function, where they are deployed, and how their risks and benefits vary across the criminal justice system. Closing this gap in understanding is essential to help system leaders and policymakers evaluate whether specific AI tools are appropriate, effective, transparent, and equitable, particularly in decisions that affect public safety and individual rights.

This report examines the scope and character of AI use in the criminal justice system by presenting a taxonomy of current and emerging AI applications and offering findings and recommendations for navigating risks, opportunities, and governance gaps. The taxonomy groups applications by the functions and decision points they support, illuminating uses across law enforcement, courts, corrections, and community supervision and reflecting how similar technologies may serve different purposes across settings. It is designed to support stakeholders responsible for evaluating, procuring, and implementing AI technologies, as well as state policy officials making technology decisions for their justice systems. 

This report supports the work of the Council on Criminal Justice Task Force on Artificial Intelligence, which was launched in June 2025 to develop principles, standards, and research guidance for AI use in the criminal justice system.

Key Findings

  • Risks for complex AI systems and algorithms are concentrated in high-stakes criminal justice functions. These include pretrial, sentencing, and enforcement decisions, where human judgment traditionally bears the greatest consequence. 
  • Risk of negative equity outcomes varies by data provenance and criminal justice function. AI applications relying on past criminal justice data tend to systematically reproduce racial and socioeconomic disparities.  
  • Use of AI in judicial and sentencing decision-making processes appears to be limited to date. Available research indicates that, to protect due process, judges treat algorithmic recommendations as advisory rather than binding.  
  • AI adoption in administrative and training functions appears to be slower despite relatively low equity risk. AI tools for scheduling, record keeping, and other routine court and supervision activities operate in settings where the structural equity risk is low, yet adoption of these applications appears to be slow. 
  • Oversight and transparency gaps are empirically documented across agencies. Bias and a lack of transparency are widely recognized as among the biggest barriers to safe and equitable AI use in the criminal justice system.

Recommendations

  • Develop AI-related rules specific to the criminal justice system. Agencies should set clear standards that are accompanied by independent expert testing and validation, documented explainability of how systems operate, and public disclosure of basic information about tools that are being considered, piloted, or deployed. 
  • Prioritize safeguards over expansion in high-risk areas. Agencies should establish strong auditing and accountability systems to ensure proper human oversight. Past failures in these areas show that use should not grow until oversight is reliable and functional. 
  • Use AI for low-risk administrative functions to improve efficiency, such as scheduling and document processing, when there is human oversight and established governance. 
  • Repurpose AI to improve transparency and fairness. AI tools can potentially be used to check for bias, verify the accuracy of evidence, and enhance accountability.  
  • Make AI literacy part of professional training, giving criminal justice professionals the opportunity to learn how AI works, the limits of AI technologies, and how to contest or verify AI results before relying on them in their everyday duties. 
  • Keep humans in charge of decisions about liberty. Governance frameworks should establish clear boundaries for algorithmic influence in domains that can have grave consequences for individual liberty. 

Taxonomy

KEY

  • Sector: P = Policing | Ct = Courts | Cr = Corrections | CS = Community Supervision 
  • Automation (Auto): FA = Fully Automated | HR = Human Review Required | DS = Decision Support Only  
  • Data Type (Data): Bio = Biometric | Beh = Behavioral | Geo = Geospatial | Com = Communications | Vid = Video | Str = Structured | Edu = Educational
  • Structural Equity Risk (Equity): H = High | M = Medium | L = Low  
  • Transparency Level (Transp): E = Explainable | P = Partially Explainable | B = Black Box 
  • AI Capability (AI Cap): NLP = Natural Language Processing | P/C = Prediction/Classification | Plan = Planning | CV = Computer Vision | GenAI = Generative AI | ES = Expert Systems
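For readers who want to work with the taxonomy programmatically, the key above amounts to a small data schema. The sketch below models one taxonomy entry in Python; the class and field names are hypothetical (not from the report), and the example entry's ratings are illustrative assumptions rather than the report's actual classifications.

```python
from dataclasses import dataclass
from enum import Enum

# Enumerations mirror the codes defined in the KEY above.
class Sector(Enum):
    POLICING = "P"
    COURTS = "Ct"
    CORRECTIONS = "Cr"
    COMMUNITY_SUPERVISION = "CS"

class Automation(Enum):
    FULLY_AUTOMATED = "FA"
    HUMAN_REVIEW_REQUIRED = "HR"
    DECISION_SUPPORT_ONLY = "DS"

class EquityRisk(Enum):
    HIGH = "H"
    MEDIUM = "M"
    LOW = "L"

class Transparency(Enum):
    EXPLAINABLE = "E"
    PARTIALLY_EXPLAINABLE = "P"
    BLACK_BOX = "B"

@dataclass
class TaxonomyEntry:
    name: str
    sectors: list[Sector]        # an application may span multiple sectors
    automation: Automation
    data_types: list[str]        # key codes, e.g. "Bio", "Vid", "Geo"
    equity_risk: EquityRisk
    transparency: Transparency
    ai_capabilities: list[str]   # key codes, e.g. "CV", "P/C", "GenAI"

# Illustrative entry only; the ratings here are assumptions, not the
# report's classification of facial recognition.
facial_recognition = TaxonomyEntry(
    name="Facial recognition",
    sectors=[Sector.POLICING],
    automation=Automation.HUMAN_REVIEW_REQUIRED,
    data_types=["Bio", "Vid"],
    equity_risk=EquityRisk.HIGH,
    transparency=Transparency.BLACK_BOX,
    ai_capabilities=["CV"],
)
print(facial_recognition.equity_risk.value)  # prints "H"
```

A structure like this makes it straightforward to filter entries, for example, to list all high-equity-risk applications in a given sector.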
