
National Task Force Releases New Framework to Help Criminal Justice Agencies Assess AI Tools

The framework outlines key questions to help law enforcement, courts, corrections, and other stakeholders effectively and responsibly evaluate and implement AI

EMBARGOED FOR RELEASE
March 31, 2026
Contact: Brian Edsall
bedsall@counciloncj.org
845-521-9810

WASHINGTON — As criminal justice agencies and practitioners face urgent questions about how to effectively and safely adopt artificial intelligence (AI) tools, the Council on Criminal Justice (CCJ) Task Force on Artificial Intelligence today released Assessing AI for Criminal Justice: A User Decision Framework, a decision guide that provides a detailed and structured pathway to help stakeholders evaluate, implement, and oversee AI in the criminal justice system.

Law enforcement, courts, and corrections agencies are already deploying AI applications, ranging from facial recognition and automated police report writing tools to case scheduling, classification, and violence prediction. When used carefully, these technologies can increase efficiencies and enhance safety and justice, the Task Force said. But it cautioned that without clear guardrails and guidance, AI systems can also amplify biases, threaten due process, and erode democratic accountability. 

This framework addresses those challenges by providing specific, detailed steps agencies should take as they decide whether and how to implement AI tools. 

“Thousands of criminal justice agencies are looking at how AI can improve their performance, but most are flying blind. There’s no objective guidance about how to maximize the potential benefits and minimize the possible harms,” said Task Force Chair Nathan Hecht, former chief justice of the Texas Supreme Court. “That’s what we’ve created here. Every police chief and sheriff, every corrections director, and every court administrator should apply this framework so they are asking the right questions and figuring out how to adopt AI tools in the most effective and responsible ways.” 

A Step-by-Step Process to Evaluate AI Tools 

At the core of the framework is a structured classification process designed to help stakeholders make thoughtful decisions about AI tools before and after they are adopted. The process begins by assessing whether an agency has enough staff capacity and technical expertise to deploy AI tools. Technologies are then evaluated and categorized on two dimensions: their level of risk to due process, civil rights and liberties, and other concerns, and their potential to enhance efficiency, crime control and prevention efforts, and other benefits.

These classifications determine how agencies should proceed. Lower-risk tools may move forward with an agency’s standard deployment process; higher-risk systems require more careful implementation and specific, enhanced safeguards. In some cases, agencies may need to perform further evaluation before proceeding—or avoid the system entirely. 

The report includes 10 detailed, practical tools and guidance for implementation. The resources include readiness assessments, protocols for identifying prohibited systems, evaluations of system complexity, and sector-specific guidance. The tools also offer fillable templates and checklists for procurement and classification, as well as step-by-step support for implementation planning and ongoing monitoring.

“Choosing AI tools is only part of the equation. How they are used is what determines whether they succeed,” said Task Force Director Jesse Rothman. “This framework gives criminal justice stakeholders both the blueprint and instruction manual to deploy AI in ways that enhance safety while safeguarding constitutional rights and due process.” 

The Task Force stressed that while the framework is detailed enough to serve as an action plan, it is not meant to be rigid. Agencies and organizations will need to adapt its recommendations to their own unique needs, capacities, and circumstances.  

Key Questions for the Future of AI Governance 

“This framework provides a structured pathway for responsible AI adoption in criminal justice, but frameworks alone are not sufficient to guarantee good outcomes,” the Task Force said. “Important questions remain about what infrastructure is needed to make the full recommendations embedded in this framework accessible and considerate of the ways that AI uses may evolve.” 

To that end, the report identifies key areas to consider in future work, including the role of federal guidance and support; who should provide technical assistance; how courts, legislatures, and civil society can strengthen oversight and accountability; and how to prepare for more advanced AI capabilities. 

In addition, the framework is designed primarily for evaluating purpose-built AI acquired through formal procurement, such as a police department’s report writing software. But the Task Force noted that criminal justice staff are increasingly using general-purpose AI tools, such as AI chatbots and agents, in their day-to-day work for tasks like research or document drafting, which presents distinct governance challenges. The Task Force recommends that agencies develop a specific policy governing staff use of general-purpose AI tools for case-related work. 

Later in 2026, the Task Force will present a series of use-case studies that demonstrate the framework in action across different AI applications and agency contexts. These case studies will serve as implementation playbooks that agencies and communities can use to see how the framework may apply to specific types of AI tools.

About the CCJ Task Force on Artificial Intelligence 

Launched in June 2025, the CCJ Task Force on Artificial Intelligence is a national, nonpartisan initiative to develop standards and evidence-based recommendations to guide the effective and ethical use of AI in the criminal justice system. The Task Force released a set of guiding principles in October 2025.  

The Task Force’s 15 members represent AI technology developers and researchers, police executives and other criminal justice practitioners, civil rights advocates, community leaders, and formerly incarcerated people. Its work is supported by CCJ staff and researchers at RAND, a leading research organization with extensive expertise in criminal justice and emerging technologies.

Support for the Task Force comes from the Heising-Simons Foundation, The Just Trust, Microsoft, Southern Company Foundation, and The Tow Foundation, as well as the John D. and Catherine T. MacArthur Foundation and other CCJ general operating contributors. 

About the Council on Criminal Justice 

The Council on Criminal Justice is a nonpartisan think tank and invitational membership organization that advances understanding of the criminal justice policy choices facing the nation and builds consensus for solutions that enhance safety and justice for all. 
