Event Recording: AI in Criminal Justice – Navigating Opportunities and Risks

On December 4, 2024, the Council on Criminal Justice hosted a live webinar on the risks and opportunities related to using artificial intelligence in the criminal justice system. Facilitated by CCJ Senior Fellow Jesse Rothman, the panel included:

  • Brandon del Pozo, assistant professor of medicine and public health at Brown University
  • Yasser Ibrahim, senior vice president of research and development at Axon
  • Nancy La Vigne, director of the National Institute of Justice, executive director of CCJ’s Task Force on Policing, and member of CCJ’s Task Force on Federal Priorities
  • Rebecca Wexler, faculty co-director of the Berkeley Center for Law and Technology and assistant professor of law at UC Berkeley School of Law

Highlights from the Conversation

On Risks of Bias

  • Nancy La Vigne: “One of the biggest risks I see is with the data being fed into the algorithm — the quality of that data and the degree to which the data carries biases in and of itself that have nothing to do with the algorithm. I feel like [the debate] tends to all focus on AI, rather than looking backstream. Even with AI, humans will always need to maintain their mental capacity and interrogate and scrutinize the inputs in ways that may or may not be happening right now.”
  • Yasser Ibrahim: “It’s not that the model is biased. We are the human beings; we are biased, and we leave a very long trail of our biases and bigotry and racism. We leave that on the internet. So when you train a model on the internet and you try to make it representative by seeing all of the views, well, a lot of our views are biased. So now you get a model, again, that is kind of reflective of that bias.”
  • Brandon del Pozo: “One of the risks of AI is that AI is not introspective. It doesn’t take a step back and ask, ‘Well, why am I being asked to look into this?’ It will look into whatever it is instructed to look into… One of the biggest risks of AI lies with us. It is the normative questions about what we are going to use this powerful tool for.”

On Transparency and Liability

  • Ibrahim: “There is an existing uncertainty in the industry where we don’t know where to go to ask what is right or wrong about AI in the U.S.”
  • del Pozo: “I think we also need to figure out how to assign and mediate blame for mistakes made by AI. When a person makes a mistake, we have pretty good intuitions about how to assign blame and what the procedure should be. When we offload decisions to AI, and AI makes a mistake, is it just a matter of strict liability, is it some type of insurance payout, is somebody to blame for writing the faulty algorithm?”
  • Rebecca Wexler: “One uncertainty I have, that maybe is also a policy that could be durable, is how are we going to get peer review of AI technologies used in the legal system? And I’m concerned about this especially because there have been incidences where vendors of forensic technologies have used contract law to prevent researchers from accessing executable copies of their software.”
  • La Vigne: “Who decides when the risk is minimal enough for any kind of AI application? One [issue] that has been around for a long time and remains highly controversial is around the use of risk assessment tools, especially when they are [being used to make] … decisions about whether someone should be detained pretrial or released on their own recognizance and for other kinds of criminal justice decision-making that is affecting individual lives… I think for each AI application we have to think differently about the balance of risk and benefit.”

On Disproportionate Benefits

  • Wexler: “I think there is a risk that new AI tools will be developed to aid some sections in the criminal legal system and not others… There is a business model and you are building tools for folks who will buy them and those who have resources. I worry specifically about how law enforcement’s resources and prosecutors’ resources may be greater, and they may have greater purchasing power than criminal defense purchasing power. That might distort the system if tools are developed for one side and not the other.”

Opportunities for Policy and Practice

  • del Pozo: “The opportunities are to offload a lot of what would otherwise be subjective and really overwhelming decision making to better use resources … This ability to take scarce resources and use them in high-payoff ways is a tremendous potential for AI… AI allows the machine to pick up salient facts, salient data, make salient judgements that really result in higher quality investigations, higher quality prosecutions, and higher quality defenses as well.”
  • Wexler: “I think there is an opportunity from a policy perspective – because I’m a law professor – that all this attention on AI might actually give us an opening to reform some broader problems that occur maybe with AI but also more generally in the criminal justice system.”

    “And so here’s an example: I was speaking this spring to the Federal Advisory Committee on the Federal Rules of Evidence about proposals to alter the rules of evidence specifically to accommodate risks of deepfakes being introduced. So this was an AI-specific proposal. And I actually opposed it; I don’t think we need a specific proposal there, but the concerns around deepfakes are an opportunity to reevaluate our authentication rules more generally. Maybe there really is a risk that a lot of counterfeit evidence is coming into the courts and that our rules of evidence aren’t up to the task. So, AI in this strange way maybe gives us an opportunity to reevaluate more generally what’s working and what’s not.”

About CCJ's Initiative on Artificial Intelligence

The convergence of artificial intelligence (AI) and criminal justice presents significant challenges and opportunities. AI has the potential to enhance efficiency, fairness, and effectiveness across the criminal justice system. However, these advancements also raise significant, understudied, and undertheorized concerns related to privacy, bias, accountability, system capacity, and public oversight. By grounding the conversation around AI and criminal justice in evidence, data, collaboration, and a commitment to public welfare, the Council on Criminal Justice is helping establish a foundation for ongoing engagement and policy development that aims to ensure AI is used responsibly and effectively to advance justice, equity, liberty, and safety.
