Addressing limitations of AI systems in criminal justice

Princeton's Center for Information Technology Policy and DeepMind's Ethics and Society team organized a workshop to explore the inherent limitations of Artificial Intelligence (AI) tools in the US criminal justice system. Our aim was to create new and useful resources for agencies that procure predictive tools and automated decision-support systems in the criminal justice system. The workshop responds to the ever-growing catalogue of examples where AI-enabled tools and decision-support systems in courts and police forces have led to unintended and adverse effects. To this end, we invited 75 representatives from civil society advocacy organizations, companies building these systems, government, and academia. This blog post is based on the report-back presentation I gave at the Partnership on AI's meeting in San Francisco in November 2018, where I shared three take-aways.

AI take-aways

First, AI systems in public service and criminal justice are ill-defined, mystified, and therefore misunderstood. These systems are sold to policymakers as solutions to their problems, without discussion of their inherent statistical and computational limitations, or an acknowledgment that they do not, in fact, replace human intelligence. Criminal justice reform groups typically do not have the technical capacity or resources to scrutinize newly proposed AI systems, and thus see them as the next efficiency-enhancing tool of a fundamentally oppressive system. Finally, developers of these systems do not engage with the systems' effects on individuals, institutions, and wider society. We aim to develop resources to support all three groups.

Second, the importance of this work was highlighted by a comment from a participant who had spent several years in prison and is now campaigning for criminal justice reform. They noted that policymakers and judges do not listen to former prisoners when they criticize the use of AI systems, but do listen to academics, vendor companies, and consultants. They therefore pleaded with the workshop participants to also speak on behalf of the millions of defendants and prisoners who are subjected to these systems. Workshop participants recognized a real duty of care toward vulnerable data subjects of AI systems and asked for guidance on taking this role seriously.

Finally, a group of reform activists discussed real, actionable, and measurable uses of AI systems for positive social impact. The group came up with examples that would flip the capacity of AI systems on its head, informing social interventions rather than penal efficiency. For example, the group sketched a system that would learn from the way people at risk of offending, or former prisoners, engage with social services. This would allow government agencies to optimize their resources and identify the right moments, and the right people, for outreach.

We will be publishing an outcome document soon. It will focus on informing government procurement processes, developing specific guidelines, creating targeted educational initiatives, identifying emerging research questions, and several related issues. If you're interested in engaging in this process, or in learning from the US discussion for your purposes in Europe, please let me know!

Bendert Zevenbergen, Academic Liaison at Princeton University

Contact me