Trust in Machine Learning and Law Enforcement

November 11th, 2020

Machine Learning and Responsibility in Criminal Investigation

Project Duration: December 2019 – October 2020

The Challenge

Machine learning (ML), a form of artificial intelligence (AI), is increasingly used in criminal justice settings. ML may allow law enforcement agencies to work more efficiently and to make more effective use of large amounts of data in criminal investigations.

One question raised by using ML in criminal justice settings is how much trust investigators should place in the recommendations made by ML systems. Many methods have been developed to promote fairness, transparency and accountability in the predictions made by ML systems. However, a technical approach to these problems needs to be accompanied by a human-centred approach to user trust. To address social, ethical and practical issues, these systems need to present information in a way that allows the people who use them to make balanced decisions about whether to trust them. The role, responsibilities and accountability of criminal justice experts must also be examined to understand how these will be affected by the use of ML systems in criminal investigations.

Our Response

A research study in ML and trust was led by CSIRO’s Data61 Investigative Analytics program in collaboration with the Responsible Innovation FSP. Through a combination of user experience and social science approaches, the team examined concepts of ML, trust in automation, and criminal investigation to explore how the level of trust investigators place in the findings of ML systems might be calibrated to reflect the actual trustworthiness of those systems.

Researchers reviewed literature across the diverse topics of ML, criminal investigation, trust and responsibility to address the issues raised by using ML systems in the context of criminal justice and law enforcement. The research aimed to describe the responsibilities of criminal investigators and how these responsibilities may be affected by using ML. It also aimed to identify the factors that influence how much users trust automated systems, and how issues such as communicating uncertainty can affect the trustworthiness of ML systems.

Project Impacts

This project seeks to improve our understanding of how an ML system may be designed to help expert users in criminal justice settings confidently interpret different types of data and determine how trustworthy its outputs are. By examining the social identities and dynamics of users, such as investigators and intelligence analysts, we can better understand some of the factors that affect the trust these specific types of users place in ML systems. This level of trust can then be calibrated against the systems' actual capabilities. The findings will provide a foundation for examining potential practices that may mitigate some of the risks of deploying ML in a range of high-risk, high-consequence decision-making environments.

Team

CSIRO: Georgina Ibara (Project Leader), David Douglas; independent specialist: Meena Tharmarajah.

Additional information about the project is available:

CSIRO Data61 news article, Designing trustworthy machine learning systems: https://algorithm.data61.csiro.au/designing-trustworthy-machine-learning-systems/

CSIRO report, Machine Learning and Responsibility in Criminal Investigation: https://publications.csiro.au/publications/#publication/PIcsiro:EP205485