Understanding trust in collaborative human-AI teams: its importance, formation, and evolution

September 26th, 2022

Project duration: February 2022 – January 2025


The Challenge

There is growing interest in how humans and artificial intelligence (AI) can work together in teams that maximise the strengths of each. The new field of collaborative intelligence investigates ways to combine the powerful processing ability of AI with the adaptability, creativity and values of humans to develop novel and collaborative human-technology systems.

Trust is critical to establishing collaborative relationships, whether between humans or between humans and machines. Researchers have already identified that reliability and competence are necessary for humans to trust a system enough to use it. In this project we ask: what additional factors are needed for humans to trust a system enough to collaborate with it? Specifically, what factors contribute to the formation, maintenance, and calibration of human-machine trust?

With this understanding in place, researchers believe that human-AI teams can work together successfully to tackle challenges, even under ambiguous circumstances. The end result? Social, environmental, and economic benefits that can’t be achieved by humans or machines working alone.

Our Response

CSIRO’s Collaborative Intelligence and Responsible Innovation Future Science Platforms (FSPs) are working together to develop a framework of trust in collaborative intelligence systems. The framework will identify and map the factors that are critical to the development, maintenance, and calibration of human trust in these systems. The aim is to identify and test which factors of trust are important, at what stage, and in what contexts.

The project builds on what is already known about trust within human teams and the trust that humans place in automation and AI. For example, we know that communication style and frequency are important to trust in human teams. We also know that trust in AI is difficult to rebuild once it is lost (for example, if a system fails early on).

Quantifying the enablers of and barriers to trust will allow our researchers to formulate a new theoretical model of trust in collaborative intelligence systems. The proposed model will then be empirically tested across a range of diverse use cases, using qualitative and quantitative methods drawn from social psychology and human factors research.

Impact

This research will ensure that human expectations regarding the trustworthiness of collaborative human-AI systems inform the design and development of these systems from the outset. In this way our work also aims to support the adoption and effective use of collaborative intelligence approaches, and to enhance the productivity and performance of the teams that deploy them.

A secondary goal of this research is to embed an understanding of human attitudes and behaviour in the development of new technological capabilities. Doing so will strengthen CSIRO’s multidisciplinary capability in this area by bringing together experts in the social and computer sciences with domain experts in the areas where collaborative intelligence will be deployed.

Team

Melanie McGrath, Andreas Duenser

Find out more:

CSIRO: Collaborative Intelligence Future Science Platform

References

The Conversation: What’s the secret to making sure AI doesn’t steal your job? Work with it, not against it