Trust in CINTEL

Understanding the contextual factors that build, sustain, or erode trust in CINTEL, and how that trust is established and maintained.
The challenge
Collaborative intelligence requires humans to trust the technology with which they are collaborating. If people do not trust a technology, they may avoid using it, or use it in ways that limit its overall effectiveness, for example by withholding relevant information. Too much trust can also be a problem: humans may not be sufficiently watchful for mistakes made by an artificially intelligent technology. Designing CINTEL systems and workflows in ways that promote an appropriate level of trust between human and machine collaborators is a key part of the CINTEL challenge.
Our response
We have developed a Framework for trust in collaborative intelligence systems (CHAI-T) that builds on research on trust in traditional artificial intelligence and incorporates relevant aspects of research on trust in human teams. This Framework defines:
“Inputs” (characteristics of the user, technology, and context that influence trust)
“Outputs” (goals that trust enables)
“Processes” (how we move from inputs, through mediators such as trust, to outputs)
You can read more about the framework in our paper: http://arxiv.org/abs/2404.01615
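The framework's three components can be sketched as a simple data model. This is an illustrative sketch only: the class and field names, and the example factors and goals, are assumptions for demonstration, not definitions from the published paper.

```python
from dataclasses import dataclass, field

@dataclass
class TrustFramework:
    """Hypothetical representation of the framework's three components."""
    # Inputs: characteristics of the user, technology, and context that influence trust
    inputs: dict = field(default_factory=dict)
    # Processes: mediators that sit between inputs and outputs, e.g. trust
    mediators: list = field(default_factory=list)
    # Outputs: goals that trust enables
    outputs: list = field(default_factory=list)

# Example instance with illustrative entries
framework = TrustFramework(
    inputs={
        "user": ["expertise"],
        "technology": ["transparency"],
        "context": ["risk"],
    },
    mediators=["trust"],
    outputs=["appropriate reliance", "team performance"],
)
```

Grouping the inputs by user, technology, and context mirrors the three-way classification the framework uses for trust factors.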

Using this Framework, we can develop context-specific models, or “recipes”, of trust for specific CINTEL applications. To inform the “ingredients” for such a recipe, we have developed the “Trust Pantry”: an interactive, searchable database of empirically verified ingredients for trust drawn from the scientific literature. For each study it records the type of AI application (e.g. autonomous vehicle, decision-aid), the ingredient or trust factor tested (e.g. transparency, human expertise, risk), whether that factor relates to the human, the technology, or the task/environment, and the nature of the relationship between that factor and trust (e.g. positive, negative). Researchers and developers can use the Trust Pantry to:
– Explore the factors that are relevant to trust in a particular AI application (e.g. autonomous vehicle)
– Find out what is known about the relationship between trust and a particular factor (e.g. transparency)
– Investigate all the factors shown to increase or decrease trust
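The three use cases above can be sketched as queries over records with the fields the Trust Pantry description lists. The records and field names below are illustrative assumptions; the actual database schema and contents may differ.

```python
# Illustrative Trust Pantry records; fields follow the description in the text,
# but the schema and data here are assumptions, not the real database.
pantry = [
    {"application": "autonomous vehicle", "factor": "transparency",
     "factor_class": "technology", "relationship": "positive"},
    {"application": "decision-aid", "factor": "human expertise",
     "factor_class": "human", "relationship": "positive"},
    {"application": "autonomous vehicle", "factor": "risk",
     "factor_class": "task/environment", "relationship": "negative"},
]

def factors_for(application):
    """Factors relevant to trust in a particular AI application."""
    return [r["factor"] for r in pantry if r["application"] == application]

def evidence_for(factor):
    """What is known about the relationship between trust and a factor."""
    return [r for r in pantry if r["factor"] == factor]

def factors_by_relationship(direction):
    """All factors shown to increase ('positive') or decrease ('negative') trust."""
    return [r["factor"] for r in pantry if r["relationship"] == direction]
```

For example, `factors_for("autonomous vehicle")` returns the factors studied for that application, and `factors_by_relationship("negative")` lists factors shown to decrease trust.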

Finally, to improve measurement of trust in AI and CINTEL, we have conducted the first comprehensive psychometric validation of the commonly used Trust in Automation Scale (TIAS), and across two studies tested the scale's capacity to measure trust effectively across a range of AI applications. We have also developed and validated a 3-item version of the scale, the S-TIAS. A publication is under review.
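A short scale like the 3-item S-TIAS might be scored by averaging item responses, as sketched below. This is illustrative only: the actual S-TIAS items, response format, and scoring procedure are described in the publication under review and may differ.

```python
def score_short_scale(responses):
    """Average three Likert responses into a single trust score.

    Assumes (hypothetically) three items rated on a 1-7 scale;
    the real S-TIAS scoring may differ.
    """
    if len(responses) != 3:
        raise ValueError("expected exactly three item responses")
    if not all(1 <= r <= 7 for r in responses):
        raise ValueError("responses must be on a 1-7 scale")
    return sum(responses) / 3

score = score_short_scale([5, 6, 4])  # → 5.0
```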
Impact
Our CHAI-T framework and Trust Pantry can inform the design and development of trustworthy CINTEL systems and enable active management of trust in those systems. By understanding the levers that increase or reduce trust in these technologies, we gain the tools to achieve and maintain the level of trust needed to maximise the performance and safety of a collaborative human-AI team.
External Collaborators
IRL CROSSING
- French Australian Laboratory for Humans-Autonomous Teaming
Internal collaborators
Responsible Innovation Future Science Platform: the project is co-funded by the RI FSP