Trust in CINTEL
Understanding which contextual factors may contribute to, influence or erode how trust in CINTEL is established and maintained.
The challenge
Collaborative intelligence requires humans to trust the technology with which they are collaborating. If people do not trust a technology, they may avoid using it or use it in ways that limit its overall effectiveness, for example by withholding relevant information. Too much trust can also be a problem, as humans may not be sufficiently watchful for mistakes made by an artificially intelligent technology. Designing CINTEL systems and workflows in ways that promote an appropriate level of trust between human and machine collaborators is therefore a key part of the CINTEL challenge.
Our response
This project is developing a framework for trust in collaborative intelligence that builds on research on trust in traditional artificial intelligence by incorporating relevant aspects of research on trust in human teams. We begin by investigating how elements of CINTEL systems, such as interdependence and complementarity, may differ from traditional AI systems, and what implications this has for the development, maintenance and erosion of human trust. We use this understanding to develop and test a framework that identifies the user, technology and contextual factors that contribute to the development of trust in a given CINTEL system. The framework also addresses how the actual processes of collaboration between human and AI may interact with these factors to influence the formation and maintenance of trust.
Impact
An empirically validated framework can inform the design and development of trustworthy CINTEL systems and enable active management of trust in these systems. By understanding the levers that increase or reduce trust in these technologies, we will have the tools to achieve and maintain the level of trust necessary to maximise the performance and safety of a collaborative human-AI team.
External Collaborators
IRL CROSSING
- French Australian Laboratory for Humans-Autonomous Teaming
Other Initiatives
Responsible Innovation Future Science Platform: the project is co-funded by the RI FSP, and Melanie McGrath is also working on a second RI FSP project led by Sarah Bentley, “Mapping the social dynamics of generative AI adoption and use”. This project is expected to inform our knowledge of how human socio-demographic characteristics relate to trust in generative AI.
Science Digital/Sigma8: The trust project team are contributing to a survey of generative AI use within CSIRO, along with members of other CINTEL Foundation Science projects. This research is being conducted in partnership with Science Digital and IMT.