Mitigating ethical risks in the development of artificial intelligence tools for intensive care settings

December 20th, 2023

AI-powered tools hold great potential, but clinicians must be able to trust the explanations that AI gives for its results.
Monitoring a patient’s heart in an intensive care unit.

The Challenge

There is enormous potential to integrate artificial intelligence (AI) into healthcare systems. One area where AI holds promise is in enhancing clinical decision support tools. However, a lack of trust among clinicians hinders the uptake of AI-powered tools.

Explainable AI methods that rationalise the decisions of clinical decision support tools are currently under development. However, recent studies have reported disparities between the explanations produced by state-of-the-art explainable AI methods.
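
To illustrate the kind of disparity these studies describe, the following is a minimal sketch, assuming a synthetic dataset and an off-the-shelf classifier as stand-ins for a real ICU prediction model, and assuming the shap and lime packages are installed. It compares feature attributions from two widely used post-hoc explainers (SHAP’s KernelExplainer and LIME) for the same record; the data, features, and model are hypothetical and are not drawn from this project.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for an EMR-derived ICU dataset (hypothetical features).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
patient = X[0]  # one "patient" record to be explained

# SHAP attributions for the positive-class probability on that record.
shap_explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], X[:100])
shap_attr = shap_explainer.shap_values(patient, nsamples=200)

# LIME attributions for the same record and the same model.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="classification")
lime_exp = lime_explainer.explain_instance(
    patient, model.predict_proba, num_features=len(feature_names)
)
lime_attr = np.zeros(len(feature_names))
for idx, weight in lime_exp.as_map()[1]:
    lime_attr[idx] = weight

# Low rank agreement between the two attributions is one concrete form of the
# disparity a clinician may face: the same prediction, explained two ways.
rho, _ = spearmanr(np.abs(shap_attr), np.abs(lime_attr))
print("SHAP importance ranking:", np.argsort(-np.abs(shap_attr)))
print("LIME importance ranking:", np.argsort(-np.abs(lime_attr)))
print(f"Spearman rank agreement: {rho:.2f}")
```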

We can begin to address these challenges by understanding clinicians’ perspectives on these disparities, how they influence clinical decision-making, and how the technology can be developed to mitigate potential ethical risks.

Our Response

CSIRO’s Responsible Innovation Future Science Platform and the Australian e-Health Research Centre are collaborating to better understand the effectiveness of explainable AI methods through the lens of domain experts.

This project involves AI experts and health scientists working closely with clinical practitioners to unpack how they approach inconsistencies in explanations provided by explainable AI-powered clinical decision support tools. It will focus on predictive clinical decision support tools intended for deployment in intensive care units.

The project aims to understand and assess the role of explainable AI, its effect on the decision-making process, and how variation in concordance between model output explanations affects clinical workflow. It will also outline the fundamental principles needed for trustworthy, explainable AI-supported clinical decision support tools.

Researchers will identify the role of explainable AI and the effect of explanation disparity in order to develop practical guidelines for identifying and mitigating critical risks. Specifically, this project seeks to:

  • Understand clinicians’ views on explainable AI and its role in the Australian healthcare context.
  • Generate strategies to identify and mitigate potential risks caused by disparity in explainable AI-supported clinical decision support tools, and identify other applications where these findings may apply.
  • Outline the fundamental principles needed for explainable AI-supported, trustworthy clinical decision support.

This project is the first of its kind to address the issue of explainable AI disparity and trustworthiness in AI-powered clinical decision support tool development, with the aim of fostering responsible innovation in healthcare. It will serve as a starting point to establish comprehensive guidelines that identify and mitigate critical risks in the development and use of explainable AI-supported clinical decision support tools. 

Impact

By bringing together the expert insights of clinicians and AI practitioners, this research will deliver benefits in both directions. A better understanding of clinicians’ needs and values when working with their patients will help shape the development of AI-powered clinical decision support tools that the clinicians who use them can trust.

Incorporating robust, clinician-derived guidelines will enhance the trustworthiness of these tools, mitigate risks in their design and use, and support safe adoption across the broader healthcare industry to improve patient outcomes. This multidisciplinary co-design approach will also advance CSIRO’s capabilities in building fit-for-purpose and responsible AI-driven systems for the Australian healthcare sector.

Team 

Aida Brankovic (Project Leader), Jessica Rahman, Alana Delaforce, Farah Magrabi (External collaborator, Macquarie University), DanaKai Bradford, Jane Li

Further Information

Tonekaboni, S., Joshi, S., McCradden, M. D., & Goldenberg, A. (2019). What clinicians want: Contextualizing explainable machine learning for clinical end use. In Machine Learning for Healthcare Conference (pp. 359–380). PMLR.

Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., & Lakkaraju, H. (2022). The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602.

Brankovic, A., Huang, W., Cook, D., Khanna, S., & Bialkowski, K. (2023). Elucidating discrepancy in explanations of predictive models developed using EMR. MedInfo 2023, July 2023.

Brankovic, A., Cook, D., Rahman, J., Huang, W., & Khanna, S. (2023). Evaluation of popular XAI applied to clinical prediction models: Can they be trusted? arXiv preprint arXiv:2306.11985.

Cabitza, F., Campagner, A., Ronzio, L., Cameli, M., Mandoli, G. E., Pastore, M. C., Sconfienza, L. M., Folgado, D., Barandas, M., & Gamboa, H. (2023). Rams, hounds and white boxes: Investigating human–AI collaboration protocols in medical diagnosis. Artificial Intelligence in Medicine, 138, 102506.