Possible roles of explanation in the design of automated decision aids

April 14th, 2022

Date: Nov 24th 13.00-13.50 AEDT

Speaker: Professor Liz Sonenberg

Title: Possible roles of explanation in the design of automated decision aids

Recording: https://csiro.webex.com/recordingservice/sites/csiro/recording/e2ebd79f2ef8103abff7005056bafab6/playback

Bio: Liz Sonenberg is a Professor of Information Systems at the University of Melbourne. She holds the Chancellery roles of Pro Vice Chancellor Research Systems and Pro Vice Chancellor Digital & Data, and is active in teaching and research in the Faculty of Engineering and Information Technology. Across these roles her responsibilities include oversight of the maturing array of business systems that support the University's research enterprise, and guiding the University's strategic digital and data governance and policies. Liz is a member of the Advisory Board of AI Magazine, and a member of the Standing Committee of the One Hundred Year Study on Artificial Intelligence (AI100). Her current research projects include "Strategic Deception in AI" and "Explanation in AI".

Abstract: Automated decision aids are generally intended to compensate for the inadequacies of the human decision maker and to improve the quality of the decisions made. But the presence of such aids can foster automation bias, i.e., overreliance on their advice. I will reflect on possible roles of explanation in automation-supported decision making, describe some investigations of the effect of explanations on automation bias, and briefly discuss related considerations in the design of AI systems intended to generate deceptive behaviours.