Explainable artificial intelligence: beware the inmates running the asylum (or How I learnt to stop worrying and love the social and behavioural sciences)
If you missed it, check the recording.
Apr 14, 2021 13.00 – 13.50
Speaker: Tim Miller https://people.eng.unimelb.edu.au/tmiller/
Title: Explainable artificial intelligence: beware the inmates running the asylum (or How I learnt to stop worrying and love the social and behavioural sciences)
In his seminal book “The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity”, Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge. As a result, programmers design software that works for themselves rather than for their target audience, a phenomenon he refers to as the ‘inmates running the asylum’. In this talk, I argue that explainable AI risks a similar fate if AI researchers and practitioners do not take a cross-disciplinary approach to explainable AI. I further assert that to do this, we must understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science, and focus evaluation on people instead of just technology. I will discuss some key theories on explanation in the social sciences, and will present some key examples of how we have used these in our research.
Tim is a professor of computer science and Deputy Head (Academic) in the School of Computing and Information Systems at The University of Melbourne, and Co-Director of the Centre for AI and Digital Ethics (https://law.unimelb.edu.au/centres/caide). His primary area of expertise is artificial intelligence, with a particular emphasis on human-AI interaction and explainable artificial intelligence (XAI). His work sits at the intersection of artificial intelligence, interaction design, and cognitive science/psychology.