“Making ‘black-box’ machine learning easily understandable and usable by domain experts in order to make high-quality decisions confidently.”
“Be informed and be involved in ML-based decision making.”
The ML research field frequently suffers from a disconnect between research and real-world impact because of the complexity of ML models. For a domain expert without expertise in ML or programming, an ML algorithm acts as a “black box”: the user supplies parameters and input data to the “black box” and receives output from its run. It is difficult for users to understand what is going on inside complex ML models and how they accomplish the learning task. As a result, users are uncertain about ML results, and this undermines the effectiveness of ML methods. Data61’s research concerns transparency and feedback in ML in order to:
Research on making ML transparent will help formulate guidelines and standards for the user-interaction design of ML-based intelligent applications. As a result, results from transparent ML will help end users make high-quality decisions confidently.
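The “black-box” interaction described above can be sketched in a few lines. The snippet below is purely illustrative (a toy linear scorer stands in for a real ML model, and all function and feature names are hypothetical): the opaque view returns only a score, while a transparent view breaks the same score into per-feature contributions that a domain expert can inspect.

```python
# Toy illustration of the "black box" vs. a transparent view.
# The linear scorer and all names here are illustrative stand-ins,
# not part of any Data61 system.

def black_box_predict(weights, bias, x):
    """Opaque view: inputs go in, a single score comes out."""
    return sum(w * v for w, v in zip(weights, x)) + bias

def explain_prediction(weights, bias, x, feature_names):
    """Transparent view: decompose the same score into
    per-feature contributions the user can inspect."""
    contributions = {name: w * v
                     for name, w, v in zip(feature_names, weights, x)}
    contributions["(bias)"] = bias
    return contributions

weights = [0.8, -0.5, 0.2]
bias = 0.1
x = [1.0, 2.0, 3.0]
names = ["age", "income", "tenure"]

# The end user of a black box sees only this number...
score = black_box_predict(weights, bias, x)
# ...whereas a transparent model also shows why it was produced.
breakdown = explain_prediction(weights, bias, x, names)

# The explanation is faithful: contributions sum back to the score.
assert abs(score - sum(breakdown.values())) < 1e-9
```

The point of the sketch is the faithfulness check at the end: an explanation users can trust must account exactly for the output the model actually produced.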
Data61’s research focuses on making the ML process understandable and usable by end users by evaluating end users’ experiences with HCI techniques. This includes the following steps:
With Data61’s approach, ML models are evaluated on decision quality rather than on ML results directly, which is more acceptable to both ML researchers and domain experts.
This project builds on Data61’s long-standing vision in ML and relates to a number of other projects undertaken by our research team, including:
People: Fang Chen (Technical contact), Jianlong Zhou, Constant Bridon, Yang Wang, Ronnie Taib, Ahmad Khawaji, Zhidong Li