AI you can trust, thanks to new reusable guidelines

June 29th, 2022

Over the last few years the tech community has seen a wave of regulations and principles for the creation of responsible artificial intelligence (AI) come online. QINGHUA LU and colleagues have set out to transition these principles from theory to practice. They're building guidelines that will provide concrete, reusable advice for developers, technologists, management, and regulators.

Hi Qinghua! Tell us about your research journey and how you found yourself working in responsible innovation.


Dr Qinghua Lu is leading a project with the Responsible Innovation Future Science Platform to develop concrete, reusable and operationalised design, development and governance guidelines for achieving responsible AI.

I started my research journey in 2009 during my PhD studies at the University of New South Wales and NICTA [the latter merged with CSIRO’s Digital Productivity business unit in 2015 to form CSIRO’s Data61]. I’m leading the Software Engineering for AI research team and the Responsible AI science team at Data61. I’ve been working on responsible AI since 2020, when we conducted an empirical study and found many interesting research problems in this area. One important missing element is operationalised guidelines for the different stakeholders of AI systems (i.e. developers, technologists, management, regulators and end-users). This aligned well with the scope of the Responsible Innovation (RI) Future Science Platform (FSP), so we prepared a proposal and received strong support from the RI FSP Leader Dr Justine Lacey, Data61 Director Dr Jon Whittle, and Data61 Research Director Dr Liming Zhu.

Your current research project is seeking to create design, development and governance guidelines for responsible AI. Has this evolved out of your experience as a software engineering researcher?

I’ve been working on the architecture design of blockchain-based software applications since 2016. Blockchain is considered a “trust enabler” for next-generation software applications: it can increase confidence in a software platform because trust is placed in a distributed web of actors rather than in a single central authority. We have worked on many blockchain projects at Data61, for example the Department of Foreign Affairs and Trade (DFAT) ePhyto blockchain project, the Laava Blockchain Project and the Hydrogen Accreditation project. We have also built and maintained a “blockchain patterns” website since 2020, which attracts around 1,000 visitors per month.

It’s great to hear the conversation is now shifting toward concrete solutions for responsible AI.

Yes, many ethical regulations, principles, and guidelines for responsible AI have been issued recently. However, these principles are high-level and difficult to put into practice. Meanwhile, much effort has been put into responsible AI from the algorithm perspective, but those approaches are limited to the small subset of ethical principles amenable to mathematical analysis. Responsible AI issues go beyond data and algorithms and are often at the system level: they cut across many system components and the entire software engineering lifecycle. That’s the specific problem we’re trying to address: providing developers and other stakeholders with concrete, reusable, operationalised design, development and governance guidelines for achieving responsible AI.

What is your team’s vision for the end product?

We aim to develop concrete, operationalised guidelines in the form of a pattern catalogue that developers and other stakeholders can use to build AI systems in a responsible way. The pattern catalogue includes a collection of governance patterns, process patterns, and product patterns. The governance patterns ensure that the development and use of AI systems comply with ethical regulations and standards. The idea is that they can be organised into multiple levels, including industry-level (e.g. ethical certification), organisation-level (e.g. an ethical risk assessment framework), and team-level (e.g. a diverse team).


The reusable guidelines will include a collection of governance patterns, which aim to ensure the development of responsible AI at team, organisational and industry levels.

The process patterns are best practices that can be incorporated into development processes, so that developers can consider applying them during development. The product patterns can be embedded into AI systems as product features to contribute to responsible-AI-by-design. These patterns can be used to reduce ethical risk and, moreover, can become a competitive advantage for the AI product.
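As a rough illustration of how such a catalogue might be modelled in code (a minimal sketch only; the class names, fields and example entries are illustrative assumptions, not the catalogue's actual schema), each pattern records its type and, for governance patterns, the level at which it applies:

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class PatternType(Enum):
    GOVERNANCE = "governance"  # compliance with ethical regulations and standards
    PROCESS = "process"        # best practices woven into the development process
    PRODUCT = "product"        # features embedded in the AI system itself

class GovernanceLevel(Enum):
    INDUSTRY = "industry"          # e.g. ethical certification
    ORGANISATION = "organisation"  # e.g. ethical risk assessment framework
    TEAM = "team"                  # e.g. diverse team

@dataclass
class Pattern:
    name: str
    pattern_type: PatternType
    problem: str   # the ethical concern the pattern addresses
    solution: str  # the reusable solution it describes
    level: Optional[GovernanceLevel] = None  # set only for governance patterns

# Two hypothetical entries based on the examples mentioned above:
catalogue = [
    Pattern("Ethical certification", PatternType.GOVERNANCE,
            problem="How does an industry signal that AI systems meet ethical standards?",
            solution="Certify systems against an agreed ethical standard.",
            level=GovernanceLevel.INDUSTRY),
    Pattern("Diverse team", PatternType.GOVERNANCE,
            problem="How can a team surface blind spots in design decisions?",
            solution="Staff the team with diverse backgrounds and disciplines.",
            level=GovernanceLevel.TEAM),
]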

The project kicked off late last year and you’ve already had several papers published this year. Can you share some of your key learnings to date?

To identify patterns for responsible AI, we have been performing a systematic mapping study. The main data sources include the ACM Digital Library, IEEE Xplore, ScienceDirect, SpringerLink, Google Scholar, and Google, covering both academic and industrial responsible AI solutions.

While our guidelines are mainly targeted towards developers, technologists, management and regulators, we know that the technology build is often informed by the wider social milieu and by research into user preferences: technology doesn’t operate in a vacuum, and it increasingly involves cross-disciplinary teams. That’s why we made a conscious decision to embark on this project with a multidisciplinary project team. We’re drawing on expertise in software engineering, machine learning, social science, human-machine interaction, and user experience.

The social scientists in our team are currently seeking to understand what actually contributes to people’s trust in AI, asking questions like “in which context/s would someone trust that technology?” and “how can we validate whether a pattern is really creating trust for users?”

Can you share when the guidelines might be available?

We’ve set up a very early version of the pattern catalogue website and are currently adding solution descriptions for the collected patterns. We are also writing a Responsible AI book which will contain the pattern catalogue. The book proposal has been accepted by Pearson Addison-Wesley, and the book is planned for publication early next year. We’re hoping the pattern catalogue will serve as the first set of development guidelines that academic researchers and industrial practitioners can use to achieve responsible AI, and help industry unlock the AI market.