Operationalising Responsible AI

October 14th, 2022

Artificial Intelligence (AI) has been transforming our society and is listed as a top strategic technology in many organisations. Although AI has huge potential to solve real-world challenges, there are serious concerns about its ability to behave and make decisions responsibly. Compared to traditional software systems, AI systems involve a higher degree of uncertainty and more ethical risk due to their autonomous and opaque decision making. Responsible AI refers to the ethical development of AI systems to benefit humans, society, and the environment. The concept of responsible AI has attracted huge attention from governments, organisations, and companies.

To address the responsible AI challenge, a number of AI ethics principle frameworks (e.g., Australia’s AI Ethics Principles) have been published recently, to which AI systems are supposed to conform. A consensus has formed around these AI ethics principles. A principle-based approach allows technology-neutral, future-proof, and context-specific interpretation and operationalisation. However, without further concrete tools and technologies, practitioners are left with little beyond truisms. In addition, significant effort has been put into algorithm-level solutions, which mainly focus on a subset of mathematics-amenable ethical principles (such as privacy and fairness). However, ethical issues can occur at any step of the development lifecycle, cutting across the many AI, non-AI, and data components of a system beyond AI algorithms and models.

In this project, we will examine the entire development lifecycle of AI systems, from the planning stage to the monitoring stage. Fig. 1 illustrates potential ethical issues that can occur at each stage of the AI system lifecycle.

Figure 1. Potential ethical issues at each stage of AI system lifecycle.

Objective

The objective of this project is to develop innovative software engineering tools and technologies that developers and other stakeholders can use to make both AI systems and their development processes trustworthy.


Approach

We take a risk-based approach which includes three main activities:

  1. Assessment: We first meet with AI project teams, perform an ethical risk assessment against AI ethics principles using our own ethical risk assessment tool, and recommend mitigation strategies based on our Responsible AI Pattern Catalogue and other well-known industry guidelines.
  2. Intervention: Once the recommended mitigation strategies are agreed, we develop innovative tools and technologies to address the identified ethical risks.
  3. Evaluation: We apply those tools and technologies and evaluate their effectiveness.
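The assessment step above scores ethical risks against AI ethics principles so mitigations can be prioritised. The following is a minimal illustrative sketch of such scoring; the 1–5 likelihood/consequence scales, class names, and example values are assumptions for illustration, not the project's actual scorecard.

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    principle: str      # AI ethics principle, e.g. "Fairness"
    likelihood: int     # 1 (rare) .. 5 (almost certain) -- assumed scale
    consequence: int    # 1 (negligible) .. 5 (severe) -- assumed scale

    @property
    def score(self) -> int:
        # Conventional risk-matrix score: likelihood x consequence.
        return self.likelihood * self.consequence

def prioritise(items):
    """Return risk items ordered from highest to lowest score."""
    return sorted(items, key=lambda r: r.score, reverse=True)

risks = [
    RiskItem("Fairness", likelihood=4, consequence=3),
    RiskItem("Privacy protection and security", likelihood=2, consequence=5),
    RiskItem("Transparency and explainability", likelihood=3, consequence=2),
]
top = prioritise(risks)[0]
print(top.principle, top.score)  # Fairness 12
```

In this sketch the highest-scoring principle would be the first candidate for a mitigation pattern from the catalogue.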
Figure 2. Overview of the project.
Figure 3. Overview of ethical risk assessment tool.
Figure 4. KG-supported question-answering tool for ethical risk assessment and mitigation recommendation.

Deliverables

  • End-to-end/top-to-bottom ethical risk assessment tool (see Fig. 3): The questions in the question bank will be mapped to different levels of stakeholders. Each question will be designed along four dimensions: who asks the question to perform the ethical risk assessment; who answers the question or delegates sub-questions to lower-level stakeholders; which AI ethics principle the question concerns; and which stage of the system lifecycle the question relates to. Building on the risk assessment, patterns will be turned into selectable and recomposable risk mitigations, with some quantification beyond qualitative consequences.
  • Knowledge graph supported question-answering tool for risk assessment and mitigation recommendation (see Fig. 4)
  • Selection tool for comparing and choosing the right ethical risk assessment tools in the market
  • [ Tools/technologies delivered by the future WPs ]
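The four question-bank dimensions described above can be sketched as a simple record type. Field names and example values here are hypothetical illustrations, not the tool's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssessmentQuestion:
    asked_by: str         # who asks the question
    answered_by: str      # who answers, or delegates sub-questions downward
    principle: str        # which AI ethics principle it concerns
    lifecycle_stage: str  # which stage of the AI system lifecycle

# Illustrative question bank spanning stakeholder levels and stages.
question_bank = [
    AssessmentQuestion("board", "management", "Accountability", "planning"),
    AssessmentQuestion("management", "development team", "Fairness", "data collection"),
    AssessmentQuestion("development team", "developer", "Privacy", "deployment"),
]

def select(bank, answered_by, stage):
    """Filter the bank for one stakeholder level and lifecycle stage."""
    return [q for q in bank
            if q.answered_by == answered_by and q.lifecycle_stage == stage]

print(len(select(question_bank, "management", "planning")))  # 1
```

Mapping questions this way lets each stakeholder level see only the questions it must answer at a given lifecycle stage.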

Project Team

Qinghua Lu (Project Lead), Harsha Perera (WP1 Lead), Pip Shea (WP2 Co-lead), Didar Zowghi (WP2 Co-lead), Georgina Ibarra, Chen Wang, Zhenchang Xing, Frank Sun, Mengyu Chen, Carolyn Huston, Rob Dunne, Thierry Rakotoarivelo, Jiajun Liu, Volkan Dedeoglu

Papers

  1. Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, Didar Zowghi, Aurelie Jacquet. Responsible AI Pattern Catalogue: A Multivocal Literature Review. arXiv preprint arXiv:2209.04963, 2022.
  2. Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, Zhenchang Xing. Towards a Roadmap on Software Engineering for Responsible AI. ACM/IEEE 1st International Conference on AI Engineering (CAIN’2022). ACM SIGSOFT Distinguished Paper Award.
  3. Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle. Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems. arXiv preprint arXiv:2203.00905, 2022.
  4. Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, David Douglas, Conrad Sanderson. Software engineering for responsible AI: An empirical study and operationalised patterns. arXiv preprint arXiv:2111.09478, 2021.

Contact

Qinghua Lu   Email: qinghua.lu@data61.csiro.au