Responsible AI Pattern Catalogue

Artificial Intelligence (AI) has been transforming our society and has been listed as a top strategic technology by many organizations. Although AI has huge potential to solve real-world challenges, there are serious concerns about its ability to behave and make decisions responsibly. Compared to traditional software systems, AI systems involve a higher degree of uncertainty and greater ethical risk due to their autonomous and opaque decision making. Responsible AI (RAI) refers to the ethical development of AI systems to benefit humans, society, and the environment. The concept of responsible AI has attracted significant attention from governments, organizations, and companies. According to the 2022 Gartner CIO and Technology Executive Survey, 48% of organizations have already adopted AI technologies or plan to do so within the next 12 months, while 21% have already deployed responsible AI technologies or plan to do so within the next 12 months. Responsible AI is widely considered one of the greatest scientific challenges of our time and the key to unlocking the market and increasing the adoption of AI.

To address the responsible AI challenge, a number of AI ethics principle frameworks (e.g., Australia’s AI Ethics Principles) have recently been published, to which AI systems are expected to conform. A consensus has been forming around these AI ethics principles. A principle-based approach allows technology-neutral, future-proof, and context-specific interpretation and operationalization. However, without further best-practice guidance, practitioners are left with little beyond truisms. For example, operationalizing the human-centered value principle, i.e., determining how it can be designed for, implemented, and monitored throughout the entire lifecycle of an AI system, is a challenging and complex task. In addition, significant effort has been devoted to algorithm-level solutions, which mainly focus on the subset of ethical principles amenable to mathematical treatment (such as privacy and fairness). However, ethical issues can occur at any step of the development lifecycle, cutting across many AI, non-AI, and data components of a system beyond AI algorithms and models. To fill this principle-algorithm gap, further guidance such as guidebooks, discussion-prompting questions, checklists, and documentation templates has started to appear. These efforts tend to be ad hoc sets of more detailed prompts that leave practitioners to think through the issues and devise their own solutions.

Therefore, we adopt a pattern-oriented approach and build a Responsible AI Pattern Catalogue for operationalizing responsible AI from a system perspective. In software engineering, a pattern is a reusable solution to a problem that commonly occurs within a given context in software development. Rather than staying at the ethical-principle level or the algorithm level, we focus on patterns that practitioners can apply in practice to ensure that the AI systems they develop are responsible throughout the entire software development lifecycle. As shown in Fig. 1, the Responsible AI Pattern Catalogue classifies patterns into three groups:

  • Governance patterns for establishing multi-level governance for responsible AI;
  • Process patterns for setting up trustworthy development processes;
  • Product patterns for building responsible-AI-by-design into AI systems.

Each pattern is described using an extended pattern structure: summary, type of pattern, type of objective, target users, impacted stakeholders, lifecycle stages, relevant AI ethics principles, context, problem, solution, benefits, drawbacks, related patterns, and known uses.
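To make this structure concrete, below is a minimal sketch of a pattern entry as a data model. The field names follow the structure listed above; the class name and example values are illustrative placeholders, not an actual entry from the catalogue.

```python
# A minimal sketch of the extended pattern structure as a data model.
# Field names follow the structure listed above; the example values are
# illustrative placeholders, not an actual entry from the catalogue.
from dataclasses import dataclass, field

@dataclass
class Pattern:
    name: str
    summary: str
    pattern_type: str                  # governance | process | product
    objective_type: str
    target_users: list[str]
    impacted_stakeholders: list[str]
    lifecycle_stages: list[str]
    ai_ethics_principles: list[str]
    context: str
    problem: str
    solution: str
    benefits: list[str] = field(default_factory=list)
    drawbacks: list[str] = field(default_factory=list)
    related_patterns: list[str] = field(default_factory=list)
    known_uses: list[str] = field(default_factory=list)

# Hypothetical example entry, for illustration only.
example = Pattern(
    name="Continuous deployment monitor (illustrative)",
    summary="Monitor a deployed AI system for signals of ethical risk.",
    pattern_type="product",
    objective_type="trustworthiness",
    target_users=["development teams"],
    impacted_stakeholders=["AI users", "AI impacted subjects"],
    lifecycle_stages=["operation"],
    ai_ethics_principles=["reliability and safety", "accountability"],
    context="An AI system is deployed and its behavior may drift over time.",
    problem="How can ethical breaches be detected after deployment?",
    solution="Log decisions and monitor them against predefined risk metrics.",
)
```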

Fig. 1. Overview of the Responsible AI Pattern Catalogue.

AI System Stakeholders

As illustrated in Fig. 2, AI system stakeholders are classified into three groups:

  • Industry-level stakeholders
    • AI technology producers: those who develop AI technologies on top of which others build AI solutions, e.g., parts of Google, Microsoft, and IBM. AI technology producers may embed RAI in their technologies and/or provide additional RAI tools.
    • AI technology procurers: those who procure AI technologies to build their in-house AI solutions, e.g., companies or government agencies buying/using AI platforms/tools. AI technology procurers may care about RAI issues and embed RAI into their AI technology procurement processes.
    • AI solution producers: those who develop in-house/blended unique solutions on top of technology solutions and need to ensure that the solutions adhere to RAI principles/standards/regulations, e.g., the parts of Microsoft/Google providing Office/Gmail “solutions”. They may offer the solutions to AI consumers directly or sell them to others. They may use RAI tools (provided by technology producers or third parties) and RAI processes during solution development.
    • AI solution procurers: those who procure complete AI solutions (with some further configuration and instantiation) to use internally or offer to external AI consumers, e.g., a government agency buying a complete solution from vendors. They may care about RAI issues and embed RAI into their AI solution procurement processes.
    • AI users: those who use an AI solution to make decisions that may impact a subject, e.g., a loan officer or a government employee. AI users may exercise additional RAI oversight as the human-in-the-loop.
    • AI impacted subjects: those who are impacted by decisions made by AI-human dyads, e.g., a loan applicant or a taxpayer. AI impacted subjects may contest a decision on RAI grounds.
    • AI consumers: those who consume AI solutions (e.g., voice assistants, search engines, recommender engines) for their personal use (not affecting third parties). AI consumers may care about the RAI aspects of AI solutions.
    • RAI governors: those who set and enable RAI policies and controls within their culture. RAI governors could be internal functions within any of the organizations above or external bodies (regulators, consumer advocacy groups, the community).
    • RAI tool producers: technology vendors and dedicated companies offering RAI features integrated into AI platforms or AIOps/MLOps tools.
    • RAI tool procurers: any of the above stakeholders who purchase or use RAI tools to improve or check the RAI aspects of their solutions/technologies.
  • Organization-level stakeholders
    • Management teams: individuals at the upper levels of an organization who are responsible for establishing the RAI governance structure and achieving RAI at the organization level. Management teams include board members, executives, and (middle-level) managers for legal, compliance, privacy, security, risk, and sustainability.
    • Employees: individuals who are hired by an organization to perform work and who are expected to adhere to RAI principles in that work.
  • Team-level stakeholders
    • Development teams: those who are responsible for developing and deploying AI systems, including product managers, project managers, team leaders, business analysts, architects, UX/UI designers, data scientists, developers, testers, and operators. Development teams are expected to implement RAI in their development processes and embed RAI into the product design of AI systems.

Fig. 2. AI system stakeholders.

Governance Patterns

We identify a set of governance patterns and classify them into industry-level, organization-level, and team-level governance patterns (see Fig. 3). The target users of the industry-level governance patterns are RAI governors, while the impacted stakeholders include AI technology producers and procurers, AI solution producers and procurers, and RAI tool producers and procurers. For the organization-level patterns, the target users are management teams and the impacted stakeholders are employees. The target users of the team-level patterns are development teams. This mapping is restated in the sketch below.
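As a compact restatement, the following encodes the governance levels, their target users, and their impacted stakeholders as plain Python dictionaries. The names follow the stakeholder taxonomy in Fig. 2; this is illustrative documentation data, not part of the catalogue itself.

```python
# A compact restatement of the governance-pattern mapping described above.
# Stakeholder names follow the taxonomy in Fig. 2.
governance_levels = {
    "industry": {
        "target_users": ["RAI governors"],
        "impacted_stakeholders": [
            "AI technology producers", "AI technology procurers",
            "AI solution producers", "AI solution procurers",
            "RAI tool producers", "RAI tool procurers",
        ],
    },
    "organization": {
        "target_users": ["management teams"],
        "impacted_stakeholders": ["employees"],
    },
    "team": {
        "target_users": ["development teams"],
        "impacted_stakeholders": [],  # not specified in the text above
    },
}
```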

Fig. 3. Governance patterns for responsible AI.

 

Industry-level governance patterns

Organization-level governance patterns

Team-level governance patterns

Process Patterns

We identify process-oriented patterns (i.e., best practices) that can be incorporated into development processes, so that developers can consider applying them throughout the development lifecycle. Fig. 4 depicts the software development lifecycle and the potential ethical risks and breaches corresponding to each stage, while Fig. 5 summarizes the patterns for the different stages.

Fig. 4. Development process lifecycle and potential ethical risks.

Fig. 5. Process patterns for responsible AI system development.

 

Patterns for requirement engineering stage

Patterns for design stage

Patterns for implementation stage

Patterns for testing stage 

Patterns for operation stage

Product Patterns

Product patterns provide system-level guidance on how to design the architecture of responsible AI systems, so that responsible-AI-by-design can be built into AI systems. Broadly, an AI system comprises three layers: the supply chain layer, which produces the software components that make up the AI system; the system layer, which is the deployed AI system itself; and the operation infrastructure layer, which provides auxiliary functions to the AI system. Fig. 6 presents the identified product patterns for each of the three layers; these product patterns can be embedded into AI ecosystems as product features. Fig. 7 illustrates a state diagram of a provisioned AI system and highlights the patterns associated with the relevant states and transitions, showing when each product pattern can take effect. Fig. 8 gives a pattern-oriented responsible-AI-by-design reference architecture.
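As a minimal sketch of this three-layer view, the following models each layer with the responsibilities stated above. The product-pattern names attached to each layer are hypothetical placeholders indicating where patterns could hook in, not entries from Fig. 6.

```python
# A minimal sketch of the three-layer view of an AI system described above.
# Layer names and responsibilities come from the text; the attached product
# patterns are hypothetical placeholders, not entries from Fig. 6.
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    responsibility: str
    product_patterns: list[str] = field(default_factory=list)

ai_system_layers = [
    Layer(
        name="supply chain",
        responsibility="produces the software components that make up the AI system",
        product_patterns=["component provenance tracking"],   # placeholder
    ),
    Layer(
        name="system",
        responsibility="the deployed AI system itself",
        product_patterns=["human oversight of decisions"],    # placeholder
    ),
    Layer(
        name="operation infrastructure",
        responsibility="provides auxiliary functions to the deployed AI system",
        product_patterns=["continuous ethical monitoring"],   # placeholder
    ),
]
```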

Fig. 6. Product patterns for responsible-AI-by-design architecture of an AI system.

 

Fig. 7. Product patterns for responsible-AI-by-design.

 

Fig. 8. Pattern-oriented responsible-AI-by-design reference architecture.

Our Papers

  1. Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, Didar Zowghi, Aurelie Jacquet. Responsible AI Pattern Catalogue: A Multivocal Literature Review. arXiv preprint arXiv:2209.04963, 2022.
  2. Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, Zhenchang Xing. Towards a Roadmap on Software Engineering for Responsible AI. ACM/IEEE 1st International Conference on AI Engineering (CAIN’2022). ACM SIGSOFT Distinguished Paper Award.
  3. Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle. Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems. arXiv preprint arXiv:2203.00905, 2022.
  4. Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, David Douglas, Conrad Sanderson. Software engineering for responsible AI: An empirical study and operationalised patterns. arXiv preprint arXiv:2111.09478, 2021.

Contact

Qinghua Lu qinghua.lu@data61.csiro.au