AI System Governance Patterns

Governance for responsible AI (RAI) can be defined as the structures and processes designed to ensure that AI systems are developed and used in compliance with ethical principles, regulations, and responsibilities. As shown in Fig. 1, governance can be built at three levels based on Shneiderman's [1] governance structure: industry-level, organization-level, and team-level.


Fig. 1 Stakeholders for RAI governance.


AI System Stakeholders

As illustrated in Fig. 1, AI system stakeholders are classified into three groups:

  • Industry-level stakeholders
    • AI technology producers: develop AI technologies for others to build on top to produce AI solutions, e.g., parts of Google, Microsoft, IBM. AI technology producers may embed RAI in their technologies and/or provide additional responsible AI tools.
    • AI technology procurers: procure AI technologies to build their in-house AI solutions, e.g., companies or government agencies buying/using AI platform/tools. AI technology procurers may care about responsible AI issues and embed responsible AI into their AI technology procurement process.
    • AI solution producers: develop in-house/blended unique solutions on top of technology solutions and need to make sure the solutions adhere to responsible AI principles/standards/regulations, e.g., parts of Microsoft/Google providing Office/Gmail “solutions”. They may offer the solutions to AI consumers directly or sell them to others. They may use responsible AI tools (provided by technology producers or third parties) and responsible AI processes during their solution development.
    • AI solution procurers: procure complete AI solutions (with some further configuration and instantiation) to use internally or offer to external AI consumers, e.g., a government agency buying a complete solution from a vendor. They may care about responsible AI issues and embed responsible AI into their AI solution procurement process.
    • AI users: those who use an AI solution to make decisions that may impact a subject, e.g., a loan officer or a government employee. AI users may exercise additional responsible AI oversight as the human in the loop.
    • AI impacted subjects: those who are impacted by decisions made by a human–AI dyad, e.g., a loan applicant or a taxpayer. AI impacted subjects may contest such decisions on responsible AI grounds.
    • AI consumers: those who consume AI solutions (e.g., voice assistants, search engines, recommender engines) for their personal use (not affecting third parties). AI consumers may care about the responsible AI aspects of AI solutions.
    • RAI governors: those that set and enable responsible AI policies and controls within their culture. RAI governors could be functions within an organization in the above list or external (government policy, consumer advocacy groups, community).
    • RAI tool procurers: any of the above stakeholders who may purchase or use RAI tools to improve or check the RAI aspects of solutions/technologies.
    • RAI tool/feature providers: Technology vendors and dedicated companies offering RAI features integrated into AI platforms or AIOps/MLOps tools.
  • Organization-level stakeholders
    • Management teams: individuals at the higher levels of an organization who are responsible for managing it, including board members, executives, and (middle-level) managers.
    • Employees: individuals who are hired by an AI organization to perform work for the organization.
  • Team-level stakeholders
    • Development teams: those who are responsible for developing and deploying AI systems, including product managers, project managers, team leaders, business analysts, architects, UX/UI designers, data scientists, developers, testers, and operators.
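The three-level stakeholder taxonomy above can be sketched as a simple data structure. This is only an illustrative model of the classification in Fig. 1; the names `GovernanceLevel`, `Stakeholder`, and `by_level` are assumptions introduced here, not part of any published framework, and only a few representative stakeholders are listed.

```python
from dataclasses import dataclass
from enum import Enum

class GovernanceLevel(Enum):
    """The three governance levels from Shneiderman's structure [1]."""
    INDUSTRY = "industry"
    ORGANIZATION = "organization"
    TEAM = "team"

@dataclass(frozen=True)
class Stakeholder:
    name: str
    level: GovernanceLevel
    rai_concern: str  # how this stakeholder engages with responsible AI

# A representative subset of the stakeholders described above.
STAKEHOLDERS = [
    Stakeholder("AI technology producer", GovernanceLevel.INDUSTRY,
                "embed RAI in technologies and/or provide RAI tools"),
    Stakeholder("AI solution producer", GovernanceLevel.INDUSTRY,
                "use RAI tools and processes during solution development"),
    Stakeholder("RAI governor", GovernanceLevel.INDUSTRY,
                "set and enable RAI policies and controls"),
    Stakeholder("Management team", GovernanceLevel.ORGANIZATION,
                "manage the organization, including its RAI responsibilities"),
    Stakeholder("Development team", GovernanceLevel.TEAM,
                "develop and deploy AI systems responsibly"),
]

def by_level(level: GovernanceLevel) -> list[str]:
    """Return the names of stakeholders registered at a governance level."""
    return [s.name for s in STAKEHOLDERS if s.level is level]

print(by_level(GovernanceLevel.INDUSTRY))
# ['AI technology producer', 'AI solution producer', 'RAI governor']
```

Grouping stakeholders by governance level in this way mirrors how the governance patterns in the next section are organized: each pattern targets the stakeholders of one level.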

Fig. 2 Governance patterns for responsible AI.

Multi-Level Governance Patterns

  • Industry-level patterns

Table 1. Governance patterns overview


References

[1] Ben Shneiderman. 2020. Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-Centered AI Systems. ACM Trans. Interact. Intell. Syst. 10, 4 (2020), 31 pages.