RAI Standards

Summary: RAI standards are typically voluntary documents that set out specifications, procedures, and guidelines for developing responsible AI systems.

Type of pattern: Governance pattern

Type of objective: Trustworthiness

Target users: RAI governors

Impacted stakeholders: AI technology producers and procurers, AI solution producers and procurers, RAI tool producers and procurers

Lifecycle stages: All stages

Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability

Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 standard

Context: AI systems often use data or components from multiple jurisdictions, which may impose different regulatory requirements on their use. For example, one jurisdiction may have stricter rules around data privacy, whereas another may have more lenient rules. These differences can create conflicting requirements for the data or components within the AI system, making it difficult to ensure that the AI system complies with all relevant laws and regulations.

Problem: How can we ensure AI systems are trustworthy while avoiding interoperability issues between jurisdictions?

Solution: To facilitate interoperability between jurisdictions, RAI standards can be developed that describe internationally recognized, repeatable processes for creating responsible AI systems. The process of developing an AI standard usually begins with the submission of a development proposal by the professional community. Once a proposal is approved, it is assigned to a technical committee, which manages working groups to draft the standard. The draft standard is then made available for public comment before it is finalized and released. RAI standards can be adopted on a voluntary or mandatory basis and may be referenced in AI legislation by governments.
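
In practice, an organization adopting an RAI standard may track its conformance to individual requirements internally. The sketch below is a minimal, hypothetical illustration in Python of such a conformance checklist; the clause identifiers, wording, and status values are assumptions for illustration and are not taken from any particular standard.

    from dataclasses import dataclass, field
    from enum import Enum


    class Status(Enum):
        """Conformance status recorded for each requirement of the standard."""
        NOT_STARTED = "not started"
        IN_PROGRESS = "in progress"
        CONFORMANT = "conformant"


    @dataclass
    class Requirement:
        """One requirement drawn from an RAI standard, tracked internally."""
        clause_id: str                  # placeholder identifier, not a real clause number
        description: str                # what the clause asks the organization to do
        status: Status = Status.NOT_STARTED
        evidence: list = field(default_factory=list)  # internal documents or audit records


    def summarize(requirements):
        """Count requirements per conformance status for reporting to RAI governors."""
        counts = {status: 0 for status in Status}
        for requirement in requirements:
            counts[requirement.status] += 1
        return counts


    if __name__ == "__main__":
        # Hypothetical checklist entries; clause identifiers and wording are illustrative only.
        checklist = [
            Requirement("RAI-6.1", "Plan actions to address AI-related risks",
                        Status.IN_PROGRESS, ["risk-register.xlsx"]),
            Requirement("RAI-9.2", "Conduct internal audits of AI management processes"),
        ]
        print(summarize(checklist))

Such a checklist could support either voluntary self-assessment or evidence gathering where conformance is mandated.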

Benefits:

  • Consistency: By providing a consistent statement of the level of trustworthiness users can expect, RAI standards help ensure that users can trust AI systems that meet them.
  • Interoperability: RAI standards can facilitate interoperability between different regulatory approaches and ensure that AI systems are developed and used responsibly across jurisdictional boundaries.

Drawbacks:

  • Barrier to innovation: Once organizations begin following an RAI standard, there may be less emphasis on new design or process methods.
  • Difficulty of modification: It may be challenging to modify a well-adopted standard if issues are identified or if new AI technologies require updates to the standards.

Related patterns:

  • RAI certification: Ethical certification can be granted to AI systems that meet certain RAI standards, demonstrating their providers' commitment to AI ethics principles.

Known uses: