Code for RAI

Summary: A code of RAI is a set of guidelines that employees within an organization are expected to follow when developing or operating AI systems.

Type of pattern: Governance pattern

Type of objective: Trustworthiness

Target users: Management teams

Impacted stakeholders: Employees, AI users, AI-impacted subjects, AI consumers

Lifecycle stages: All stages

Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability

Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 standard

Context: The adoption of AI is a central aspect of the digital transformation process for organizations. AI has been used across the value chain to create additional value. However, there is a risk that AI systems may make incorrect decisions or behave inappropriately, for example by causing harm to humans or making poor purchasing decisions.

Problem: What are ways to guide AI-related activities within an organization?

Solution: A code of RAI is a set of principles and guidelines that directs employees in the development and use of AI systems. It defines the intended purpose of these systems and how employees are expected to develop and use them; for example, it outlines the ethical boundaries that employees must not cross.
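A code of RAI is organizational policy rather than software, but one way to operationalize parts of it is to capture selected rules in machine-readable form so that proposed AI projects can be screened against them. The sketch below is purely illustrative: the rule categories, field names, and the `screen_proposal` function are assumptions for this example, not part of the pattern itself.

```python
# Illustrative sketch (assumed structure, not prescribed by the pattern):
# a fragment of a code of RAI encoded as data, plus a screening helper
# that reports where a proposed AI project violates it.

CODE_OF_RAI = {
    # Purposes the code forbids outright.
    "prohibited_purposes": {"mass surveillance", "social scoring"},
    # Practices the code requires of every AI project.
    "required_practices": {"human oversight", "impact assessment"},
}

def screen_proposal(purpose: str, practices: set[str]) -> list[str]:
    """Return a list of violations of the code of RAI (empty if compliant)."""
    violations = []
    if purpose in CODE_OF_RAI["prohibited_purposes"]:
        violations.append(f"prohibited purpose: {purpose}")
    # Flag each required practice the proposal fails to include.
    for practice in sorted(CODE_OF_RAI["required_practices"] - practices):
        violations.append(f"missing required practice: {practice}")
    return violations
```

For instance, a proposal for a customer-support chatbot that plans human oversight but no impact assessment would be flagged for the missing practice, while one covering both required practices would pass. Such automated checks can complement, but not replace, the guidance and culture-setting role of the code itself.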

Benefits:
  • Guidance for employees: A code of RAI provides employees involved in AI-related activities with clear guidance on the development and use of AI systems.
  • Same rules: When an organization has a code of RAI, everyone from the executive team to the development team follows the same rules, which helps uphold the organization’s values and shape its culture.

Drawbacks:
  • High-level guidelines: Reading the code of RAI may give employees a general understanding of RAI principles and guidelines, but it does not necessarily change their behavior.
  • Difficult to enforce: The black-box nature of AI makes it extremely difficult to determine the specific factors behind an AI system’s decisions and to hold employees accountable for their actions when working with AI.

Related patterns:

Known uses: