RAI Risk Assessment

Summary: An RAI risk assessment is conducted to measure the likelihood and consequence of the potential RAI risks associated with the development and use of AI systems.

Type of pattern: Governance pattern

Type of objective: Trustworthiness

Target users: Management teams

Impacted stakeholders: Employees, AI users, AI impacted subjects, AI consumers

Lifecycle stages: All stages

Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability

Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.

Context: Despite the widespread adoption of AI across various domains, there are many concerns that failures of complex and opaque AI systems may have significant negative consequences for individuals, organizations, and society, and may cause more harm than benefit. RAI laws and regulations are still in their early stages. RAI risks can occur at any stage of the AI system's lifecycle and cut across AI components, non-AI components, and data components.

Problem: How can we assess the RAI risks associated with AI systems?

Solution: An organization needs to design an RAI risk assessment framework, or extend its existing IT risk assessment framework, to include ethical considerations for AI systems. The RAI risk assessment should be adaptable so that it can effectively address domain-specific risks (such as those in the military or healthcare domains) and emerging risks in constantly evolving AI systems. The framework should be co-designed with key stakeholders, including the RAI risk committee, development teams, and prospective purchasers, in a dynamic, adaptive, and extensible manner that takes into account contextual factors such as culture, application domain, and level of automation. The risk assessment process can be effectively guided by checklists or questions. To avoid subjective views on risk assessment outcomes, it is important to incorporate concrete risk metrics and measurements when calculating the risk assessment score.
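
The scoring step described above can be illustrated with a minimal sketch. This is a hypothetical example, not part of any specific framework: it assumes a conventional 5-point risk matrix where a risk score is the product of likelihood and consequence ratings, and the scale names, thresholds, and `RiskItem` type are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical 5-point ordinal scales; a real framework defines its own
# metrics and calibrates them with stakeholders.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}


@dataclass
class RiskItem:
    """One identified RAI risk, tied to an ethics principle."""
    principle: str    # e.g. "privacy", "fairness"
    likelihood: str   # key into LIKELIHOOD
    consequence: str  # key into CONSEQUENCE

    def score(self) -> int:
        # Classic risk-matrix score: likelihood x consequence (1..25).
        return LIKELIHOOD[self.likelihood] * CONSEQUENCE[self.consequence]

    def rating(self) -> str:
        # Illustrative thresholds; these would be set by the RAI risk committee.
        s = self.score()
        if s >= 15:
            return "high"
        if s >= 6:
            return "medium"
        return "low"


risks = [
    RiskItem("privacy", "possible", "major"),
    RiskItem("fairness", "unlikely", "moderate"),
]
for r in risks:
    print(f"{r.principle}: score={r.score()}, rating={r.rating()}")
```

Using explicit numeric scales like this makes the scoring reproducible across assessors, which directly addresses the subjectivity concern noted under the drawbacks below.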


Benefits:

  • Enabled oversight: Conducting an RAI risk assessment is a crucial step in RAI governance to enable oversight.
  • Identification of RAI risks: RAI risk assessment helps to identify potential RAI risks associated with the development and use of AI systems.
  • Enforced controls: Once the RAI risks have been identified, organizations can implement controls to mitigate and manage those risks.


Drawbacks:

  • Subjective view: The assessment process may involve subjective judgement when rating risks, particularly for principles that are difficult to quantify.
  • One-off assessment: Currently, the RAI risk assessment is often performed as a one-time event, rather than being integrated into ongoing risk management processes.
  • Overemphasis on risk assessment: It is essential to strike a balance between risk assessment and mitigation. While risk assessment provides valuable insights into the likelihood and consequence of potential risks, it may inadvertently divert attention and resources away from proactive risk mitigation efforts.

Related patterns:

Known uses: