Extensible, Adaptive and Dynamic RAI Risk Assessment

Summary: It is essential to perform continuous risk assessment and mitigation for RAI systems.

Type of pattern: Process pattern

Type of objective: Trustworthiness

Target users: Operators

Impacted stakeholders: Developers, business analysts, AI users, AI consumers

Lifecycle stages: Operation

Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability

Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.

Context: The current risk-based approach to implementing RAI often involves a one-off, "done once and forgotten" algorithm-level risk assessment and mitigation for a subset of ethics principles, such as privacy or fairness, at a particular development step. However, this approach is inadequate for highly uncertain, continually learning AI systems. Furthermore, the context of AI systems can vary greatly across application domains, organizations, cultures, and regions.

Problem: How can we measure the extent to which an AI system adheres to AI ethics principles in a given context?

Solution: Risk assessment and mitigation for RAI systems must be performed continuously, rather than as a one-off exercise. The RAI risk assessment framework can be built with explicit extension points for different contexts, such as cultural context, so that new contexts can be accommodated without redesigning the framework. Risk mitigation can be approached in three ways: reducing the frequency of occurrence, decreasing the size of the consequences, and improving the response to consequences.
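As a rough illustration of these ideas, the sketch below shows one possible shape for an extensible risk assessor: context-specific adjusters (the "extension points") can be registered without changing the core assessment logic, and the three mitigation levers act on a simple likelihood-times-consequence risk score. All class and function names here are hypothetical, invented for this sketch; they do not come from any named framework or standard.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Risk:
    """A single identified RAI risk, scored as likelihood x consequence."""
    name: str
    likelihood: float   # probability of occurrence, in [0, 1]
    consequence: float  # severity of impact, in [0, 1]

    @property
    def score(self) -> float:
        return self.likelihood * self.consequence

# Extension point: a context-specific adjuster (e.g. for cultural context)
# maps a risk to a revised risk. New contexts register new adjusters
# without modifying the core assessment code.
ContextAdjuster = Callable[[Risk], Risk]

@dataclass
class RiskAssessor:
    adjusters: List[ContextAdjuster] = field(default_factory=list)

    def register(self, adjuster: ContextAdjuster) -> None:
        self.adjusters.append(adjuster)

    def assess(self, risks: List[Risk]) -> List[Risk]:
        # Apply every registered context extension, then rank by score.
        for adjust in self.adjusters:
            risks = [adjust(r) for r in risks]
        return sorted(risks, key=lambda r: r.score, reverse=True)

# The three mitigation levers named in the solution text. Improving the
# response to consequences is organizational and is modeled here only as
# a reduction in effective consequence once a response plan exists.
def reduce_frequency(r: Risk, factor: float) -> Risk:
    return Risk(r.name, r.likelihood * factor, r.consequence)

def reduce_consequence(r: Risk, factor: float) -> Risk:
    return Risk(r.name, r.likelihood, r.consequence * factor)
```

Because the assessor is re-run continuously over the operation stage, an adjuster added later (say, for a new deployment region) immediately reshapes the risk ranking, which is the "adaptive and dynamic" part of the pattern.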


Benefits:

  • Better alignment with context: By considering various extension points (such as cultural context), the RAI risk assessment and mitigation process can be better aligned with the specific context in which the AI system is operating.
  • Reduced legal and reputational risks: By continuously identifying and mitigating risks, the AI system is less likely to violate laws or suffer reputational damage.


Drawbacks:

  • Limited measurability: Some ethics principles are difficult to measure quantitatively.

Related patterns:

  • RAI risk assessment: The ethical risk assessment framework of an organization can be designed in an extensible, adaptive, and dynamic way.

Known uses:

  • NIST is developing an AI Risk Management Framework to improve AI trustworthiness.
  • The ISO/IEC JTC 1/SC 42 committee is developing ISO/IEC 23894 on artificial intelligence risk management.
  • The Government of Canada has released the Algorithmic Impact Assessment tool to identify the risks associated with automated decision-making systems.
  • The Australian NSW Government mandates that all its agencies developing AI systems go through the NSW AI Assurance Framework.
  • OpenAI has introduced a range of risk measurement methods: 1) various safety measures to assess the likelihood of GPT-4 generating undesired outputs; 2) predicting future model capabilities via scaling laws and emergent capabilities; 3) estimating acceleration risk by recruiting expert forecasters.