RAI Risk Assessment
Summary: An RAI risk assessment is conducted to measure the likelihood and consequence of the potential RAI risks associated with the development and use of AI systems.
Type of pattern: Governance pattern
Type of objective: Trustworthiness
Target users: Management teams
Impacted stakeholders: Employees, AI users, AI impacted subjects, AI consumers
Lifecycle stages: All stages
Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability
Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.
Context: Despite the widespread adoption of AI across various domains, there are concerns that failures of complex and opaque AI systems may have significant negative consequences for individuals, organizations, and society, and may cause more harm than benefit. RAI laws and regulations are still in their early stages. RAI risks can occur at any stage of the AI system’s lifecycle, cutting across AI components, non-AI components, and data components.
Problem: How can we assess the RAI risks associated with AI systems?
Solution: An organization needs to design an RAI risk assessment framework, or extend its existing IT risk assessment framework, to include ethical considerations for AI systems. The RAI risk assessment should be adaptable so that it can effectively address domain-specific risks (such as those in the military or healthcare domains) and emerging risks in constantly evolving AI systems. The framework should be co-designed with key stakeholders, including the RAI risk committee, development teams, and prospective purchasers, in a dynamic, adaptive, and extensible manner that takes into account various contextual factors, such as culture, application domain, and level of automation. The risk assessment process can be effectively guided by checklists or questions. To avoid subjective views on risk assessment outcomes, it is important to incorporate concrete risk metrics and measurements when calculating the risk assessment score.
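The score calculation described above can be sketched with a conventional likelihood × consequence matrix. This is an illustrative example only, not a prescribed method from any of the frameworks named in this pattern; the five-point scales, the multiplication rule, and the level thresholds are all assumptions that an organization would replace with its own calibrated metrics.

```python
# Illustrative sketch of a likelihood x consequence risk scoring scheme.
# The 5-point scales, score = likelihood * consequence rule, and level
# thresholds below are assumptions for demonstration, not a standard.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_score(likelihood: str, consequence: str) -> int:
    """Combine rated likelihood and consequence into a single score (1-25)."""
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

def risk_level(score: int) -> str:
    """Map a numeric score onto a qualitative risk level (assumed thresholds)."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Example: a fairness risk rated as "possible" in likelihood with
# "major" consequence yields score 3 * 4 = 12, i.e. "medium".
score = risk_score("possible", "major")
level = risk_level(score)
```

Anchoring each rating to defined metrics (e.g., a measured disparity threshold for fairness, or an incident rate for safety) is what keeps the resulting score comparable across assessors and over time.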
Benefits:
- Enabled oversight: Conducting an RAI risk assessment is a crucial step in the RAI governance process to enable oversight.
- Identification of RAI risks: RAI risk assessment helps to identify potential RAI risks associated with the development and use of AI systems.
- Enforced controls: Once the RAI risks have been identified, organizations can implement controls to mitigate and manage those risks.
Drawbacks:
- Subjective view: The assessment process may involve subjective judgement when rating the risk, particularly for principles that are difficult to quantify.
- One-off assessment: Currently, RAI risk assessments are often performed as one-time events, rather than being integrated into ongoing risk management processes.
- Overemphasis on risk assessment: It is essential to strike a balance between risk assessment and mitigation. While risk assessment provides valuable insights into the likelihood and consequence of potential risks, it may inadvertently divert attention and resources away from proactive risk mitigation efforts.
Related patterns:
- RAI risk committee: Conducting an RAI risk assessment is an important responsibility for an RAI risk committee.
- Standardized reporting: The results of ethical risk assessment should be reported to RAI governors.
- Failure mode and effects analysis (FMEA): FMEA is an RAI risk assessment method that is commonly used in the development process and product design.
- Extensible, adaptive and dynamic RAI risk assessment: The ethical risk assessment framework of an organization can be designed in an extensible, adaptive, and dynamic way.
Known uses:
- ISO/IEC JTC 1/SC 42 committee is developing ISO/IEC 23894 on Artificial Intelligence and Risk Management.
- NIST released the initial draft of the AI Risk Management Framework, which provides a standard process for managing the risks of AI systems.
- The Canadian government has released the Algorithmic Impact Assessment tool to identify the risks associated with automated decision-making systems.
- The Australian NSW government mandates that all of its agencies developing AI systems complete the NSW AI Assurance Framework.
- CSIRO has developed a question bank for AI risk assessment. The questions were extracted from five major AI risk assessment frameworks.