Failure Mode and Effects Analysis (FMEA)
Summary: FMEA is a bottom-up risk assessment method for identifying and analyzing RAI risks.
Type of pattern: Governance pattern
Type of objective: Trustworthiness
Target users: Project managers
Impacted users: Development teams
Lifecycle stages: Requirements engineering, testing, operation
Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability
Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.
Context: Ethical issues in AI systems are often identified only through extensive simulation and testing in the later stages of development, which can cause significant delays to project timelines and increase development costs. By identifying and addressing ethical issues early in the development process, development teams can mitigate them before they become costly to fix and avoid schedule delays.
Problem: How can we ensure ethical quality from the beginning of the development process?
Solution: FMEA is a systematic, qualitative method for identifying and evaluating potential RAI risks [1]. This bottom-up approach gives the development team a comprehensive understanding of potential failure modes, their causes, and their impacts on the system and its users. FMEA also provides a clear view of the mitigation actions needed to reduce the occurrence and impact of each failure and to increase the probability of detecting it. When applying FMEA, it is essential to consider not only technical failures but also ethical failures that may lead to ethical dilemmas.
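As an illustration of how such an analysis can be recorded, the Python sketch below shows a minimal FMEA worksheet that ranks failure modes by the conventional Risk Priority Number (RPN = severity x occurrence x detection). The scoring scales, failure modes, and mitigations are hypothetical examples, not part of the pattern itself; a development team would define its own for the system under analysis.

```python
"""Minimal FMEA-style worksheet sketch for RAI risk assessment.

Illustrative only: the 1-10 scales and RPN scoring are common FMEA
conventions, and every concrete entry below is hypothetical.
"""
from dataclasses import dataclass


@dataclass
class FailureMode:
    component: str      # system element under analysis
    failure: str        # how it can fail (including ethical failures)
    effect: str         # impact on the system and its users
    cause: str          # root cause of the failure mode
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detection: int      # 1 (almost certain to detect) .. 10 (undetectable)
    mitigation: str = ""

    @property
    def rpn(self) -> int:
        """Risk Priority Number used to rank failure modes."""
        return self.severity * self.occurrence * self.detection


# Hypothetical entries for an AI-enabled loan approval system.
worksheet = [
    FailureMode("Training data pipeline", "Historical bias reproduced",
                "Unfair rejection of a protected group", "Skewed sampling",
                severity=9, occurrence=6, detection=7,
                mitigation="Add fairness metrics to data validation"),
    FailureMode("Explanation module", "Explanation missing for edge cases",
                "Users cannot contest decisions", "Unhandled input range",
                severity=7, occurrence=4, detection=5,
                mitigation="Fallback template plus manual review"),
]

# Rank failure modes so mitigation effort targets the highest RPN first.
for fm in sorted(worksheet, key=lambda f: f.rpn, reverse=True):
    print(f"RPN={fm.rpn:4d}  {fm.component}: {fm.failure} -> {fm.mitigation}")
```

Sorting by RPN is one way to prioritize mitigation actions; teams may also set severity thresholds so that high-severity ethical failures are addressed regardless of their overall score.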
Benefits:
- Improved ethical quality: by systematically analyzing potential ethical risks, FMEA helps prevent ethical failures from occurring in the first place.
- Ease of use: FMEA is relatively easy to use in practice.
- Early identification: FMEA identifies ethical failures early and helps avoid schedule delays.
Drawbacks:
- Limited by expertise: FMEA relies on experts to apply their professional knowledge and experience to the RAI risk assessment process. Thus, the quality of the analysis is limited by the expertise of the team performing it.
- Missing failures: FMEA is suited to bottom-up analysis and may fail to detect complex system-level ethical failures that require a holistic perspective.
Related patterns:
- RAI Risk Assessment: FMEA is a method of RAI risk assessment focusing on the development process and product design.
- Fault Tree Analysis (FTA): FTA works top-down from an ethical failure to the root causes that may lead to it, whereas FMEA works bottom-up from individual failure modes to their effects.
Known uses:
- FMEA was originally proposed in the US Armed Forces Military Procedures document MIL-P-1629 in 1949 and revised as MIL-STD-1629A in 1980.
- Ford Motor Company introduced FMEA to the automotive industry in the mid-1970s to assess safety risks.
- FMEA has been extended and adapted in Toyota’s Design Review Based on Failure Modes (DRBFM) for assessing potential risk and reliability in automotive and non-automotive applications.
References:
[1] Ebert, C. and M. Weyrich, Validation of Autonomous Systems. IEEE Software, 2019. 36(5): p. 15-23.