Verifiable RAI Requirement
Summary: RAI requirements should be expressed in a verifiable form so that an AI system's compliance with AI ethics principles can be confirmed during development.
Type of pattern: Process pattern
Type of objective: Trustworthiness
Target users: Business analysts
Impacted stakeholders: Developers, data scientists, testers, operators
Lifecycle stages: Requirements
Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability
Mapping to AI regulations/standards: ISO/IEC 42001:2023 (AI management systems).
Context: The development of AI systems must be guided by AI ethics principles. These principles are generally abstract and domain-agnostic. RAI requirements should be derived from the AI ethics principles to fit the specific domain and system context. By defining RAI requirements early in the AI system development process, the development team can integrate ethical considerations throughout the entire process.
Problem: How can we verify whether the developed AI systems meet the RAI requirements?
Solution: Every RAI requirement included in a requirements specification document should be defined clearly and verifiably, with specific acceptance criteria, so that a person or a machine can later check the AI system against the requirements derived from the AI ethics principles. Vague or unverifiable statements should be avoided. If it cannot be determined whether the AI system meets a particular RAI requirement, that requirement should be revised or removed from the specification document.
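For example, a requirement such as "the system should treat demographic groups fairly" is not verifiable as written, whereas a version with a concrete acceptance criterion can be checked by a machine. The following Python sketch is a minimal, hypothetical illustration: the demographic parity metric, the 0.05 threshold, and all function names are assumptions chosen for this example rather than part of the pattern.

```python
import numpy as np

# Hypothetical verifiable requirement (illustrative assumption):
# "On the release test set, the difference in positive-prediction rates
#  between any two demographic groups shall not exceed 0.05."
PARITY_THRESHOLD = 0.05  # assumed acceptance criterion

def demographic_parity_difference(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Largest gap in positive-prediction rates across the groups present."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

def test_fairness_requirement():
    # In practice these would be the model's predictions on the release
    # test set; fixed arrays keep the sketch self-contained.
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
    groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
    gap = demographic_parity_difference(y_pred, groups)
    assert gap <= PARITY_THRESHOLD, (
        f"RAI requirement violated: parity gap {gap:.3f} exceeds {PARITY_THRESHOLD}"
    )
```

Encoded this way, the requirement can run as an automated acceptance test in the development pipeline, so a violation is detected before release rather than discovered after deployment.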
Benefits:
- Reduced ethical risk: Considering RAI requirements from the beginning of the development process and verifying them explicitly reduces the risk of ethical violations.
- Customer expectation: By providing verifiable RAI requirements, the development team can ensure that the RAI aspects of the delivered AI system meet the customer's expectations.
Drawbacks:
- Hard to use for some intangible RAI requirements: Some RAI principles, such as human-centered values, and the requirements derived from them are difficult to validate quantitatively.
- Additional complexity: Making RAI requirements verifiable adds complexity to the development process, since the team must define acceptance criteria for each requirement.
Related patterns:
- Lifecycle-driven data requirement: Data requirements specification could include a set of verifiable RAI requirements around data.
- RAI user story: RAI user stories can be used to elicit and document verifiable RAI requirements.
- Multi-level co-architecting: All the RAI requirements need to be considered during the architecture design.
- Continuous RAI validator: The RAI requirements need to be continuously monitored and validated at runtime (a minimal monitoring sketch follows this list).
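To illustrate how the continuous RAI validator pattern can pick up a requirement that was made verifiable at design time, the sketch below monitors the same hypothetical parity-gap criterion at runtime. The class name, the sliding-window design, and the threshold are assumptions made for illustration, not a prescribed implementation of the pattern.

```python
import logging
from collections import deque

class ParityGapMonitor:
    """Sliding-window runtime check of a verifiable fairness requirement.

    The 0.05 threshold and 1000-observation window are illustrative
    assumptions, not values prescribed by the pattern.
    """

    def __init__(self, threshold: float = 0.05, window: int = 1000):
        self.threshold = threshold
        self.records = deque(maxlen=window)  # (group, prediction) pairs

    def observe(self, group: str, prediction: int) -> None:
        """Record one serving-time prediction and re-check the requirement."""
        self.records.append((group, prediction))
        gap = self._parity_gap()
        if gap is not None and gap > self.threshold:
            logging.warning(
                "RAI requirement violated at runtime: parity gap %.3f > %.3f",
                gap, self.threshold,
            )

    def _parity_gap(self):
        by_group = {}
        for g, p in self.records:
            by_group.setdefault(g, []).append(p)
        if len(by_group) < 2:
            return None  # need at least two groups to compare
        rates = [sum(ps) / len(ps) for ps in by_group.values()]
        return max(rates) - min(rates)
```

In operation, the serving layer would call observe() for each prediction (e.g., monitor.observe("a", 1)), turning the design-time acceptance criterion into a runtime alert.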
Known uses:
- AI ethics principles can be viewed as RAI requirements on the functionalities offered by AI systems or on the entities providing those systems [1].
- RAI requirements should be specified explicitly as the expected system outputs and outcomes (e.g., intended benefits) in a verifiable manner [2].
- Qualities of machine learning models can be specified as non-functional reliability requirements [2].
References:
[1] Zhu, L., et al. AI and Ethics – Operationalising Responsible AI. arXiv preprint arXiv:2105.08867, 2021.
[2] Lu, Q., et al. Software engineering for responsible AI: An empirical study and operationalised patterns. In 2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). IEEE, 2022.