RAI Certification
Summary: RAI certification serves as an attestation that an AI entity (i.e., system, component, development process, developer, operator, or organization) has met certain specified criteria, such as mandatory regulatory requirements or voluntary AI ethics principles.
Type of pattern: Governance pattern
Type of objective: Trust
Target users: RAI governors
Impacted stakeholders: AI technology producers and procurers, AI solution producers and procurers, RAI tool producers and procurers
Lifecycle stages: All stages
Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability
Mapping to AI regulations/standards: EU AI Act
Context: AI is a high-stakes technology that faces challenges in gaining societal acceptance and permission to operate. Building trust in AI could help unlock the market for AI technology and increase its adoption. Trustworthiness refers to the ability of an AI system to adhere to AI laws, regulations, and ethics principles, while trust is the subjective estimate of a stakeholder regarding the trustworthiness of AI systems. It is important to note that even if an AI system is deemed trustworthy, this does not necessarily mean the stakeholders automatically trust it.
Problem: How can we assess and verify the responsible practices of AI entities?
Solution: Trust can be improved by providing stakeholders with evidence of compliance with laws and standards in the application context of the AI system. To provide such evidence, certification can be designed to recognize that an organization or individual has the ability to develop or use an AI system in a responsible manner, or that the development process or design of an AI system or component complies with standards or regulations. To obtain certification, a trusted third party typically conducts an assessment. If the assessment shows that the entity (i.e., system, component, development process, developer, operator, or organization) meets the specified criteria, it is granted certification. This certification serves as evidence that the entity has met the necessary standards and requirements for responsible AI practices.
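The issue-then-verify flow above can be sketched in code: a certification authority grants a certificate only when every assessed criterion is met, and signs it so downstream stakeholders can check that the attestation is genuine and unaltered. This is a minimal, hypothetical illustration, not any real certification scheme: the `issue_certificate`/`verify_certificate` functions, the criteria names, and the shared-key HMAC signing are all assumptions (a production scheme would use public-key signatures issued under a PKI so verifiers need not hold the authority's secret).

```python
import hashlib
import hmac
import json

# Hypothetical authority key; a real scheme would use asymmetric keys (e.g., Ed25519).
AUTHORITY_KEY = b"demo-certification-authority-secret"


def issue_certificate(entity: str, criteria_met: dict) -> dict:
    """Grant a signed certificate only if every assessed criterion is satisfied."""
    if not all(criteria_met.values()):
        raise ValueError(f"{entity} does not meet all certification criteria")
    payload = {"entity": entity, "criteria": sorted(criteria_met)}
    # Canonical JSON serialization so the signature is reproducible on verification.
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(AUTHORITY_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def verify_certificate(cert: dict) -> bool:
    """Check that the certificate was signed by the authority and not tampered with."""
    body = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(AUTHORITY_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])


cert = issue_certificate("ExampleAI Ltd.", {"fairness": True, "transparency": True})
print(verify_certificate(cert))         # a genuine certificate verifies
cert["payload"]["entity"] = "Forger"    # any tampering invalidates the signature
print(verify_certificate(cert))
```

Binding the signature to the certified claims is what addresses the forgery drawback noted below: a certificate that has been altered, or issued without the authority's key, fails verification.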
Benefits:
- Improved trust: RAI certification can help improve trust in AI systems by providing evidence of ethical compliance.
- Accelerated AI adoption: RAI certificates can be used as proof of compliance, which can accelerate the adoption of AI systems.
- Implementation of AI ethics principles: Obtaining RAI certification can incentivize stakeholders to adhere to AI ethics principles in order to meet the requirements for certification.
Drawbacks:
- Forgery: RAI certificates may be forged, making it difficult to verify their authenticity.
- Complexity: The certification process can be complex, costly and time-consuming. This may be a barrier for some organizations or individuals seeking RAI certification.
- Lack of standardization: There can be multiple RAI certification programs, which may lead to inconsistency.
- Untrusted certification authority: There is a risk of not having trusted certification authorities to manage the certification process. This can result in a lack of confidence in the authenticity and effectiveness of the certification.
Related patterns:
- RAI maturity model: RAI certification can use the RAI maturity model as the framework for assessing an organization’s level of preparedness for implementing AI.
- Trust mark: A trust mark is a special form of RAI certification that can be easily attached to AI products.
- RAI standard: RAI certification can be adopted to recognize that the development process or design of an AI system complies with RAI standards.
- RAI training: RAI training can be provided to individuals in order to develop the skills necessary to obtain RAI certification.
- Verifiable Claim for AI System Artifacts: RAI certificates can be designed and verified in the form of verifiable claims.
Known uses:
- Malta's AI-ITA certification is the world's first national AI certification scheme, recognizing AI systems developed in a responsible manner.
- DO-178C (Software Considerations in Airborne Systems and Equipment Certification) is used to approve commercial software-based aerospace systems.
- Queen’s University offers an executive education program on Principles of AI Implementation.
- CertifyAI provides third-party certification to AI solutions across four distinct levels: basic, silver, gold, and platinum.
- Responsible AI Institute provides an independent certification program for AI systems to demonstrate alignment with responsible AI requirements.