Verifiable RAI Credential
Summary: To improve human trust in AI systems, trusted authorities can issue verifiable RAI credentials, and users or AI systems can verify them. Such verifiable data serves as proof of ethical compliance for (1) AI systems, components, and models; (2) developers, operators, users, and organizations; and (3) development processes.
Type of pattern: Product pattern
Type of objective: Trust
Target users: Architects, developers
Impacted stakeholders: Development teams, RAI governors, AI users, AI consumers
Relevant AI principles: Human, societal and environmental wellbeing; human-centered values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; accountability
Mapping to AI regulations/standards: EU AI Act; ISO/IEC 42001:2023.
Context: An AI system consists of AI components and non-AI components that are interconnected and work together to achieve the system’s objective. Compared with traditional software, AI systems carry a higher degree of uncertainty and risk because of the autonomy of their AI components. Building trust in AI systems can unlock markets and increase adoption. An AI system operates within an ecosystem of multiple stakeholders, and trust is the subjective perception of those stakeholders that using the AI system will improve the performance of their work [1].
Problem: The trust of the different stakeholders within an AI system’s ecosystem is essential to its efficient functioning. Trust in an AI system covers many aspects of the system, including the hardware, the execution environment, the software components, the AI models, and the operators who run the system. Trust is a subjective perception that stakeholders form as they interact with the AI system. How can we improve the perceived trustworthiness of an AI system for stakeholders who have no prior trust relationship with the system or the operators who run it?
Solutions: Verifiable RAI credentials can serve as evidence of ethical compliance for the AI systems themselves, their components and models, the developers, operators, users, and organizations involved, and the development processes behind them. Verifiable credentials are data that can be cryptographically verified and presented with strong proofs [2]. A publicly accessible data infrastructure needs to be built to support the generation and verification of ethical credentials on a neutral data platform.
A conceptual overview of verifiable credentials is given in Figure 1, which shows the main roles and their relationships in credential verification. The credential holder can be a digital asset, such as an AI system or one of its components, or a human, such as a user or a developer. The credential is issued by a trusted authority (the issuer), such as a government agency or a leading industry company. A credential is a verifiable claim containing a fact about the holder that the issuer attests to and digitally signs [2] (for example, evidence that a component of an AI system complies with an AI regulation, or evidence that a person is allowed to operate an AI system). Anyone who trusts the issuer can act as a verifier of the claim. A verifier requests a specific credential and validates it via the issuer’s signature. The sketch after this paragraph illustrates the three roles in code.
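As a minimal illustration of the issuer, holder, and verifier roles, the sketch below has an issuer sign a claim about a holder, which a verifier then checks against the issuer’s public key. It assumes the third-party Python cryptography package and Ed25519 keys; a production system would instead follow the W3C Verifiable Credentials Data Model [2], and the identifiers and claim fields here are hypothetical.

```python
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

# Issuer (trusted authority) generates a signing key pair once and
# publishes the public key so anyone can verify its credentials.
issuer_key = ed25519.Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# The issuer attests a fact about the holder (here, an AI component)
# and signs it; the signed claim is the verifiable RAI credential.
# All field names and values below are illustrative only.
claim = {
    "holder": "did:example:ai-component-123",
    "claim": "risk management process audited against an AI regulation",
    "issued": "2024-01-01",
}
payload = json.dumps(claim, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Verifier: anyone who trusts the issuer validates the credential with
# the issuer's public key; verify() raises InvalidSignature if the
# claim was forged or altered after signing.
issuer_public_key.verify(signature, payload)
print("credential verified")
```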
In the context of AI systems, various RAI credentials are issued by different authorities. Before using an AI system, users may verify its ethical credentials to check whether it complies with AI ethics principles or regulations. Alternatively, users may be required to present ethical credentials in order to use and operate the AI system, for example, to ensure the flight safety of drones.
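Continuing the sketch above, the check below illustrates the second scenario: an AI system (for example, a drone flight controller) gates operation on a valid credential from a trusted issuer before the user may operate it. The function name and tamper check are hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def may_operate(credential_payload: bytes, signature: bytes,
                trusted_issuer_key: ed25519.Ed25519PublicKey) -> bool:
    """Allow operation only if the credential verifies against a trusted issuer."""
    try:
        trusted_issuer_key.verify(signature, credential_payload)
        return True
    except InvalidSignature:
        return False

# Reusing payload, signature, and issuer_public_key from the sketch above:
assert may_operate(payload, signature, issuer_public_key)
# A tampered credential fails verification, so operation is denied.
assert not may_operate(payload + b"tampered", signature, issuer_public_key)
```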
Benefits:
- Increased trust: A verifiable credential increases user trust in an AI system by transferring the trust the user places in the issuing authority to the AI system, the organization that develops it, and the operators who run it. Such a transitive trust relationship is critical to the efficient functioning of the AI system.
- AI system adoption: With an RAI credential, an AI system can present proof of compliance as an incentive to users, thus increasing adoption.
- Awareness of RAI issues: Verifying an ethical credential requires interaction between the user and the AI system, which helps raise awareness of AI ethics issues.
Drawbacks:
- Set-once-and-forget: Credentials issued for organizations and processes can become set-once-and-forget, with no reassessment after issuance.
- Human-in-the-loop: Human intervention is needed to verify the RAI credential.
- Interoperability: Different authorities may use different formats or techniques for verifiable credentials. Standards could help achieve interoperability.
- Authenticity: RAI credentials may be forged, which makes verifying the authenticity of RAI credentials challenging.
Related patterns:
- RAI bill of materials registry: Verifiable RAI credentials can be applied together with an RAI bill of materials to provide proof at every point of the supply chain.
- RAI bill of materials: An RAI bill of materials can be accompanied by verifiable RAI credentials as proof of responsibility at a given point of the supply chain.
- RAI construction with reuse: To ensure ethical quality, RAI credentials can be bound to reused AI assets or their developers; this binding can also be supported by blockchain platforms.
Known uses:
- Azure Active Directory Verifiable Credentials is a decentralized identity management solution that complies with the W3C standard [2].
- Malta’s AI-ITA certification is the world’s first national AI certification scheme, attesting that AI systems are developed in a responsible manner.
References:
[1] Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 1989, pp. 319-340.
[2] W3C. Verifiable Credentials Data Model v1.1. 2022, W3C.
[3] Chu, W. A Decentralized Approach Towards Responsible AI in Social Ecosystems. arXiv preprint arXiv:2102.06362, 2021.