Verifiable Claim for AI System Artifacts

Summary: A verifiable claim is a statement about an AI system or its artifacts that enables developers to make the system’s ethical properties publicly verifiable and enables users to carry out the verification.

Type of pattern: Governance pattern

Type of objective: Trustworthiness

Target users: Project managers

Impacted stakeholders: Development teams, AI users

Lifecycle stages: All stages

Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability

Mapping to AI regulations/standards: EU AI Act

Context: The increasing use of AI in promising applications, such as autonomous vehicles and healthcare, is having a significant impact on our lives. However, despite the potential benefits of these systems, there is skepticism about the impact of AI on humans and society. The complexity and black-box nature of AI systems raise concerns about their reliability, fairness, privacy, and other ethical considerations. As a result, AI companies face challenges in gaining market acceptance, because users may lack trust in these systems. Potential users of AI systems need methods for assessing the responsible AI (RAI) qualities of an AI system and comparing them with those of other systems.

Problem: How can users assess the RAI qualities of an AI system and compare it to other systems?

Solution: A verifiable claim is a statement about an AI system or an artifact (such as a model or dataset) that is substantiated by a verification mechanism. A verifiable claim platform can be built to support developers in making ethical properties publicly verifiable and to help users conduct the verification process. Such platforms should consider the perspectives of different stakeholders: developers might focus on reliability, whereas users might be more interested in fairness. Several verification mechanisms are possible. For example, auditors could issue certificates for systems, components, or models. Issue tracking systems allow users and developers to flag issues or provide experience reports. Stakeholders could directly investigate an AI system’s ethical properties and obtain insights into its decision-making process through analysis tools. The platform itself provides management capabilities such as claim creation and verification, access control, and dispute management.
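To make the claim-and-verification mechanism concrete, the following is a minimal sketch of what a verifiable claim record might look like on such a platform. All names (`VerifiableClaim`, `issue_claim`, `verify_claim`) and the choice of an HMAC-based signature are illustrative assumptions, not part of the pattern itself; a production platform would more likely use public-key signatures so that anyone can verify a claim without holding the issuer’s secret.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class VerifiableClaim:
    """Hypothetical claim record: a statement about an AI artifact plus a signature."""
    artifact_id: str   # e.g. a model or dataset identifier
    property: str      # ethical property the claim is about, e.g. "fairness"
    statement: str     # the claim itself
    issuer: str        # who makes the claim, e.g. an auditor
    signature: str = ""

    def payload(self) -> bytes:
        # Canonical byte encoding of everything except the signature.
        body = {k: v for k, v in asdict(self).items() if k != "signature"}
        return json.dumps(body, sort_keys=True).encode()

def issue_claim(claim: VerifiableClaim, issuer_key: bytes) -> VerifiableClaim:
    """Sign the claim with the issuer's key (HMAC-SHA256 as a stand-in signature)."""
    sig = hmac.new(issuer_key, claim.payload(), hashlib.sha256).hexdigest()
    return VerifiableClaim(claim.artifact_id, claim.property,
                           claim.statement, claim.issuer, sig)

def verify_claim(claim: VerifiableClaim, issuer_key: bytes) -> bool:
    """Recompute the signature and compare; any tampering makes this fail."""
    expected = hmac.new(issuer_key, claim.payload(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim.signature)
```

A user verifying a claim recomputes the signature over the claim’s content; if the statement, artifact, or issuer field has been altered after issuance, verification fails.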

Benefits:

  • Trust: By providing a transparent way to assess the ethical properties of AI systems, verifiable claims can contribute to building trust and facilitating AI adoption.
  • Verification of ethical properties: Verifiable claims enable AI users to verify the ethical properties of AI systems and their artifacts. This makes it possible for users to make more informed decisions about how and when to use AI.

Drawbacks:

  • Increased cost: Building verifiable claim platforms can be complex and costly.
  • Reliance on third parties: Some verifiable claims may rely on third parties for generation and verification, and those third parties may not always be trustworthy.

Related patterns:

  • RAI certification: At the organization level, third-party auditors could issue certificates for AI systems, which are then made available for verification.

Known uses: