Trust Mark
Summary: A trust mark is a seal indicating that an AI system has been endorsed as compliant with responsible AI (RAI) standards.
Type of pattern: Governance pattern
Type of objective: Trust
Target users: RAI governors
Impacted stakeholders: AI technology producers and procurers, AI solution producers and procurers, RAI tool producers and procurers
Lifecycle stages: All stages
Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability
Mapping to AI regulations/standards: EU AI Act.
Context: As AI technology has rapidly evolved, it has been incorporated into a wide range of software systems across various domains, such as entertainment and home automation. However, the autonomous and unpredictable nature of AI systems has attracted public attention and raised concerns. Many consumers lack professional knowledge about AI and may find it difficult to understand the sophisticated algorithms that power these systems. This lack of understanding makes it challenging for consumers to make informed decisions about the use of AI products and services.
Problem: How can consumers confidently trust the AI systems they use without having professional knowledge about AI?
Solution: One way to improve public confidence in AI and address ethical concerns is to use a trust mark: a visible seal of endorsement signifying that an AI system meets certain RAI standards, providing assurance that the system has been designed and developed in a responsible manner. For trust marks to be meaningful, it is important to establish agreed-upon standards for AI development and to have compliance with these standards reviewed by independent auditors. To be effective, the trust mark should be presented in a form that all consumers can easily understand, such as a label or visual symbol indicating fulfillment of the trust mark's requirements.
Benefits:
- Public confidence: By providing a visible symbol of endorsement, a trust mark helps improve consumer confidence in AI systems and addresses ethical concerns.
- Branding: Trust marks can be particularly valuable for small AI companies that are not yet well known in the market, as the mark serves as a form of third-party endorsement of their products.
- Understandability: Trust marks are designed to be easily understandable by all consumers, including those with limited knowledge about AI.
Drawbacks:
- Lack of awareness: Consumers may not be aware of trust marks or how to identify them when assessing an AI system.
- Lack of trust: Some consumers may not trust that the AI systems with trust marks are necessarily more responsible than those without one.
Related patterns:
- RAI certification: A trust mark can serve as a simplified form (e.g., a stamp) of RAI certification.
Known uses:
- The Australian Data and Insights Association (ADIA) Trust Mark ensures that its member organizations comply with the association's ethical standards.
- The Privacy Trust Mark is awarded by New Zealand's Privacy Commissioner to a product or service in recognition of excellence in privacy practice.
- The Data Protection Trustmark (DPTM), developed by Singapore's Personal Data Protection Commission (PDPC) and Info-communications Media Development Authority (IMDA), helps organizations demonstrate compliance with the Personal Data Protection Act (PDPA).