Global view auditor

Summary: A global-view auditor provides accountability across multiple AI components or AI systems by detecting discrepancies among the data they collect and by identifying liability when negative events occur.

Type of pattern: Product pattern

Type of objective: Trust

Target users: Architects, developers

Impacted stakeholders: RAI governors, AI users, AI consumers

Relevant AI principles: Human, societal and environmental wellbeing, human-centred values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability

Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.

Context: When an accident happens, more than one AI system, or multiple AI components within an AI system, might be involved (e.g., multiple autonomous vehicles in an accident). The data collected from the involved AI systems/components may conflict, since each AI system/component has its own perception of the event.

Problem: How can we identify the liability when accidents occur that involve multiple AI components/systems with different perceptions?

Solution: As shown in Figure 1, a global-view auditor is a component that collects information from multiple AI components/systems and processes it to identify discrepancies. Based on the results, the global-view auditor may alert the AI component/system with an incorrect perception, thereby helping avoid negative impacts or identify liability when negative events occur.


This pattern can also be used to improve the decision-making of an AI system by incorporating knowledge from other AI components/systems. For example, an autonomous vehicle may increase its visibility by using the perceptions of others to make better decisions at runtime [1].

Fig. 1 Global view auditor
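The discrepancy check at the core of the solution can be sketched in Python. This is a minimal illustration, not part of the pattern's specification: the `Perception` record, the majority-vote reconciliation, and all names are illustrative assumptions, and a real global-view auditor would use domain-specific logic to reconcile conflicting reports.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Perception:
    """One AI component's report about a shared event (illustrative)."""
    source_id: str
    observation: str  # e.g. a discretised claim such as "light_was_red"


def audit(perceptions):
    """Flag sources whose observation disagrees with the majority view.

    A simple majority vote stands in here for whatever reconciliation
    logic a real global-view auditor would apply.
    """
    votes = Counter(p.observation for p in perceptions)
    consensus, _ = votes.most_common(1)[0]
    outliers = [p.source_id for p in perceptions if p.observation != consensus]
    return consensus, outliers


# Example: three vehicles report on the same traffic light.
reports = [
    Perception("vehicle_a", "light_was_red"),
    Perception("vehicle_b", "light_was_red"),
    Perception("vehicle_c", "light_was_green"),
]
consensus, outliers = audit(reports)
# consensus == "light_was_red"; outliers == ["vehicle_c"]
```

The returned outliers could then be alerted (to avoid negative impacts at runtime) or recorded as evidence when liability is assessed after an incident.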


Benefits:

  • Accountability: A global-view auditor enables accountability that covers the different perceptions of all involved AI components/systems. It reconciles the conflicting information collected from them.
  • Traceability: A global-view auditor collects and retains information from all the AI components/systems involved, so the events leading to an incident can be traced.


Drawbacks:

  • Performance: Processing potentially conflicting information collected from multiple sources takes additional time.

Related patterns:

  • Federated learner: In the decentralized environment of federated learning, a global-view auditor could be applied.
  • RAI black box: A global-view auditor could be applied together with the RAI black boxes of multiple AI components/systems to integrate the information they collect.

Known uses:

  • FG-AI4H is an audit platform for AI models in the healthcare domain.
  • Seclea provides audit tools for examining the decisions and behaviors of AI models.
  • NVIDIA proposes a scheme for auditing deep learning models using semantic specifications.


[1] Miguel, B.S., A. Naseer, and H. Inakoshi. Putting accountability of AI systems into practice. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI). 2021.