Continuous RAI Validator

Summary: A continuous RAI validator monitors and validates the outcomes of an AI system (e.g., the path recommended by a navigation system) against the RAI requirements throughout operation.

Type of pattern: Product pattern

Type of objective: Trustworthiness

Target users: Architects, developers

Impacted stakeholders: Operators, data scientists

Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centred values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability

Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.

Context: AI systems are complex and draw on dynamic data sources at execution time. Such data sources might be unknown at design time, when the training data for the AI components is collected. The AI components may require continual learning based on the new data collected at execution time, and their autonomy introduces a high degree of risk.

Problem: How can we ensure that AI systems comply with AI ethics regulations and standards at execution time?

Solution: The AI components of an AI system often require continual learning based on new data collected during operation, and the autonomy of those components introduces a high degree of risk. It is therefore critical to assess ethical risks before an AI system is put into operation and to keep assessing them continuously at execution time. As shown in Figure 1, a continuous RAI validator deployed in an AI system continuously monitors and validates the outcomes of the AI components (e.g., the path recommended by a navigation system) against the RAI requirements.


The outcomes of an AI system are the consequences of its decisions and behaviors: whether the system provides the intended benefits and behaves appropriately in a given situation. The time and frequency of validation can be configured. Version-based feedback and rebuild alerts are sent when predefined conditions on the RAI requirements are met.

Fig. 1 Continuous RAI validator
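
To make the mechanism concrete, here is a minimal Python sketch of the pattern under illustrative assumptions: the `RAIRequirement` and `ContinuousRAIValidator` names, the dictionary-based outcome format, and the threshold-style check are hypothetical simplifications, not a prescribed implementation.

```python
import time
from dataclasses import dataclass
from typing import Callable, Iterable, List

# Hypothetical encoding of a verifiable RAI requirement: a named metric
# computed over recent outcomes plus a threshold the metric must satisfy.
@dataclass
class RAIRequirement:
    name: str
    metric: Callable[[List[dict]], float]  # score computed from recent outcomes
    threshold: float                       # minimum acceptable score

class ContinuousRAIValidator:
    """Periodically validates AI-system outcomes against RAI requirements."""

    def __init__(self, requirements: List[RAIRequirement],
                 outcome_source: Callable[[], Iterable[dict]],
                 on_violation: Callable[[str, float, float], None],
                 interval_s: float = 60.0):
        self.requirements = requirements
        self.outcome_source = outcome_source  # yields outcomes since the last check
        self.on_violation = on_violation      # feedback / rebuild-alert callback
        self.interval_s = interval_s          # configurable validation frequency

    def validate_once(self) -> None:
        outcomes = list(self.outcome_source())
        for req in self.requirements:
            score = req.metric(outcomes)
            if score < req.threshold:
                # A predefined condition is met: send feedback and a rebuild alert.
                self.on_violation(req.name, score, req.threshold)

    def run(self) -> None:
        while True:  # monitor for the lifetime of the deployed system
            self.validate_once()
            time.sleep(self.interval_s)
```

The validation interval is a constructor parameter, mirroring the note above that the time and frequency of validation can be configured.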

Benefits:

  • Increased maintainability: At runtime, a continuous RAI validator allows a rebuild to be triggered when an RAI requirement is not fulfilled in a particular situation, as sketched below.
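
As an illustration of this benefit, the sketch below wires the validator's violation callback (from the sketch above) to a rebuild trigger; `trigger_retraining_pipeline` is a hypothetical stand-in for whatever rebuild mechanism (e.g., a retraining pipeline or CI/CD job) a given system actually uses.

```python
def trigger_retraining_pipeline(reason: str) -> None:
    # Hypothetical stand-in for the system's real rebuild mechanism
    # (e.g., an MLOps pipeline API call or a CI/CD job trigger).
    print(f"Rebuild triggered: {reason}")

def on_violation(requirement_name: str, score: float, threshold: float) -> None:
    # Invoked by the validator when a predefined condition is met:
    # record version-based feedback, then raise the rebuild alert.
    print(f"RAI requirement '{requirement_name}' violated: "
          f"score {score:.3f} below threshold {threshold:.3f}")
    trigger_retraining_pipeline(reason=requirement_name)
```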

Drawbacks:

  • Limited suitability for all RAI risks: It is difficult to validate the output of an AI component against RAI risks that are hard to quantify.

Related patterns:

  • Incentive registry: An incentive registry can be applied together with a continuous RAI validator to reward or penalize the ethical or unethical behaviors and decisions of AI systems.
  • RAI knowledge base: An RAI knowledge base can serve as an input to the continuous RAI validator.
  • Verifiable RAI requirement: Ethical requirements need to be expressed in a verifiable form so that they can be continuously monitored and validated at runtime.

Known uses:

  • Amazon SageMaker Model Monitor continuously monitors the bias drift of AI models in production (for an illustration of what a bias-drift check computes, see the sketch after this list).
  • Qualdo is an AI monitoring solution that monitors data quality and model drift.
  • Azure Machine Learning uses Azure Monitor, a full-stack monitoring service, to collect monitoring data.
  • OpenAI uses ChatGPT-generated synthetic data to mitigate closed-domain hallucination at the model level; ChatGPT itself can reliably evaluate hallucination when the user feeds the response back into it. At the system level, OpenAI employs post-deployment monitoring, emergent feedback loops, combination with other technologies/tools (e.g., literature search), avoidance of systemic risks through model diversity, testing for dangerous emergent behaviors (e.g., self-replication, power-seeking, avoiding termination, long-term planning, and pursuing independent goals that were not specified or trained for), testing for capability jumps caused by smart prompt engineering (e.g., chain-of-thought, few-shot prompts), and usage policies (e.g., prohibiting the use of ChatGPT in high-risk government decision-making contexts).
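
Independent of any specific product above, the following is a minimal sketch of what a bias-drift check might compute; the demographic parity metric, the outcome schema with `group` and `decision` keys, and the `tolerance` value are all illustrative assumptions.

```python
from typing import List

def demographic_parity_difference(outcomes: List[dict]) -> float:
    # Absolute gap in favorable-decision rates between two groups.
    # Assumes each outcome is a dict with a 'group' key ('A' or 'B')
    # and a 'decision' key (1 = favorable, 0 = unfavorable).
    rates = {}
    for g in ("A", "B"):
        group = [o for o in outcomes if o["group"] == g]
        rates[g] = sum(o["decision"] for o in group) / max(len(group), 1)
    return abs(rates["A"] - rates["B"])

def bias_drift_detected(live_outcomes: List[dict],
                        baseline_gap: float,
                        tolerance: float = 0.05) -> bool:
    # Drift means the live fairness gap has moved beyond the gap measured
    # at deployment time (the baseline) by more than the tolerance.
    return demographic_parity_difference(live_outcomes) - baseline_gap > tolerance
```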