Multi-Model Decision-Maker
Summary: To improve the reliability of the AI component, a multi-model decision-maker employs different AI models to perform the same task or enable a single decision.
Type of pattern: Product pattern
Type of objective: Trustworthiness
Target users: Architects, developers, data scientists
Impacted stakeholders: AI users, AI consumers
Relevant AI ethics principles: Reliability and safety, fairness
Mapping to AI regulations/standards: ISO/IEC 42001:2023 Standard.
Context: It is widely recognized that the performance of an AI model may vary across different contexts given its data-dependent behavior. Thus, the reliability of an AI system largely depends on how the reliability of its AI components is characterized and addressed.
Problem: How can the reliability of an AI system be ensured across different contexts?
Solution: In the software reliability community, traditional architecture-based reliability analysis treats a system as a composition of software components. Existing reliability practices, such as redundancy, are equally applicable to the AI components of an AI system. In addition, a reasonable combination of multiple AI models that normally work independently can improve the performance (e.g., accuracy) of the AI component.
As demonstrated in Figure 1, a multi-model decision-maker employs different models to perform the same task or enable a single decision (e.g., deploying different algorithms for visual perception). It improves reliability by deploying different models under different contexts (e.g., different geolocation regions) and enables fault tolerance by cross-validating ethical requirements for a single decision [1]. Different consensus protocols can be defined to make the final decision, for example, taking the majority decision. Another strategy is to accept a result only when all the employed models agree. In addition, the end user or the operator can step in to review the outputs of the multiple models and make the final decision based on human expertise.
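As an illustration, the minimal sketch below shows how the two consensus protocols mentioned above (majority decision and unanimous agreement with fallback to human review) might be implemented over a set of independently trained models. The Model interface, the predict signature, and the strategy names are assumptions made for this example, not a prescribed implementation.

```python
# Minimal sketch of a multi-model decision-maker (illustrative assumptions only).
from collections import Counter
from typing import Optional, Protocol, Sequence


class Model(Protocol):
    """Assumed interface: each model returns a label for one input sample."""
    def predict(self, sample) -> str:
        ...


def majority_decision(models: Sequence[Model], sample) -> str:
    """Consensus protocol 1: return the label predicted by most models."""
    votes = Counter(m.predict(sample) for m in models)
    label, _ = votes.most_common(1)[0]
    return label


def unanimous_decision(models: Sequence[Model], sample) -> Optional[str]:
    """Consensus protocol 2: accept a label only if every model agrees;
    otherwise return None so an operator can review and make the final call."""
    labels = {m.predict(sample) for m in models}
    return labels.pop() if len(labels) == 1 else None
```

The choice of protocol is a design trade-off: majority voting always yields a decision, while unanimous agreement trades availability for stronger cross-validation and an explicit human-in-the-loop escalation path.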
Benefits:
- Increased reliability: A multi-model decision-maker relies on the output of multiple AI models, which enables cross-validation among different AI models and fault tolerance of the AI component.
- Fairness: Multiple AI models could be applied to cover different contexts and make a collective and fair decision.
Drawbacks:
- Increased development effort: The development effort is proportional to the number of AI models used by the multi-model decision-maker: the more models involved, the more effort is required.
- More required skills: Training multiple AI models requires more skills and expertise compared with training a single AI model.
- Decreased training efficiency: It may take longer to train multiple AI models, and reaching a consensus among them adds further overhead.
Related patterns:
- Homogeneous redundancy: Both the multi-model decision-maker and homogeneous redundancy are instantiations of the widely used redundancy practice for reliable software systems; they apply it at different abstraction levels of an AI system.
- Continuous deployment for RAI: Multiple models can be deployed to make decisions.
Known uses:
- Scikit-learn is a Python package that supports combining multiple learning algorithms to obtain better performance through ensemble learning (see the sketch after this list).
- The AWS Fraud Detection Using Machine Learning solution trains an unsupervised anomaly detection model in addition to a supervised model to augment the prediction results.
- IBM Watson Natural Language Understanding uses an ensemble learning framework to include predictions from multiple emotion detection models.
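As a concrete illustration of the scikit-learn known use, the sketch below combines three independently trained classifiers with scikit-learn's VotingClassifier, whose hard voting corresponds to the majority-decision protocol described in the solution. The synthetic dataset and the specific estimators are illustrative choices for this example, not part of the known use.

```python
# Illustrative sketch: majority voting over three models with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic data standing in for a real task.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("svc", SVC(random_state=0)),
    ],
    voting="hard",  # hard voting = majority decision across the three models
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```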
References:
[1] Dai, J., et al. More reliable AI solution: Breast ultrasound diagnosis using multi-AI combination. arXiv preprint arXiv:2101.02639, 2021.