Global Explainer
Summary: A global explainer treats an AI model as a whole, using a set of data instances to produce explanations of the model’s general behavior.
Type of pattern: Product pattern
Type of objective: Trust
Target users: Data scientists
Impacted stakeholders: UX/UI designers, RAI governors, AI users, AI consumers
Lifecycle stages: Design
Relevant AI ethics principles: Explainability
Context: The black-box nature of AI systems can be a significant challenge to their adoption and raises a number of ethical and legal concerns. One of the main reasons is the complexity of the models used in AI systems, particularly deep neural networks (DNNs), whose large number of parameters makes them difficult to understand. This lack of explainability is a barrier to the widespread adoption of AI because users may be hesitant to trust the suggestions given by AI systems.
Problem: How can we help users understand the general behavior of an AI model?
Solution: A global explainer helps users understand the general behavior of an AI system by using a set of data instances to produce explanations. These explanations give an overview of the model’s behavior by visualizing the relationship between the input features and the model’s output over a range of values. Global surrogate models, such as tree-based or rule-based models, can be used to approximate complex AI models: because surrogates are inherently explainable, their output decisions can be traced back to their source (see the sketch below).
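As a minimal sketch of the surrogate approach, assuming scikit-learn is available, the snippet below fits a shallow decision tree to the predictions of a complex model. The random forest stands in for any black-box model; the dataset and all names are illustrative, not part of any specific toolkit.

```python
# Minimal global-surrogate sketch: approximate a "black box" model with a
# shallow, inherently interpretable decision tree.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# 1. Train the complex model we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Train the surrogate on the black box's *predictions*, not the true
#    labels, so the tree mimics the model's behavior rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. The surrogate's decision rules serve as a global explanation.
print(export_text(surrogate, feature_names=list(X.columns)))
```

Because the tree is trained on the black box’s outputs, the printed rules describe the model’s behavior rather than the underlying data, which is exactly what a global explanation aims to convey.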
Benefits:
- Better understanding: Global explanations simplify complex AI models by approximating them with simpler counterparts, such as linear, tree-based, or rule-based models, which are easier to understand.
- Improved transparency: Global explanations provide a general understanding of how an AI model behaves, which can help increase transparency and build trust in the AI system.
Drawbacks:
- Limited understandability: Global explanations can be difficult for AI users without technical expertise to understand and provide feedback on.
- Limited specificity: While global explanations provide a general understanding of the model’s behavior, they may lack the specificity required to understand why a specific decision was made for a particular input.
- Lack of accuracy: Global explanations are derived from a set of data instances, which can introduce uncertainty and noise, so the explanations may not be entirely faithful to the model’s actual behavior (a simple fidelity check is sketched after this list).
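To make this drawback concrete, a surrogate’s fidelity (its agreement with the black box on held-out data) can be measured directly. The following is a minimal, self-contained sketch assuming scikit-learn; the dataset and all names are illustrative.

```python
# Fidelity check: how often does an interpretable surrogate reproduce the
# black box's decisions on data it has not seen?
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))  # mimic the model, not the labels

# Fidelity = agreement with the black box, not accuracy on the true labels.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity on held-out data: {fidelity:.2%}")
```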
Related Patterns:
- Local explainer: Global explanations focus on the AI model as a whole, while local explanations consider only individual decisions.
- XAI interface: Global explanations can be incorporated into the interface design to allow AI users to understand AI systems’ global behaviors.
Known Uses:
- IBM AI Explainability 360 is a toolkit that contains ten explainability methods and two evaluation metrics for understanding data and AI models. It covers five categories of explanation: data explanations, directly interpretable models, self-explaining models, global post-hoc explanations, and local post-hoc explanations.
- Microsoft InterpretML is a Python toolkit that includes XAI techniques developed by Microsoft and third parties to explain AI models’ overall behavior and the reasons behind individual decisions.
- EthicalML-XAI provides global explanations by visualizing the behavior of AI models in terms of their input variables.
- tf-explain provides insight into neural networks’ global behavior by visualizing neuron activations.
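As an illustration of one known use, the sketch below obtains a global explanation from InterpretML’s glassbox API (installable via `pip install interpret`). The dataset and variable names are illustrative assumptions; the final call renders an interactive summary of each feature’s overall contribution to the model.

```python
# Sketch of a global explanation with Microsoft InterpretML.
# The Explainable Boosting Machine is a glassbox model whose global term
# importances can be inspected directly.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

ebm = ExplainableBoostingClassifier().fit(X, y)

# explain_global() summarizes each feature's overall effect on predictions.
show(ebm.explain_global())
```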