Local Explainer
Summary: A local explainer provides instance-level explanations for individual inputs, helping users understand the feature importance and correlations behind each output prediction.
Type of pattern: Product pattern
Type of objective: Trust
Target users: Data scientists
Impacted stakeholders: UX/UI designers, RAI governors, AI users, AI consumers
Lifecycle stages: Design
Relevant AI ethics principles: Explainability
Context: Despite the widespread adoption of AI, the models inside AI systems remain opaque to users. Without trust in an AI system, users may be hesitant to act on its recommendations.
Problem: How can a user understand an individual prediction by an AI system?
Solution: One way to understand individual decisions made by an AI system is through the use of a local explainer. A local explainer provides an explanation for each input data instance, which can help users understand the feature importance and correlations that led to the specific output prediction. Two well-known local explainer algorithms are Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). LIME explains a black-box model by fitting a simple, interpretable surrogate model around a specific input and reading off each feature's contribution to the decision output. SHAP, in turn, provides local explanations by computing Shapley values, which average the change in the model's output when a feature is included versus excluded across possible feature combinations.
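A minimal sketch of how both algorithms can be applied to explain a single prediction is shown below, assuming the scikit-learn, lime, and shap Python packages; the dataset, model, and parameter choices are illustrative rather than prescriptive.

```python
# A minimal sketch, assuming the scikit-learn, lime, and shap packages and a
# simple tabular classification task; dataset and settings are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer
import shap

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)
instance = data.data[0]  # the single prediction we want to explain

# LIME: perturb the instance, fit an interpretable surrogate model locally,
# and report each feature's weight for this one prediction.
lime_explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
lime_explanation = lime_explainer.explain_instance(
    instance, model.predict_proba, num_features=4
)
print(lime_explanation.as_list())  # [(feature condition, weight), ...]

# SHAP: attribute the prediction to each feature via Shapley values;
# TreeExplainer handles tree ensembles such as random forests.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(instance.reshape(1, -1))
print(shap_values)  # per-feature attributions (shape depends on shap version)
```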
Benefits:
- Trust: A local explainer provides a way to understand the reasoning behind a specific decision, making the model more transparent and easier to trust.
- Correctness: By providing information about the feature importance and correlations that resulted in a decision, a local explainer can give insight into the inner workings of a model, which can help identify errors or biases in the model.
Drawbacks:
- Lack of global visibility: Local explanations do not explain the general behaviors of AI models.
- Complexity: A local explainer can be computationally intensive and may not scale well to complex models or large datasets.
- Limited applicability: Local explanations are not suitable for all types of models or use cases; for example, they work well with linear models and tabular data but may be less effective for image or text data.
Related Patterns:
- Global explainer: The focus of global explanations is on the whole AI model, while local explanations only consider an individual decision.
- XAI interface: Human-computer interaction aspects and psychological requirements can be incorporated into the explanation interface design to allow AI users to understand and trust AI systems.
Known Uses:
- IBM AI Explainability 360 is a toolkit that contains ten explainability methods and two evaluation metrics for understanding data and AI models. The methods cover five categories: data explanations, directly interpretable models, self-explaining models, global post-hoc explanations, and local post-hoc explanations.
- Microsoft InterpretML is a Python toolkit that includes XAI techniques developed by Microsoft and third parties to explain an AI model’s overall behavior and the reasons behind individual decisions.
- Google Vertex Explainable AI provides XAI support for tabular and image data and helps users learn how each feature in the data contributed to the decision result.
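As an illustration of the local explanation workflow these toolkits support, the sketch below uses Microsoft InterpretML's Explainable Boosting Machine to explain one prediction; the dataset and settings are placeholders, not a recommended configuration.

```python
# A minimal sketch, assuming the interpret and scikit-learn packages;
# dataset and parameters are illustrative only.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Train a glass-box model that supports both global and local explanations.
ebm = ExplainableBoostingClassifier(feature_names=list(data.feature_names))
ebm.fit(X_train, y_train)

# Local explanation: per-feature contributions for a single test instance.
local_explanation = ebm.explain_local(X_test[:1], y_test[:1])
show(local_explanation)  # renders an interactive view in a notebook/browser
```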