XAI Interface

Summary: Explainable AI (XAI) can be viewed as a human-AI interaction problem and achieved through human-centered interface design.

Type of pattern: Process pattern

Type of objective: Trustworthiness

Target users: Data scientists, UX/UI designers

Impacted stakeholders: Developers

Lifecycle stages: Design

Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability

Mapping to AI regulations/standards: ISO/IEC 42001:2023 Standard.

Context: AI system users often lack understanding of how these systems make decisions and are unaware of their capabilities and limitations. This lack of explainability can undermine trust in AI systems and has been recognized as one of the most pressing challenges to be addressed.

Problem: How can AI users comprehend the decisions and behaviors of AI systems?

Solution: XAI can be treated as a human-AI interaction problem and achieved through human-centered interface design. One common approach to designing explainable user interfaces is to use checklists or question lists. These lists can help identify user needs, choose appropriate XAI techniques (such as rule-based explanations and feature attribution), and weigh relevant XAI design factors.
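As a minimal sketch of how a feature-attribution technique might feed an explainable interface, the snippet below turns a linear model's per-feature contributions (weight × value) into plain-language lines a UI could display. The model, feature names, and wording are illustrative assumptions, not a prescribed implementation:

```python
# Sketch: rendering feature-attribution results as user-facing explanations.
# The linear model, features, and phrasing below are hypothetical.

def explain_decision(weights, values, feature_names, top_k=2):
    """Return plain-language lines for the top contributing features of a
    linear model's decision, where contribution = weight * value."""
    contributions = [
        (name, w * v) for name, w, v in zip(feature_names, weights, values)
    ]
    # Rank features by magnitude of influence on this specific decision
    contributions.sort(key=lambda item: abs(item[1]), reverse=True)
    lines = []
    for name, c in contributions[:top_k]:
        direction = "supported" if c > 0 else "weighed against"
        lines.append(f"'{name}' {direction} this decision (impact {c:+.2f})")
    return lines

# Example: a hypothetical loan-approval model for one applicant
weights = [0.8, -0.5, 0.1]                      # learned coefficients (assumed)
values = [1.2, 2.0, 0.3]                        # applicant's feature values
names = ["income", "debt ratio", "account age"]
for line in explain_decision(weights, values, names):
    print(line)
```

Keeping the output as short ranked sentences, rather than raw coefficients, reflects the pattern's advice that explanations be given in terms familiar to users.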

Benefits:

  • Increased trust: When clear explanations clarify how AI systems make decisions, users are better able to understand the capabilities and limitations of the technology and are more likely to trust and adopt it.
  • Reduced biases: Explanations can help users identify and address biases in AI systems.

Drawbacks:

  • Limited by users’ background: Users may not understand explanations that contain too many technical details; explanations should be expressed in terms familiar to them.
  • Inefficiency: Explanations may add unnecessary overhead when users are already aware of the ethical risk.

Related patterns:

  • Local Explainer: Integrating local explanations into the XAI interface enhances the transparency of decision-making by providing a rationale for how and why a data instance was given a decision.
  • Global Explainer: Incorporating global explanations into the interface design gives AI users a better understanding of the global behaviors of AI systems.
  • RAI digital twin: An RAI digital twin performs system-level simulation at run time using real-time data. The simulation results are sent back to alert the system or user via XAI interfaces before an unethical behavior or decision takes effect.

Known uses:

  • Liao et al. summarize a checklist of questions covering input, output, how, performance (which can be extended to ethical performance), why and why not, what if, etc.
  • The design of conversational interfaces can be prototyped via a Wizard of Oz study, in which users interact with a system they believe to be autonomous but that is actually operated by a hidden human, called the Wizard. The conversation data is collected and analyzed to understand requirements for a self-explanatory conversational interface.
  • Luxton recommends using anthropomorphism in user interface design to increase human trust.
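The question-driven approach in the first known use can be sketched as a simple lookup that routes a user's question type to candidate XAI techniques. The pairings below are illustrative assumptions for sketching an interface, not prescriptions from Liao et al.:

```python
# Illustrative routing from XAI question types to candidate explanation
# techniques; the specific pairings are assumed for this sketch.
QUESTION_BANK = {
    "how": ["global rule-based explanation", "decision-tree approximation"],
    "why": ["feature attribution", "example-based explanation"],
    "why not": ["contrastive explanation"],
    "what if": ["counterfactual explanation"],
    "performance": ["accuracy and fairness metrics report"],
}

def suggest_techniques(question_type):
    """Return candidate XAI techniques for a user's question type,
    falling back to a clarification prompt for unknown questions."""
    return QUESTION_BANK.get(question_type.lower(), ["clarify the question"])

print(suggest_techniques("why not"))  # ['contrastive explanation']
```

An interface built this way lets the question checklist drive which explainer (local, global, counterfactual) is invoked for a given user need.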