AI Mode Switcher

Summary: Adding an AI mode switcher to the AI system gives users efficient invocation and dismissal mechanisms for activating or deactivating the AI component when needed.

Type of pattern: Product pattern

Type of objective: Trustworthiness, trust

Target users: Architects, developers

Impacted stakeholders: Operators, AI users, AI consumers

Relevant AI ethics principles: Human-centered values, privacy protection and security, contestability

Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.

Context: Human autonomy is an individual’s capacity for self-determination or self-governance, which should be supported in AI systems.

Problem: How can human autonomy be supported by allowing users to efficiently activate and deactivate the AI component when needed?

Solution: Whether to use AI at a given decision-making point can be a major architectural design decision when designing an AI system. As shown in Figure 1, adding an AI mode switcher to the AI system offers users efficient invocation and dismissal mechanisms for activating and deactivating the AI component whenever needed, thus deferring the architectural design decision to execution time, when the end user or the operator of the AI system decides. The AI mode switcher acts like a kill switch for the AI system: it can immediately shut down the AI component and thus stop its negative effects (e.g., turning off the automated driving system and disconnecting it from the internet).
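The deferred decision described above can be sketched in code. This is a minimal illustrative sketch, not from the source: the names `AIModeSwitcher`, `ai_policy`, and `manual_policy` are hypothetical, standing in for the AI component and its non-AI fallback.

```python
from typing import Callable

class AIModeSwitcher:
    """Hypothetical sketch: routes each decision to the AI component
    or to a manual (non-AI) fallback, chosen at runtime by the user."""

    def __init__(self, ai_policy: Callable[[dict], str],
                 manual_policy: Callable[[dict], str]) -> None:
        self._ai_policy = ai_policy
        self._manual_policy = manual_policy
        self._ai_enabled = False  # AI off by default

    def activate_ai(self) -> None:
        """Invocation mechanism: hand decision-making to the AI component."""
        self._ai_enabled = True

    def kill_switch(self) -> None:
        """Dismissal mechanism: immediately shut down the AI component."""
        self._ai_enabled = False

    def decide(self, observation: dict) -> str:
        # The architectural decision "use AI or not" is deferred to
        # execution time and made by the end user or operator.
        if self._ai_enabled:
            return self._ai_policy(observation)
        return self._manual_policy(observation)


switcher = AIModeSwitcher(
    ai_policy=lambda obs: "ai-planned-route",
    manual_policy=lambda obs: "manual-route",
)
print(switcher.decide({}))   # AI off by default -> manual-route
switcher.activate_ai()
print(switcher.decide({}))   # -> ai-planned-route
switcher.kill_switch()       # operator hits the kill switch
print(switcher.decide({}))   # -> manual-route
```

The key design point is that both policies share one interface, so the system keeps functioning (in degraded, non-AI mode) the instant the kill switch fires.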


The decisions made by the AI component can be executed automatically or, in critical situations, reviewed by a human expert before being executed. The human expert's role is to approve or override the decisions (e.g., skipping the path generated by the navigation system). Human intervention can also happen after an AI decision has been executed, through a fallback mechanism that reverses the system to the state it was in before the decision was executed. A built-in guard can be used to ensure that the AI component is activated only under predefined conditions (such as the domain of use and boundaries of competence). End users or operators can ask questions or report complaints, failures, or near misses through a recourse channel after observing a bad decision from the AI component.

Fig.1 AI mode switcher
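The guard, human-review, and fallback mechanisms described above can be sketched as follows. This is an illustrative sketch under assumed names (`GuardedAIComponent`, `guard_allows`, `fallback` are hypothetical, not from the source).

```python
class GuardedAIComponent:
    """Hypothetical sketch: an AI component whose activation is gated by
    a built-in guard, whose decisions a human expert can approve or
    override, and whose effects a fallback mechanism can reverse."""

    def __init__(self, allowed_domains: set) -> None:
        self.allowed_domains = allowed_domains  # guard's predefined conditions
        self.state = "initial"
        self._previous_state = None

    def guard_allows(self, domain: str) -> bool:
        # Built-in guard: activate the AI component only within its
        # predefined domain of use / boundaries of competence.
        return domain in self.allowed_domains

    def execute(self, decision: str, approved_by_expert: bool) -> bool:
        # In critical situations the decision is reviewed before execution;
        # the human expert approves or overrides it.
        if not approved_by_expert:
            return False  # expert overrode the decision; nothing executed
        self._previous_state = self.state  # snapshot for the fallback
        self.state = decision
        return True

    def fallback(self) -> None:
        # Fallback mechanism: reverse the system to the state it was in
        # before the AI decision was executed.
        if self._previous_state is not None:
            self.state = self._previous_state
            self._previous_state = None


ai = GuardedAIComponent(allowed_domains={"highway"})
print(ai.guard_allows("highway"))   # True: within boundaries of competence
print(ai.guard_allows("off-road"))  # False: guard blocks activation
ai.execute("follow-ai-route", approved_by_expert=True)
ai.fallback()                       # bad outcome observed: revert
print(ai.state)                     # back to "initial"
```

A recourse channel would sit alongside this, logging user complaints and near-miss reports rather than changing system state directly.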


Benefits:

  • Increased trust: The AI mode switcher gives users the choice to switch off the AI component when they do not trust the decision or recommendation it provides, thus increasing trust in the AI system.
  • Contestability and autonomy: The AI mode switcher enables human autonomy by allowing end users to switch off the AI component or override its decisions at any time during runtime.


Drawbacks:

  • Efficiency: The efficiency and performance of the decision-making points highly depend on the quality of the other non-AI components involved.
  • Suitability for (near) real-time systems: The use of an AI mode switcher in a (near) real-time system might be problematic. System performance might be affected if the end user or the operator of the AI system keeps switching the AI component on and off.

Related patterns:

  • RAI sandbox: The AI mode switcher could work with the RAI sandbox to react to a predicted ethical risk.
  • RAI digital twin: The AI mode switcher could work with the RAI digital twin. When the digital twin predicts a potential ethical risk, it sends an alert to the user, who may then decide to switch off the AI component using the AI mode switcher.
  • AI suitability assessment: The result of AI suitability assessment may affect the design of AI mode switcher.
  • RAI design modelling: AI mode switcher can be applied to trigger the state transition and change the system state to a safe state.
  • Human-AI interaction patterns: Human-AI interaction patterns could work with the AI mode switcher to give users the freedom to decide whether or not to use AI features.

Known uses:

  • Tesla Autopilot has multiple driver-assistance features that can be enabled or disabled while driving. Users maintain control of the vehicle and can override the operation of these features at runtime.
  • Waymo operates self-driving cars with an automated driving system that can be overridden by human safety drivers.
  • Baidu's autonomous minibus requires a staff member in the seat to supervise the self-driving operations, and the bus can be switched to manual driving mode by braking.