System-level Product Design Patterns
Fig. 1 illustrates a provisioned AI system and highlights the patterns associated with the relevant layers.

Fig. 1 Product patterns for responsible-AI-by-design architecture of an AI system.
Once the AI system starts serving, it can be requested to execute a certain task. Decision-making may be needed before the task is executed. Both the behaviours and the decision-making outcomes of the AI system are monitored and validated. If the system fails to meet the requirements (including ethical requirements) or a near-miss is detected, the system needs to be updated. The AI system may need to be audited regularly or when major failures or near misses occur. The stakeholders can decide to abandon the AI system if it no longer fulfils the requirements.
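Read as a process, the lifecycle above amounts to a monitor-validate-update loop around task execution. The sketch below is a minimal illustration of that loop and not an implementation from the pattern collection; the MonitoredAISystem stub, the risk_score field, and the 0.7/0.9 thresholds are hypothetical placeholders chosen only to make the example runnable.

```python
"""Illustrative monitor-validate-update loop for a served AI system (sketch only)."""
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    PASS = auto()
    NEAR_MISS = auto()
    FAILURE = auto()


@dataclass
class ValidationResult:
    verdict: Verdict
    detail: str = ""


class MonitoredAISystem:
    """Hypothetical stand-in for a provisioned AI system with update/audit hooks."""

    def __init__(self):
        self.requirements_met = True

    def execute(self, request) -> dict:
        # Task execution; decision-making may happen before the task runs.
        return {"request": request, "risk_score": 0.1}

    def update(self, result: ValidationResult) -> None:
        print(f"updating system: {result.detail}")

    def audit(self, reason: str) -> None:
        print(f"auditing system: {reason}")

    def fulfils_requirements(self) -> bool:
        return self.requirements_met

    def decommission(self) -> None:
        print("system abandoned by stakeholders")


def validate(outcome: dict) -> ValidationResult:
    """Check behaviour and decision outcomes against requirements,
    including ethical requirements (placeholder risk rule)."""
    risk = outcome.get("risk_score", 0.0)
    if risk > 0.9:
        return ValidationResult(Verdict.FAILURE, "risk threshold exceeded")
    if risk > 0.7:
        return ValidationResult(Verdict.NEAR_MISS, "close to risk threshold")
    return ValidationResult(Verdict.PASS)


def serve(system: MonitoredAISystem, requests, audit_interval: int = 100) -> None:
    for handled, request in enumerate(requests, start=1):
        outcome = system.execute(request)        # serve the requested task
        result = validate(outcome)               # monitor and validate behaviour/outcomes

        if result.verdict in (Verdict.FAILURE, Verdict.NEAR_MISS):
            system.update(result)                # update on failure or near-miss
        if result.verdict is Verdict.FAILURE:
            system.audit(reason=result.detail)   # audit on major failure
        if handled % audit_interval == 0:
            system.audit(reason="regular audit") # periodic audit

        if not system.fulfils_requirements():
            system.decommission()                # abandon when requirements are no longer met
            break


if __name__ == "__main__":
    serve(MonitoredAISystem(), requests=range(250))
```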

Fig. 2 Product patterns for responsible-AI-by-design.
- Bill of Materials
- Verifiable Ethical Credential
- Ethical Digital Twin
- Ethical Sandbox
- AI Mode Switcher
- Multi-Model Decision-Maker
- Homogeneous Redundancy
- Incentive Registry
- Continuous Ethical Validator
- Ethical Knowledge Base
- Co-Versioning Registry
- Federated Learner
- Ethical Black Box
- Global-View Auditor