Continuous Deployment for RAI
Summary: New versions of AI systems can be seamlessly deployed into production environments by utilizing various deployment strategies that ensure fulfillment of RAI requirements.
Type of pattern: Process pattern
Type of objective: Trustworthiness
Target users: Operators
Impacted stakeholders: Developers, AI users, AI consumers
Lifecycle stages: Operation
Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability
Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.
Context: AI systems are often required to evolve frequently due to their dependence on data. Because the ethical performance of AI models may degrade over time, they need to be retrained with new data or features, and reintegrated into the AI components. The non-AI components may also need to be upgraded. As a result, it is necessary to continuously and frequently deploy new versions of AI systems into production environments. However, the autonomy of AI systems introduces a higher degree of uncertainty and risk. To mitigate the risks, various deployment strategies that support continuous deployment are highly desirable.
Problem: How can we ensure that new versions of AI systems are seamlessly deployed to production environments?
Solution: Several deployment strategies can be used to deploy new versions of AI systems to production environments seamlessly. Phased deployment initially releases the new version to a sub-group of users with the goal of reducing ethical risk; the new version is rolled out incrementally and runs alongside the old one. Phased deployment can also be used to better supervise and control automation, and the pace of the rollout usually depends on the potential consequences of wrong decisions and the level of trust users place in the automated decisions made by the AI system. Another strategy is A/B testing deployment, which is widely used in industry: different versions of the AI model are deployed to production and their performance is compared, and the version with the best ethical performance is selected. Finally, existing practices such as redundancy can be applied to the AI components of a system, where multiple AI models work independently and their outputs are combined to improve ethical performance.
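As an illustration, the phased and A/B testing strategies above can be sketched as a simple traffic router with a per-version fairness check. This is a minimal sketch, not a specific platform's API: the model functions are hypothetical stand-ins, and the demographic-parity gap is one example of an ethical metric that could drive version selection.

```python
import random

def predict_stable(x):
    # Hypothetical stand-in for the current production model.
    return "approve" if x >= 0.5 else "reject"

def predict_candidate(x):
    # Hypothetical stand-in for the new model version being rolled out.
    return "approve" if x >= 0.4 else "reject"

def phased_route(x, canary_fraction=0.1, rng=random.random):
    """Phased deployment: send a small fraction of traffic to the new
    version, which runs alongside the old one."""
    if rng() < canary_fraction:
        return "candidate", predict_candidate(x)
    return "stable", predict_stable(x)

def demographic_parity_gap(records):
    """A/B comparison on an ethical metric: for each deployed version (arm),
    compute the gap in approval rates across user groups. records is a list
    of (arm, group, decision) tuples collected during the rollout."""
    gaps = {}
    for arm in {a for a, _, _ in records}:
        by_group = {}
        for a, group, decision in records:
            if a == arm:
                by_group.setdefault(group, []).append(decision == "approve")
        rates = [sum(v) / len(v) for v in by_group.values()]
        gaps[arm] = max(rates) - min(rates)
    return gaps
```

The version with the smaller parity gap (or the better overall ethical performance) would be promoted; the `rng` parameter only exists to make the routing decision testable.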
Benefits:
- Reduced ethical risk: When small changes are frequently deployed with various deployment strategies, it is easier to identify and address ethical risks early on, which can lead to improved ethical quality of the overall system.
- Improved customer satisfaction: Continuous deployment with various strategies allows for faster delivery of new models with better ethical quality, resulting in improved customer satisfaction.
Drawbacks:
- Increased complexity: Frequent deployment using different deployment strategies can make it more complex to keep track of the changes made and their potential impact on the overall system.
- Reduced monitorability: Monitoring the behavior of multiple concurrently deployed versions can be challenging, especially if the AI system is large and complex.
Related patterns:
- Both the Multi-model decision maker and Homogeneous redundancy patterns apply redundancy as a deployment strategy at different levels, i.e., AI models and AI components respectively.
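The redundancy idea behind these related patterns can be sketched as a majority vote over independently trained models. This is a hedged illustration under the assumption that each model exposes a simple prediction function; the threshold-based models below are hypothetical stand-ins.

```python
from collections import Counter

def majority_vote(models, x):
    """Redundancy at the model level: independent models score the same
    input and the most common decision wins."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# Hypothetical stand-ins for independently trained models with slightly
# different decision thresholds (t=t binds each threshold at definition time).
models = [lambda x, t=t: "approve" if x >= t else "reject"
          for t in (0.4, 0.5, 0.6)]
```

A single model flipping its decision near its threshold is then outvoted by the others, which is what improves robustness of the combined decision.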
Known uses:
- Sato et al. summarize various deployment strategies for machine learning applications.
- Amazon SageMaker provides services for training and deploying machine learning models.
- Microsoft Azure Machine Learning is a platform for automating the machine learning lifecycle including deployment.