Overview

AI systems’ learning capabilities evolve. External contexts such as climate, energy, health, the economy, the environment, political circumstances, and operating contexts also change. Both AI systems and the environments in which they operate should therefore be continuously monitored and reassessed using appropriate metrics and mitigation processes, including methods to identify the potential appearance of new user groups who may be treated differentially by the AI system. Teams should consider the entire decision-making process, not only the algorithm in isolation. Even if an algorithm satisfies criteria for fairness or accuracy and is deemed not risky, it can still have downstream consequences once deployed in settings involving human interaction. Software tools that monitor system behaviour should be complemented by teams who can assess impacts and respond to affected stakeholders.
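As a minimal illustrative sketch only, not a prescribed procedure, the following Python fragment shows one way monitoring could be disaggregated by observed user group so that a newly appearing, differentially treated group is flagged. It assumes decisions are logged alongside a group label; the group labels, the positive-decision-rate metric, and the 0.2 gap threshold are hypothetical placeholders that would need to be agreed with stakeholders.

from collections import defaultdict

def positive_rates(logs):
    # logs: iterable of (group_label, decision) pairs, decision in {0, 1}
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in logs:
        totals[group] += 1
        positives[group] += decision
    overall = sum(positives.values()) / sum(totals.values())
    per_group = {g: positives[g] / totals[g] for g in totals}
    return overall, per_group

def flag_differential_treatment(overall, per_group, max_gap=0.2):
    # Flag any group, including one newly appearing in production logs,
    # whose positive-decision rate deviates from the overall rate by more
    # than the agreed tolerance (hypothetical value used here).
    return [g for g, rate in per_group.items() if abs(rate - overall) > max_gap]

# Example: group "D" appears in recent logs with markedly lower outcomes.
logs = ([("A", 1)] * 7 + [("A", 0)] * 3 +
        [("B", 1)] * 7 + [("B", 0)] * 3 +
        [("D", 0)] * 4)
overall, per_group = positive_rates(logs)
print(flag_differential_treatment(overall, per_group))  # prints ['D']

Running the example flags the newly observed group "D", which would then be referred to the team responsible for assessing and responding to affected stakeholders.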

Detailed policies and procedures on how to handle system output and behaviour should be developed and followed. Observed deviations from goals should trigger feedback loops and subsequent adjustments to data curation and problem formulation for the model, followed by further continuous testing and evaluation.
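A minimal sketch of such a feedback check is shown below, assuming goals are expressed as metric targets with agreed tolerances; the metric name subgroup_recall_gap, the target, and the tolerance are hypothetical placeholders, and the returned actions stand in for whatever follow-up the governing policies and procedures actually specify.

from dataclasses import dataclass

@dataclass
class Goal:
    metric: str        # name of a monitored metric (hypothetical example below)
    target: float      # value agreed with stakeholders
    tolerance: float   # acceptable deviation before action is triggered

def review_actions(observed, goals):
    # Return a follow-up action for every goal whose observed value is
    # missing or deviates from its target by more than the tolerance.
    actions = []
    for goal in goals:
        value = observed.get(goal.metric)
        if value is None or abs(value - goal.target) > goal.tolerance:
            actions.append(f"{goal.metric}: revisit data curation and problem "
                           f"formulation, then re-run testing and evaluation")
    return actions

goals = [Goal("subgroup_recall_gap", target=0.0, tolerance=0.05)]
print(review_actions({"subgroup_recall_gap": 0.12}, goals))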

Artificial Intelligence Ecosystem process diagram

A process diagram showing the application of Human, Data, Process, System and Governance elements to Diversity and Inclusion in Artificial Intelligence.