An AI system is a computer-based system that, for a given set of human-defined objectives, typically uses large historical datasets to make predictions, recommendations, or decisions for human consumption that may influence real or virtual environments. There are many techniques and methods for verifying, validating, and monitoring AI systems (e.g. testing, algorithmic analysis of models) against diversity and inclusion in AI principles. AI systems must be evaluated, tested, and monitored in the context of their use so that non-inclusive behaviors are identified and fixed as the system evolves. Non-adherence to diversity and inclusion practices in the building, deployment, and use of AI systems has been shown to cause digital redlining, discrimination, and algorithmic oppression, leading to AI systems being perceived as untrustworthy and unfair.
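As an illustration only, the following Python sketch shows one way such testing might look in practice: comparing a model's accuracy across demographic subgroups and flagging large gaps for investigation. The toy data, variable names, and the 0.05 tolerance are hypothetical assumptions made for the example, not values taken from these guidelines.

```python
# Minimal sketch: checking a trained model's predictions for uneven performance
# across demographic subgroups. The toy data and the 0.05 gap tolerance are
# illustrative assumptions only.
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Return per-group accuracy and the largest gap between any two groups."""
    accuracies = {}
    for g in np.unique(groups):
        mask = groups == g
        accuracies[str(g)] = float(np.mean(y_true[mask] == y_pred[mask]))
    gap = max(accuracies.values()) - min(accuracies.values())
    return accuracies, gap

# Hypothetical labels, predictions, and subgroup membership
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
groups = np.array(["group_a"] * 4 + ["group_b"] * 4)

accuracies, gap = subgroup_accuracy_gap(y_true, y_pred, groups)
print(accuracies, gap)
if gap > 0.05:  # tolerance would be set per context of use
    print("Potential non-inclusive behavior: investigate before release.")
```

Such a check is only one narrow probe; it would sit alongside the broader, in-context evaluation and monitoring the guidance describes.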

S04 Evaluate, adjust, and document bias identification and mitigation measures

During model training and implementation, bias mitigation measures should be evaluated for effectiveness and adjusted as needed. Bias identification processes should be assessed periodically and any gaps addressed. The model specification should document which sources of bias were identified and how, which mitigation techniques were used, and how successful the mitigation was. A related performance assessment should be undertaken before model deployment.
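The guidance does not prescribe a format for this documentation. As a hedged illustration, the sketch below records a before-and-after fairness metric (demographic parity difference) alongside the identified bias sources and the mitigation technique in a simple model-specification record; the field names, data, and choice of metric are assumptions made for the example.

```python
# Illustrative sketch of documenting bias identification and mitigation results
# in a model specification record. Field names, metric choice, and data are
# hypothetical assumptions, not a prescribed schema.
import json
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Largest difference in positive-prediction rates across subgroups."""
    rates = [float(np.mean(y_pred[groups == g])) for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical predictions before and after applying a mitigation technique
groups = np.array(["group_a"] * 5 + ["group_b"] * 5)
pred_before = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])
pred_after  = np.array([1, 1, 1, 0, 0, 1, 1, 0, 0, 0])

model_spec = {
    "bias_sources_identified": ["historical sampling bias in training data"],
    "mitigation_technique": "training-data reweighting",
    "demographic_parity_difference": {
        "before_mitigation": demographic_parity_difference(pred_before, groups),
        "after_mitigation": demographic_parity_difference(pred_after, groups),
    },
}
print(json.dumps(model_spec, indent=2))
```

Keeping such a record versioned with the model makes it straightforward to repeat the assessment before deployment and during later system evolution.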

S03 Employ model design techniques attuned to diversity and inclusion considerations

Diverse values and cultural perspectives from multiple stakeholders and populations should be codified in mathematical models and AI system design. Model design techniques are necessarily contextual, depending on the type of AI technology, the purpose and scope of the system, how users will be impacted, and so forth. However, basic steps should include incorporating input […]
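As one hedged example of a contextual design step, the sketch below weights training samples inversely to subgroup size so that an underrepresented group is not drowned out during model fitting. The synthetic data, the 80/20 subgroup split, and the use of scikit-learn logistic regression are illustrative assumptions rather than a recommended design.

```python
# Illustrative sketch of one contextual design step: weighting training samples
# inversely to subgroup size so an underrepresented group is not drowned out.
# Synthetic data and the choice of logistic regression are assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data with an imbalanced subgroup attribute
X = rng.normal(size=(200, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
groups = np.where(rng.random(200) < 0.8, "majority", "minority")

# Weight each sample inversely to the size of its subgroup
uniq, counts = np.unique(groups, return_counts=True)
group_weight = {g: len(groups) / c for g, c in zip(uniq, counts)}
sample_weight = np.array([group_weight[g] for g in groups])

model = LogisticRegression().fit(X, y, sample_weight=sample_weight)
print("Training accuracy:", model.score(X, y))
```

Reweighting is only one of many possible techniques; the appropriate choice depends on the system's purpose, data, and the stakeholders affected.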

S02 Understand AI systems through a holistic lens

Code is not the right level of abstraction at which to understand AI systems, whether for accountability or adaptability. Instead, systems should be analyzed in terms of their inputs and outputs, overall design, embedded values, and how the software system fits within the institution deploying it.

S01 Establish inclusive and informed product development, training, evaluation, and sign-off

New stakeholders should be brought in for iterative rounds of product development, training, and testing, and beta groups should be recruited for test deployments. User groups should reflect different needs and abilities. Fresh perspectives contribute to the evaluation of both the AI system’s functionality and, importantly, its level and quality of inclusivity. New or emergent […]

Artificial Intelligence Ecosystem process diagram: a process diagram showing the application of Human, Data, Process, System and Governance elements to Diversity and Inclusion in Artificial Intelligence.