The development process describes all the activities and tasks that are carried out to deliver an AI system for a specific context of use. This definition divides the process into three sub-processes: pre-development, during development, and post-development.
  • The Pre-development phase refers to the ideation of a use case or a problem that the AI system is intended to address. It also includes clearly defining the use case or the problem and the rationale behind the application of an AI solution, as well as identifying relevant stakeholders and eliciting their requirements.
  • During Development refers to the team partnering with stakeholders to work on data collection and preparation, model design and development, and testing and evaluation of the AI system iteratively and incrementally.
  • Post-development refers to the deployment of the AI system in the context of its use, monitoring its performance, safety, reliability, and trustworthiness during use, as well as making changes as necessary during the AI system lifecycle.
Diversity and inclusion principles should be carefully considered and embedded throughout the entire AI system development process.

Pre-Development

P04 Consider specific categories relevant to the AI system

For example, before embedding gender classification into a facial analysis service or incorporating gender into image labelling, it is important to consider what purpose gender is serving. Furthermore, it is important to consider how gender will be defined, and whether that definition is unnecessarily exclusionary (for example, of non-binary people). Therefore, stakeholders involved in the development of […]

P03 Identify possible systemic problems of bias and appoint a steward

At the start of the Pre-Design stage, stakeholders should identify possible systemic problems of bias such as racism, sexism, or ageism that have implications for diversity and inclusion. Main decision-makers and power holders should be identified, as this can reflect systemic biases and limited viewpoints within the organisation. A sole person responsible for algorithmic bias […]

P02 Establish mechanisms for monitoring and improvement

Mechanisms enabling an iterative process of continuous monitoring and improvement of diversity and inclusion considerations should be established from the outset. These will help ensure that all stakeholders’ needs are met, and that inadvertent harm is not caused. Both team and system performance should be regularly assessed, improvements identified, and changes executed accordingly.

P01 Practice inclusive problem identification and impact assessment

A project owner (individual or organisation) with suitable expertise and resources to manage an AI system project should be identified, ensuring that accountability mechanisms to counter potential harm are built in. It should be decided which other stakeholders will be involved in the system’s development and regulation. Both intended and unintended impacts that the AI […]

Development

P13 Establish diverse partnerships and training populations

Teams should partner with ethicists and anti-racism experts in developing, training, testing, and implementing models, and recruit diverse and representative populations for training samples.

P12 Assess the suitability of Human-centered design (HCD) methodology for AI system development

A Human-centered design (HCD) methodology for the development of AI systems, based on International Organization for Standardization (ISO) standard 9241-210:2019, could comprise:
  • Defining the Context of Use, including operational environment, user characteristics, tasks, and social environment;
  • Determining the User & Organizational Requirements, including business requirements, user requirements, and technical requirements;
  • […]

P11 Apply fairness analysis throughout the development process

Rather than treating fairness as a separate initiative, it is important to apply fairness analysis throughout the entire process, continuously re-evaluating the models from the perspective of fairness and inclusion. The use of Model Performance Management tools or other methods should be considered to identify and mitigate any instances of intersectional unfairness. […]
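As an illustrative sketch of what an intersectional fairness check might look like in practice (the attribute names `gender` and `age_band` and the record layout are hypothetical, not prescribed by this guidance), the following computes selection rates per intersectional subgroup and the largest gap between them, a demographic-parity style measure:

```python
from collections import defaultdict

def selection_rates(records):
    """Positive-prediction (selection) rate for each intersectional
    subgroup, keyed by a tuple of attribute values."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for rec in records:
        key = (rec["gender"], rec["age_band"])  # hypothetical attributes
        totals[key] += 1
        positives[key] += rec["prediction"]
    return {k: positives[k] / totals[k] for k in totals}

def max_disparity(rates):
    """Largest gap in selection rate across subgroups; a large gap
    flags possible intersectional unfairness for closer review."""
    values = list(rates.values())
    return max(values) - min(values)
```

Re-running such a check after every retraining cycle is one concrete way to make fairness analysis continuous rather than a one-off audit.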

P10 Construct evaluation tasks that best mirror the real-world setting

Evaluation, even when conducted on crowdsourcing platforms with lay participants, should capture the types of interactions and decisions that end users make. The evaluations should demonstrate what happens when the algorithm is integrated into a human decision-making process. Does it alter or improve the decision and the resultant decision-making process, as revealed by the downstream outcome?

P09 Follow holistic Value Sensitive Design principles and methodology

Teams should engage with the complexity in which people experience values and technology in daily life. Values should be understood holistically and as being interrelated, rather than being analyzed in isolation from one another.

P08 Create effective validation processes

Subject matter experts should create and oversee effective validation processes that address bias-related challenges, including noisy labelling (for example, mislabelled samples in training data), the use of proxy variables, and system testing under optimal conditions that are unrepresentative of the real-world deployment context.
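One of the challenges named above, proxy variables, can be given a coarse first-pass screen in code. The sketch below (feature names and the 0.8 threshold are illustrative assumptions, not recommendations from this guidance) flags features whose correlation with a protected attribute is high enough to warrant expert review:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch so the
    sketch has no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def flag_proxies(features, protected, threshold=0.8):
    """Flag features strongly correlated with a protected attribute.
    A flag is a prompt for subject-matter-expert review, not proof
    of a proxy: low correlation does not rule proxying out either."""
    return [name for name, vals in features.items()
            if abs(pearson(vals, protected)) >= threshold]
```

A linear correlation screen misses non-linear or combined proxies, which is exactly why the practice above places experts, not automated checks, in charge of the validation process.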

P07 Evaluate, adjust, and document bias identification and mitigation measures

During model training and implementation, the effectiveness of bias mitigation measures should be evaluated and the measures adjusted accordingly. Bias identification processes should be assessed periodically and any gaps addressed. The model specification should document which sources of bias were identified and how, which mitigation techniques were used, and how successful mitigation was. A related performance assessment should be undertaken before model deployment.

P06 Employ model designs attuned to diversity and inclusion

Diverse values and cultural perspectives from multiple stakeholders and populations should be codified in mathematical models and AI system design. Basic steps should include incorporating input from diverse stakeholder cohorts, ensuring the development team embodies different kinds of diversity, establishing and reviewing metrics to capture diversity and inclusion elements throughout the AI-LC, and ensuring well-documented […]

P05 Consider multiple trade-offs

In the design stage, decisions should weigh the socio-technical implications of the multiple trade-offs inherent in AI systems. These trade-offs include the system's predictive accuracy, which is measured by several metrics, such as accuracies within sub-populations or across different use cases (partial and total accuracies). Fairness outcomes for different sub-groups of people the […]

Post-Development

P18 Collect demographic data from users to aid bias monitoring

Monitoring for bias should collect demographic data from users, including age and gender identity, to enable the calculation of assessment measures.
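As a sketch of one assessment measure such data enables (the log layout and the `age_band` attribute are hypothetical assumptions), deployed-model accuracy can be broken down by a collected demographic attribute:

```python
from collections import defaultdict

def per_group_accuracy(logs, attribute):
    """Accuracy of deployed predictions broken down by a demographic
    attribute collected from users; a widening gap between groups is
    a signal for the bias-monitoring process to investigate."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for entry in logs:
        group = entry[attribute]
        total[group] += 1
        correct[group] += int(entry["prediction"] == entry["outcome"])
    return {g: correct[g] / total[g] for g in total}
```

Such demographic data is sensitive, so any real implementation would need consent, minimisation, and secure handling alongside the calculation itself.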

P17 Test and evaluate bias characteristics during deployment

The deploying organisation and other stakeholders should use documented model specifications to test and evaluate bias characteristics during deployment in the specific context. 

P16 Monitor and audit changing AI system impacts

It is critical to monitor the use of advanced analytics and AI technology to ensure that benefits are accruing to diverse groups in an equitable manner. The scale of AI system impact can change rapidly and unevenly when deployed. Organisations should build resilience, flexibility, and sensitivity to respond to changes to ensure equitable and inclusive outcomes. 

P15 Undertake holistic monitoring of external impacts

AI systems’ learning capabilities evolve. External contexts such as climate, energy, health, economy, environment, political circumstances, and operating contexts also change. Therefore, both AI systems and the environment in which they operate should be continuously monitored and reassessed using appropriate metrics and mitigation processes, including methods to identify the potential appearance of new user groups […]
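One common metric for the continuous reassessment described above is the Population Stability Index (PSI), which quantifies drift between a baseline distribution and the one currently observed in operation. A minimal sketch (the 0.25 "significant drift" threshold is a widely used rule of thumb, not a value set by this guidance):

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions
    (each a list of proportions summing to 1). Values above roughly
    0.25 are commonly read as significant drift warranting review."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual)
               if e > 0 and a > 0)  # skip empty bins
```

Applied to the distribution of model inputs or of user demographics, a rising PSI can surface the appearance of new user groups before it shows up in aggregate performance figures.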

P14 Monitor and evaluate during deployment

New or emergent stakeholder cohorts should participate in system monitoring and retraining. Stakeholders should be involved in a final review and sign-off, particularly if their input propelled significant changes in design or development processes. After validation, teams should obtain informed consent on the developed product features from impacted stakeholders, to track and respond to the […]

Artificial Intelligence Ecosystem process diagram

A process diagram showing the application of Human, Data, Process, System and Governance elements to Diversity and Inclusion in Artificial Intelligence.