December 22, 2023

AI systems’ learning capabilities evolve. External conditions, such as climate, energy, health, the economy, the environment, and political circumstances, change, as do operating contexts. Both AI systems and the environments in which they operate should therefore be continuously monitored and reassessed using appropriate metrics and mitigation processes, including methods to identify the potential appearance of new user groups […]
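As an illustrative sketch of the continuous monitoring described above, the snippet below computes a population stability index (PSI) between a reference feature distribution and its current deployment distribution; a large value can flag distribution shift or the emergence of a new user group. The feature, the synthetic data, and the 0.2 alert threshold are assumptions for illustration, not values prescribed by the source.

```python
# Minimal drift-monitoring sketch, assuming tabular feature data.
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Compare a feature's current distribution against its reference distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    obs_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Clip empty bins to avoid division by zero and log(0).
    exp_pct = np.clip(exp_pct, 1e-6, None)
    obs_pct = np.clip(obs_pct, 1e-6, None)
    return float(np.sum((obs_pct - exp_pct) * np.log(obs_pct / exp_pct)))

reference_ages = np.random.default_rng(0).normal(45, 10, 5_000)  # users at training time
current_ages = np.random.default_rng(1).normal(30, 8, 5_000)     # users seen this month
psi = population_stability_index(reference_ages, current_ages)
if psi > 0.2:  # common rule-of-thumb threshold; tune for the deployment context
    print(f"PSI={psi:.2f}: input distribution has shifted; a new user group may be emerging.")
```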

December 22, 2023

Partner with ethicists and antiracism experts in developing, training, testing, and implementing models. Recruit diverse, representative populations for training samples.

December 22, 2023

A human-centered design (HCD) methodology for the development of AI systems, based on International Organization for Standardization (ISO) standard 9241-210:2019, could comprise:

• Defining the Context of Use, including operational environment, user characteristics, tasks, and social environment;
• Determining the User & Organizational Requirements, including business requirements, user requirements, and technical requirements;
• […]
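One lightweight way to capture the first two HCD artefacts above in a machine-readable form is sketched below. The record structure, field names, and example values are assumptions for illustration, not part of ISO 9241-210.

```python
# Illustrative records for documenting HCD artefacts; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ContextOfUse:
    operational_environment: str
    user_characteristics: list[str]
    tasks: list[str]
    social_environment: str

@dataclass
class Requirements:
    business: list[str] = field(default_factory=list)
    user: list[str] = field(default_factory=list)
    technical: list[str] = field(default_factory=list)

# Hypothetical example of a documented context of use.
triage_context = ContextOfUse(
    operational_environment="hospital emergency department",
    user_characteristics=["triage nurses", "varying familiarity with AI tools"],
    tasks=["prioritise incoming patients"],
    social_environment="time-pressured, shared workstations",
)
```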

December 22, 2023

Rather than treating fairness as a separate initiative, apply fairness analysis throughout the entire process, continuously re-evaluating models from the perspective of fairness and inclusion. The use of Model Performance Management tools or other methods should be considered to identify and mitigate any instances of intersectional unfairness. […]
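A minimal sketch of the kind of intersectional check such tools might perform is shown below, assuming a pandas DataFrame with hypothetical columns gender, age_band, y_true, and y_pred; the data are illustrative only.

```python
# Compute per-group metrics over intersections of attributes, not single attributes.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age_band": ["<30", "30+", "<30", "30+", "<30", "<30", "30+", "30+"],
    "y_true":   [1, 0, 1, 1, 0, 1, 1, 0],
    "y_pred":   [1, 0, 1, 1, 0, 0, 0, 0],
})

report = df.groupby(["gender", "age_band"]).apply(
    lambda g: pd.Series({
        "n": len(g),
        "selection_rate": g["y_pred"].mean(),
        "tpr": (g["y_pred"] & g["y_true"]).sum() / max(g["y_true"].sum(), 1),
    })
)
print(report)
# Large gaps between rows (for example, F/<30 versus M/30+) flag intersectional
# unfairness that per-attribute metrics can hide.
```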

December 22, 2023

Evaluation, even on crowdsourcing platforms used by ordinary people, should capture the types of interactions and decisions end users make. Evaluations should demonstrate what happens when the algorithm is integrated into a human decision-making process: does integration alter or improve the decision and the resulting decision-making process, as revealed by downstream outcomes?
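One possible shape for such an evaluation is sketched below: reviewers' decisions made with and without the algorithm's recommendation are compared on downstream outcomes. The data, column names, and design are hypothetical; a real study would randomise assignment and use far more participants.

```python
# Compare unaided versus algorithm-assisted human decisions on downstream outcomes.
import pandas as pd

decisions = pd.DataFrame({
    "condition": ["unaided"] * 4 + ["algorithm_assisted"] * 4,
    "decision":  [1, 0, 1, 1, 1, 1, 0, 1],   # reviewer's final call
    "outcome":   [1, 0, 0, 1, 1, 1, 0, 1],   # downstream ground truth
})

summary = decisions.groupby("condition").apply(
    lambda g: pd.Series({
        "approval_rate": g["decision"].mean(),
        "decision_accuracy": (g["decision"] == g["outcome"]).mean(),
    })
)
print(summary)  # Does assistance shift the decisions, and do outcomes actually improve?
```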

December 22, 2023

Teams should engage with the complexity of how people experience values and technology in daily life. Values should be understood holistically and as interrelated, rather than analyzed in isolation from one another.

December 22, 2023

Subject matter experts should create and oversee effective validation processes that address bias-related challenges, including noisy labelling (for example, mislabelled samples in training data), the use of proxy variables, and system testing under optimal conditions that are unrepresentative of the real-world deployment context.
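The sketch below illustrates two such validation checks on synthetic data using scikit-learn: an out-of-fold screen for likely mislabelled samples and a screen for features that act as proxies for a protected attribute. The dataset, the attribute name, and the 0.9 disagreement threshold are assumptions, not values from the source.

```python
# Illustrative validation screens for noisy labels and proxy variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                                    # model input features
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)      # task label
group = (X[:, 1] + 0.5 * rng.normal(size=500) > 0).astype(int)  # protected attribute,
                                                                # not an explicit input

# 1. Noisy-label screen: out-of-fold predictions that confidently disagree with the
#    recorded label flag samples for re-annotation by subject matter experts.
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
suspect = np.where(np.abs(proba - y) > 0.9)[0]
print(f"{len(suspect)} samples flagged for label review")

# 2. Proxy-variable screen: if the model's features predict the protected attribute
#    well, a nominally "blind" model can still encode it.
group_proba = cross_val_predict(LogisticRegression(max_iter=1000), X, group,
                                cv=5, method="predict_proba")[:, 1]
print(f"protected-attribute predictability (AUC): {roc_auc_score(group, group_proba):.2f}")
```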

December 22, 2023

During model training and implementation, the effectiveness of bias mitigation should be evaluated and adjusted. Bias identification processes should be assessed periodically and any gaps addressed. The model specification should document which sources of bias were identified and how, which mitigation techniques were used, and how successful mitigation was. A related performance assessment should be undertaken before model deployment.
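A minimal sketch of how the measured effect of mitigation might be recorded in the model specification is given below, assuming a binary classifier, a single protected attribute, and a demographic-parity metric; all field names and values are illustrative placeholders.

```python
# Measure a fairness gap before and after mitigation and record it in the model spec.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

model_specification = {
    "bias_sources_identified": ["historical under-representation in training data",
                                "postcode acting as a proxy variable"],
    "mitigation_techniques": ["reweighing of training samples", "proxy feature removed"],
    "parity_gap_before_mitigation": 0.18,    # from the pre-mitigation evaluation run
    "parity_gap_after_mitigation": 0.05,     # re-measured with demographic_parity_gap
    "pre_deployment_assessment": "pending",  # completed before the model is deployed
}
```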

December 22, 2023

Diverse values and cultural perspectives from multiple stakeholders and populations should be codified in mathematical models and AI system design. Basic steps should include incorporating input from diverse stakeholder cohorts, ensuring the development team embodies different kinds of diversity, establishing and reviewing metrics to capture diversity and inclusion elements throughout the AI-LC, and ensuring well-documented […]
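As one hedged example of such a metric, the sketch below compares observed group shares in a dataset against reference population shares; the grouping variable and target shares are assumptions chosen for illustration, and the same check could be repeated at each lifecycle stage.

```python
# Track representation gaps against a reference population throughout the lifecycle.
import pandas as pd

samples = pd.DataFrame({"region": ["EU", "EU", "NA", "NA", "NA", "APAC"]})
reference_shares = {"EU": 0.35, "NA": 0.35, "APAC": 0.30}  # target population shares

observed = samples["region"].value_counts(normalize=True)
for group, target in reference_shares.items():
    gap = observed.get(group, 0.0) - target
    print(f"{group}: observed {observed.get(group, 0.0):.2f}, "
          f"target {target:.2f}, gap {gap:+.2f}")
# Reviewing these gaps at each stage documents whether diverse cohorts stay represented.
```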

December 22, 2023

For example, before embedding gender classification into a facial analysis service or incorporating gender into image labelling, it is important to consider what purpose gender is serving. It is also important to consider how gender will be defined and whether that definition is unnecessarily exclusionary (for example, of people who are non-binary). Therefore, stakeholders involved in the development of […]