
December 22, 2023

During model training and implementation, the effectiveness of bias mitigation should be evaluated and the mitigation adjusted as needed. Bias identification processes should be assessed periodically and any gaps addressed. The model specification should document which sources of bias were identified and how, which mitigation techniques were used, and how successful the mitigation was. A related performance assessment should be undertaken before model deployment.
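
One hedged way to make "how successful the mitigation was" concrete is to compare a subgroup metric before and after mitigation. A minimal sketch, assuming a binary classifier and two illustrative groups (all names, data, and the chosen metric below are assumptions, not part of the recommendation):

```python
# Minimal sketch of quantifying bias-mitigation effectiveness.
# Groups "A"/"B" and all predictions below are illustrative assumptions.

def selection_rate(preds, groups, group):
    """Share of positive predictions within one group."""
    in_group = [p for p, g in zip(preds, groups) if g == group]
    return sum(in_group) / len(in_group)

def parity_gap(preds, groups):
    """Absolute gap in positive-prediction rates between groups A and B."""
    return abs(selection_rate(preds, groups, "A")
               - selection_rate(preds, groups, "B"))

groups = ["A", "A", "A", "B", "B", "B"]
before = [1, 1, 0, 0, 0, 0]  # predictions before mitigation
after  = [1, 1, 0, 1, 0, 0]  # predictions after mitigation

# Record both gaps in the model specification; on this metric, the
# mitigation succeeded if the gap narrowed.
print(parity_gap(before, groups), parity_gap(after, groups))
```

Both numbers would go into the model specification alongside the mitigation technique used, so the pre-deployment performance assessment can verify them.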

December 22, 2023

Diverse values and cultural perspectives from multiple stakeholders and populations should be codified in mathematical models and AI system design. Model design techniques are necessarily contextual, related to the type of AI technology, the purpose and scope of the system, how users will be impacted, and so forth. However, basic steps should include incorporating input […]

December 22, 2023

Code is not the right level of abstraction at which to understand AI systems, whether for accountability or adaptability. Instead, systems should be analyzed in terms of inputs and outputs, overall design, embedded values, and how the software system fits within the institution deploying it.

December 22, 2023

The deploying organisation and other stakeholders should use documented model specifications to test and evaluate bias characteristics during deployment in the specific context.
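
As one hedged illustration of how a documented model specification might drive deployment-time bias testing: the specification records an acceptance bound, and the deploying organisation checks observed subgroup behaviour against it. Every field name, threshold, and data point below is an illustrative assumption:

```python
# Illustrative model specification carried from development into deployment.
# Field names and the 0.10 threshold are hypothetical, not from any standard.
MODEL_SPEC = {
    "bias_sources_identified": ["sampling bias", "label noise"],
    "mitigation": "reweighing",
    "max_subgroup_error_gap": 0.10,  # documented acceptance bound
}

def within_spec(subgroup_errors, spec):
    """Check observed per-subgroup error rates against the documented bound."""
    gap = max(subgroup_errors.values()) - min(subgroup_errors.values())
    return gap <= spec["max_subgroup_error_gap"]

# Error rates observed in the specific deployment context (made-up numbers):
observed = {"group_a": 0.12, "group_b": 0.25}
print(within_spec(observed, MODEL_SPEC))  # gap 0.13 exceeds the bound
```

A failing check like this one signals that bias characteristics in the deployment context diverge from what the specification documented, triggering re-evaluation.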

December 22, 2023

A human-centered design (HCD) methodology for the development of AI systems, based on International Organization for Standardization (ISO) standard 9241-210:2019, could comprise:
• Defining the Context of Use, including operational environment, user characteristics, tasks, and social environment;
• Determining the User & Organizational Requirements, including business requirements, user requirements, and technical requirements;
• […]

December 22, 2023

Rather than thinking of fairness as a separate initiative, it’s important to apply fairness analysis throughout the entire process, making sure to continuously re-evaluate the models from the perspective of fairness and inclusion. The use of Model Performance Management tools or other methods should be considered to identify and mitigate any instances of intersectional unfairness. […]
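
The intersectional point can be made concrete with a small sketch: metrics are disaggregated per intersection of attributes rather than per single attribute, so unfairness hidden in attribute crossings can surface. All names and data below are illustrative assumptions, and the hand-rolled helper stands in for dedicated tooling:

```python
# Sketch of an intersectional fairness check (names and data are made up).
from collections import defaultdict

def rates_by_intersection(records):
    """records: (prediction, attr_1, attr_2) tuples -> rate per intersection."""
    counts = defaultdict(lambda: [0, 0])  # intersection -> [positives, total]
    for pred, a1, a2 in records:
        counts[(a1, a2)][0] += pred
        counts[(a1, a2)][1] += 1
    return {key: pos / tot for key, (pos, tot) in counts.items()}

data = [
    (1, "F", "young"), (0, "F", "old"),
    (0, "M", "young"), (1, "M", "old"),
]
rates = rates_by_intersection(data)
# Each single attribute looks balanced (every marginal rate is 0.5),
# yet the intersections sit at the extremes:
print(rates[("F", "young")], rates[("M", "old")])  # 1.0 1.0
print(rates[("F", "old")], rates[("M", "young")])  # 0.0 0.0
```

Re-running such disaggregated checks at every stage, rather than once, is what the recommendation means by continuous re-evaluation.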

December 22, 2023

Evaluation, even on crowdsourcing platforms used by ordinary people, should capture the types of interactions and decisions end users make. Evaluations should demonstrate what happens when the algorithm is integrated into a human decision-making process: does it alter or improve the decision, and the resulting decision-making process, as revealed by the downstream outcome?
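
A minimal sketch of the downstream comparison this implies, with entirely made-up data: the same decisions are scored against observed outcomes once for humans acting alone and once for humans aided by the algorithm.

```python
# Illustrative sketch (all data fabricated): comparing downstream outcomes
# for decisions made by humans alone versus humans with algorithmic input.

def outcome_rate(decisions, outcomes):
    """Share of decisions that matched the observed downstream outcome."""
    hits = sum(d == o for d, o in zip(decisions, outcomes))
    return hits / len(decisions)

outcomes       = [1, 0, 1, 1, 0, 1]  # downstream ground truth
human_alone    = [1, 1, 0, 1, 0, 0]  # decisions without the algorithm
human_plus_alg = [1, 0, 1, 1, 1, 1]  # decisions with the algorithm in the loop

print(outcome_rate(human_alone, outcomes))
print(outcome_rate(human_plus_alg, outcomes))
```

Evaluating the integrated human-plus-algorithm process, rather than the model in isolation, is the point of the comparison.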

December 22, 2023

Subject matter experts should create and oversee effective validation processes that address bias-related challenges, including noisy labelling (for example, mislabeled samples in training data), the use of proxy variables, and system tests performed under optimal conditions that are unrepresentative of the real-world deployment context.
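
For the noisy-labelling challenge specifically, one hedged sketch of a validation step: flag training samples whose label disagrees with the majority label of their nearest neighbours. This toy heuristic is a stand-in for more principled approaches such as confident learning; the data and the choice of k below are illustrative assumptions:

```python
# Hypothetical label-noise screen: a sample is suspect if its label
# disagrees with the majority label of its k nearest neighbours.

def flag_suspect_labels(points, labels, k=3):
    suspects = []
    for i, (x, y) in enumerate(points):
        # L1 distances to every other point, nearest first
        dists = sorted(
            (abs(x - px) + abs(y - py), j)
            for j, (px, py) in enumerate(points) if j != i
        )
        neighbours = [labels[j] for _, j in dists[:k]]
        majority = max(set(neighbours), key=neighbours.count)
        if majority != labels[i]:
            suspects.append(i)
    return suspects

# Two tight clusters; index 3 sits in cluster "a" but carries label "b".
points = [(0, 0), (0, 1), (1, 0), (1, 1), (5, 5), (5, 6), (6, 5), (6, 6)]
labels = ["a", "a", "a", "b", "b", "b", "b", "b"]
print(flag_suspect_labels(points, labels))  # [3]
```

Flagged samples would then be routed to subject matter experts for review rather than silently relabelled.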

December 22, 2023

Diverse values and cultural perspectives from multiple stakeholders and populations should be codified in mathematical models and AI system design. Basic steps should include incorporating input from diverse stakeholder cohorts, ensuring the development team embodies different kinds of diversity, establishing and reviewing metrics to capture diversity and inclusion elements throughout the AI-LC, and ensuring well-documented […]