Follow AI risk assessment frameworks
December 22, 2023
Teams should develop diversity and inclusion policies and procedures addressing key roles, responsibilities, and processes within organisations adopting AI. Bias risk management policies should specify how risks of bias will be mapped and measured, and according to what standards. AI risk practices and associated checks and balances should be embedded throughout all […]
Collect demographic data from users to aid bias monitoring
December 22, 2023
Bias monitoring should include collecting demographic data from users, such as age and gender identity, to enable the calculation of assessment measures.
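A minimal sketch of one such assessment measure, assuming a simple selection-rate comparison across the collected demographic groups (the function names and the demographic-parity gap are illustrative choices, not mandated by this guidance):

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group selection rates from (group, was_selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())
```

The same pattern extends to other measures, such as error-rate differences, once demographic attributes are attached to outcomes.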
Test and evaluate bias characteristics during deployment
December 22, 2023
The deploying organisation and other stakeholders should use documented model specifications to test and evaluate bias characteristics during deployment in the specific context.
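One way such a deployment-time check might look, assuming the documented model specification lists a maximum allowed per-group gap for each metric (the dictionary shapes and function name here are assumptions for illustration):

```python
def check_spec(spec, observed):
    """Compare observed per-group metric values against documented thresholds.

    spec: {metric_name: max_allowed_gap_between_groups}
    observed: {metric_name: {group: value}}
    Returns (metric, gap, threshold) tuples for every violated threshold.
    """
    violations = []
    for metric, max_gap in spec.items():
        values = observed.get(metric, {})
        if not values:
            continue  # metric not yet measured in this deployment context
        gap = max(values.values()) - min(values.values())
        if gap > max_gap:
            violations.append((metric, gap, max_gap))
    return violations
```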
Monitor and audit changing AI system impacts
December 22, 2023
It is critical to monitor the use of advanced analytics and AI technology to ensure that benefits are accruing to diverse groups in an equitable manner. The scale of AI system impact can change rapidly and unevenly when deployed. Organisations should build resilience, flexibility, and sensitivity to respond to such changes and ensure equitable and inclusive outcomes.
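As a sketch of such monitoring, assuming per-group benefit rates are tracked against a baseline recorded at deployment (the tolerance value and function shape are illustrative assumptions):

```python
def drift_alerts(baseline_rates, current_rates, tolerance=0.1):
    """Flag groups whose current benefit rate has drifted from baseline.

    baseline_rates / current_rates: {group: rate in [0, 1]}
    Groups absent from current data are treated as rate 0.0.
    """
    alerts = []
    for group, base in baseline_rates.items():
        current = current_rates.get(group, 0.0)
        if abs(current - base) > tolerance:
            alerts.append(group)
    return alerts
```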
Undertake holistic monitoring of external impacts
December 22, 2023
AI systems’ learning capabilities evolve. External contexts such as climate, energy, health, economy, environment, political circumstances, and operating contexts also change. Therefore, both AI systems and the environment in which they operate should be continuously monitored and reassessed using appropriate metrics and mitigation processes, including methods to identify the potential appearance of new user groups […]
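One simple method for spotting the appearance of new user groups, assuming deployment records carry a group label and baseline groups were catalogued at launch (the minimum-count cutoff is an illustrative assumption to avoid flagging noise):

```python
def detect_new_groups(baseline_groups, observed_groups, min_count=30):
    """Return groups seen in deployment that were absent at baseline.

    baseline_groups: set of group labels known when the system launched
    observed_groups: iterable of group labels, one per deployment record
    """
    counts = {}
    for group in observed_groups:
        counts[group] = counts.get(group, 0) + 1
    return sorted(g for g, n in counts.items()
                  if g not in baseline_groups and n >= min_count)
```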
Apply fairness analysis throughout the development process
December 22, 2023
Rather than thinking of fairness as a separate initiative, it’s important to apply fairness analysis throughout the entire process, making sure to continuously re-evaluate the models from the perspective of fairness and inclusion. The use of Model Performance Management tools or other methods should be considered to identify and mitigate any instances of intersectional unfairness. […]
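A minimal sketch of an intersectional check, assuming each record carries two demographic attributes and a binary outcome (the attribute pair and function names are illustrative assumptions, not a specific Model Performance Management tool):

```python
def intersectional_rates(records):
    """Positive-outcome rate per intersection of two attributes.

    records: (attribute_a, attribute_b, positive_outcome) triples,
    e.g. (gender, age_band, loan_approved).
    """
    totals, positives = {}, {}
    for attr_a, attr_b, positive in records:
        key = (attr_a, attr_b)
        totals[key] = totals.get(key, 0) + 1
        positives[key] = positives.get(key, 0) + int(positive)
    return {k: positives[k] / totals[k] for k in totals}

def worst_intersection(records):
    """The intersectional subgroup with the lowest positive-outcome rate."""
    rates = intersectional_rates(records)
    return min(rates, key=rates.get)
```

Evaluating at the intersection level can surface unfairness that per-attribute analysis hides, for instance a model that looks fair by gender and by age band separately but not for one particular combination.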
Construct evaluation tasks that best mirror the real-world setting
December 22, 2023
Evaluation, even on crowdsourcing platforms used by ordinary people, should capture the types of interactions and decisions that end users actually make. The evaluations should demonstrate what happens when the algorithm is integrated into a human decision-making process. Does it alter or improve the decision and the resultant decision-making process, as revealed by the downstream outcome?
Evaluate, adjust, and document bias identification and mitigation measures
December 22, 2023
During model training and implementation, the effectiveness of bias mitigation should be evaluated and adjusted. Periodically assess bias identification processes and address any gaps. The model specification should record which sources of bias were identified and how, which mitigation techniques were used, and how successful mitigation was. A related performance assessment should be undertaken before model deployment.
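The documentation requirement above could be captured in a structured record such as the following sketch (the field names are illustrative assumptions about what a model-specification entry might hold):

```python
from dataclasses import dataclass

@dataclass
class BiasMitigationRecord:
    """One model-specification entry documenting an identified bias source."""
    source: str            # e.g. "under-representation of rural users in training data"
    identification: str    # how the bias was found, e.g. "subgroup error analysis"
    technique: str         # mitigation applied, e.g. "sample reweighting"
    pre_mitigation: float  # bias metric value before mitigation
    post_mitigation: float # bias metric value after mitigation

    def improvement(self) -> float:
        """Reduction in the measured bias metric achieved by mitigation."""
        return self.pre_mitigation - self.post_mitigation
```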
Employ model designs attuned to diversity and inclusion
December 22, 2023
Diverse values and cultural perspectives from multiple stakeholders and populations should be codified in mathematical models and AI system design. Basic steps should include incorporating input from diverse stakeholder cohorts, ensuring the development team embodies different kinds of diversity, establishing and reviewing metrics to capture diversity and inclusion elements throughout the AI-LC, and ensuring well-documented […]
Prioritise equitable hiring practices & career-building opportunities
December 18, 2023
Data science teams should be as diverse as the populations that the AI systems they build will affect. Product teams leading and working on AI projects should be diverse and representative of impacted user cohorts. Diversity, equity, and inclusion in the composition of teams training, testing, and deploying AI systems should be prioritised, as diversity of experience, expertise, and backgrounds is both a critical risk mitigant and a means of broadening AI system designers’ and engineers’ perspectives. For example, female-identifying role models should be fostered in AI projects. Diversity and inclusion employment targets and strategies should be regularly monitored and adjusted if necessary.

The WEF Blueprint recommends four levers. First, widen career paths by employing people from non-traditional AI backgrounds and embed this goal in strategic workforce planning; backgrounds in marketing, social media marketing, social work, education, public health, and journalism, for instance, can contribute fresh perspectives and expertise. Second, cover diversity and inclusion in training and development programs via mentorships, job shadowing, simulation exercises, and contact with diverse end-user panels. Third, establish partnerships with academic, civil society, and public sector institutions to contribute to holistic, pan-disciplinary reviews of AI systems, diversity and inclusion audits, and assessments of social impacts. Fourth, create a workplace culture of belonging and periodically assess it via both open and confidential feedback mechanisms that include diversity markers.