
December 22, 2023

Organizations should implement responsible AI leadership, drawing on existing resources such as UC Berkeley’s Equity Fluent Leadership Playbook. They should engage personnel to implement and monitor compliance with AI ethics principles, and train leaders to operationalize AI and data governance and to measure engagement. Governance mechanisms and guidelines should be connected with lower-level development and design patterns. For example, the risk assessment framework can be […]

December 22, 2023

Teams should develop diversity and inclusion policies and procedures addressing key roles, responsibilities, and processes within organizations that adopt AI. Bias risk management policies should specify how risks of bias will be mapped and measured, and according to what standards. AI risk practice and associated checks and balances should be embedded and ingrained throughout all […]

December 22, 2023

Bias mitigation should be aligned with relevant existing and emerging legal standards. This includes national and state laws covering AI use in hiring, eligibility decisions (e.g., credit, housing, education), discrimination prohibitions (e.g., race, gender, religion, age, disability status), privacy, and unfair or deceptive practices.

December 22, 2023

AI is not quarantined from negative societal realities such as discrimination and unfair practices. Consequently, it is arguably impossible to achieve zero risk of bias in an AI system. Therefore, AI bias risk management should aim to mitigate rather than avoid risks. Risks can be triaged and tiered; resources allocated to the most material risks, […]
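The triage-and-tier approach above can be sketched in code. This is a minimal, hypothetical illustration: the risk names, the likelihood and impact scales, the multiplicative scoring, and the tier cutoffs are all assumptions for demonstration, not a prescribed methodology.

```python
# Hypothetical sketch: triage identified bias risks into tiers so that
# mitigation resources go to the most material risks first.
# Scores and cutoffs below are illustrative assumptions.

def tier_risks(risks, high_cutoff=12, medium_cutoff=6):
    """Score each risk as likelihood * impact and bucket into tiers."""
    tiers = {"high": [], "medium": [], "low": []}
    for name, likelihood, impact in risks:
        score = likelihood * impact  # simple materiality score
        if score >= high_cutoff:
            tiers["high"].append((name, score))
        elif score >= medium_cutoff:
            tiers["medium"].append((name, score))
        else:
            tiers["low"].append((name, score))
    for bucket in tiers.values():
        bucket.sort(key=lambda item: item[1], reverse=True)  # most material first
    return tiers

risks = [
    ("proxy variable in eligibility model", 4, 4),  # likelihood, impact on 1-5 scales
    ("underrepresented group in training data", 3, 3),
    ("stale labels in feedback loop", 2, 2),
]
tiers = tier_risks(risks)
```

In practice the scoring would come from a structured risk assessment rather than hard-coded numbers, but the same tiering shape lets a team direct attention to the top bucket first.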

December 22, 2023

During model training and implementation, the effectiveness of bias mitigation should be evaluated and adjusted. Bias identification processes should be assessed periodically and any gaps addressed. The model specification should document which sources of bias were identified and how, what mitigation techniques were used, and how successful mitigation was. A related performance assessment should be undertaken before model deployment.
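One concrete way to measure whether mitigation helped is to compare a fairness metric before and after. The sketch below uses the disparate impact ratio (each group's selection rate divided by the highest group's rate); the group names, decisions, and the choice of this particular metric are illustrative assumptions, not the only pre-deployment check one would run.

```python
# Hedged sketch: compare the disparate impact ratio before and after a
# mitigation step. Groups and decision data are illustrative.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 model decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact(outcomes):
    """Each group's selection rate relative to the best-treated group."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

before = {"group_a": [1, 1, 1, 0], "group_b": [1, 0, 0, 0]}
after_mitigation = {"group_a": [1, 1, 0, 0], "group_b": [1, 1, 0, 0]}

di_before = disparate_impact(before)           # group_b well below group_a
di_after = disparate_impact(after_mitigation)  # parity after mitigation
```

A ratio near 1.0 indicates similar treatment across groups; recording the before/after values in the model specification gives reviewers evidence of how successful mitigation was.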

December 22, 2023

AI systems’ learning capabilities evolve. External contexts such as climate, energy, health, economy, environment, political circumstances, and operating contexts also change. Therefore, both AI systems and the environment in which they operate should be continuously monitored and reassessed using appropriate metrics and mitigation processes, including methods to identify the potential appearance of new user groups […]
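One simple monitoring check for the "new user groups" concern above is to compare categorical values arriving in production against those seen during training. This is a minimal sketch under assumed field names and data; real monitoring would also track distributional drift, not just unseen categories.

```python
# Illustrative sketch: flag categorical values observed in production
# that never appeared in training data, as one signal that a new user
# group may have emerged. Field names and values are assumptions.

def find_new_groups(train_values, live_values):
    """Return values seen in production but absent from training."""
    return sorted(set(live_values) - set(train_values))

train_regions = ["north", "south", "east"]
live_regions = ["north", "west", "south", "offshore"]

new_groups = find_new_groups(train_regions, live_regions)
```

Any flagged value would then trigger reassessment: does the model perform acceptably for this group, and does the training data need refreshing?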

December 22, 2023

Partner with ethicists and antiracism experts in developing, training, testing, and implementing models. Recruit diverse and representative populations in training samples.

December 22, 2023

Rather than treating fairness as a separate initiative, apply fairness analysis throughout the entire process, continuously re-evaluating models from the perspective of fairness and inclusion. The use of model performance management tools or other methods should be considered to identify and mitigate any instances of intersectional unfairness. […]
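Intersectional unfairness can hide when attributes are checked one at a time. The sketch below computes accuracy per combination of two attributes; the attribute names, records, and the choice of accuracy as the metric are illustrative assumptions.

```python
# Hedged sketch: evaluate a metric (here, accuracy) at each intersection
# of two attributes, since a gap can appear at an intersection even when
# each attribute looks fine on its own. Data is illustrative.
from collections import defaultdict

def accuracy_by_intersection(records):
    """records: list of (attr_1, attr_2, y_true, y_pred) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for attr_1, attr_2, y_true, y_pred in records:
        key = (attr_1, attr_2)
        totals[key] += 1
        hits[key] += int(y_true == y_pred)
    return {key: hits[key] / totals[key] for key in totals}

records = [
    ("f", "young", 1, 1), ("f", "young", 0, 0),
    ("f", "older", 1, 0), ("f", "older", 0, 0),
    ("m", "young", 1, 1), ("m", "older", 1, 1),
]
acc = accuracy_by_intersection(records)  # ("f", "older") lags the rest
```

Running this kind of breakdown at each re-evaluation point, rather than once at the end, is what keeps fairness analysis embedded in the process rather than a separate initiative.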

December 22, 2023

Teams should engage with the complexity in which people experience values and technology in daily life. Values should be understood holistically and as being interrelated, rather than being analyzed in isolation from one another.

December 22, 2023

Subject matter experts should create and oversee effective validation processes that address bias-related challenges, including noisy labelling (for example, mislabeled samples in training data), the use of proxy variables, and system testing under idealized conditions unrepresentative of the real-world deployment context.
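One validation check of the kind described above is to surface potentially mislabeled samples for expert review. The sketch below flags samples whose label disagrees with the majority label among identical feature vectors; the feature encoding and data are illustrative assumptions, and real pipelines typically use model-based disagreement rather than exact feature matches.

```python
# Illustrative sketch: flag samples whose label disagrees with the
# majority label of samples sharing identical features, as candidates
# for expert re-labelling. Data and encoding are assumptions.
from collections import Counter, defaultdict

def flag_suspect_labels(samples):
    """samples: list of (features_tuple, label).
    Returns indices whose label disagrees with the group majority."""
    by_features = defaultdict(list)
    for idx, (features, label) in enumerate(samples):
        by_features[features].append((idx, label))
    suspects = []
    for group in by_features.values():
        majority, _ = Counter(label for _, label in group).most_common(1)[0]
        suspects.extend(idx for idx, label in group if label != majority)
    return sorted(suspects)

samples = [
    ((1, 0), "approve"), ((1, 0), "approve"), ((1, 0), "deny"),  # odd one out
    ((0, 1), "deny"),
]
suspects = flag_suspect_labels(samples)  # index 2 disagrees with its group
```

Flagged indices go to subject matter experts for review rather than being auto-corrected, since the minority label may in fact be the right one.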