Implement inclusive tech governance practices
December 22, 2023
Organizations should implement responsible AI leadership, drawing on existing resources such as UC Berkeley’s Equity Fluent Leadership Playbook. They should engage personnel to implement and monitor compliance with AI ethics principles, and train leaders to operationalize AI and data governance and measure engagement. Governance mechanisms and guidelines should be connected to lower-level development and design patterns; for example, the risk assessment framework can be […]
Follow AI risk assessment frameworks
December 22, 2023
Teams should develop diversity and inclusion policies and procedures addressing key roles, responsibilities, and processes within organisations adopting AI. Bias risk management policies should specify how risks of bias will be mapped and measured, and against what standards. AI risk practice and the associated checks and balances should be embedded and ingrained throughout all […]
Align AI bias mitigation with relevant legislation
December 22, 2023
Bias mitigation should be aligned with relevant existing and emerging legal standards. This includes national and state laws covering AI use in hiring, eligibility decisions (e.g., credit, housing, education), discrimination prohibitions (e.g., race, gender, religion, age, disability status), privacy, and unfair or deceptive practices.
Triage and tier AI bias risks
December 22, 2023
AI is not quarantined from negative societal realities such as discrimination and unfair practices. Consequently, it is arguably impossible to achieve zero risk of bias in an AI system. AI bias risk management should therefore aim to mitigate rather than avoid risks. Risks can be triaged and tiered, with resources allocated to the most material risks, […]
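The triage-and-tier idea above can be sketched in code. This is a minimal, illustrative example, assuming a simple likelihood × impact scoring scheme; the scales, thresholds, and risk names are assumptions, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class BiasRisk:
    name: str
    likelihood: int  # assumed scale: 1 (rare) .. 5 (frequent)
    impact: int      # assumed scale: 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        # Combined materiality score used for triage.
        return self.likelihood * self.impact

def tier(risk: BiasRisk) -> str:
    """Assign a tier from the combined score (thresholds are illustrative)."""
    if risk.score >= 15:
        return "high"
    if risk.score >= 8:
        return "medium"
    return "low"

# Hypothetical risks, triaged so the most material come first.
risks = [
    BiasRisk("hiring-model gender skew", likelihood=4, impact=5),
    BiasRisk("chatbot tone disparity", likelihood=2, impact=2),
]
triaged = sorted(risks, key=lambda r: r.score, reverse=True)
tiers = {r.name: tier(r) for r in triaged}
```

In practice an organisation would replace the numeric scales with its own risk taxonomy; the point is that tiering makes resource allocation explicit rather than ad hoc.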
Evaluate, adjust, and document bias identification and mitigation measures
December 22, 2023
During model training and implementation, bias mitigation measures should be evaluated for effectiveness and adjusted as needed. Bias identification processes should be assessed periodically and any gaps addressed. The model specification should document which sources of bias were identified and how, the mitigation techniques used, and how successful mitigation was. A related performance assessment should be undertaken before model deployment.
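One lightweight way to capture this documentation is a structured record attached to the model specification. The sketch below assumes a simple dict-based format; the field names and values are illustrative, not a prescribed schema.

```python
# Hypothetical bias section of a model specification.
bias_record = {
    "bias_sources_identified": ["historical label skew", "sampling imbalance"],
    "identification_method": "subgroup error analysis on validation data",
    "mitigation_techniques": ["reweighting training examples"],
    "mitigation_effectiveness": {
        # Gap in recall between best- and worst-served subgroups.
        "subgroup_recall_gap_before": 0.18,
        "subgroup_recall_gap_after": 0.06,
    },
    "pre_deployment_assessment": "passed",
}

def mitigation_improved(record: dict) -> bool:
    """Check that the documented mitigation actually narrowed the gap."""
    eff = record["mitigation_effectiveness"]
    return eff["subgroup_recall_gap_after"] < eff["subgroup_recall_gap_before"]
```

Keeping before/after measurements in the record makes the "how successful mitigation was" question answerable mechanically, both before deployment and at later audits.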
Employ model design techniques attuned to diversity and inclusion considerations
December 22, 2023
Diverse values and cultural perspectives from multiple stakeholders and populations should be codified in mathematical models and AI system design. Model design techniques are necessarily contextual, related to the type of AI technology, the purpose and scope of the system, how users will be impacted, and so forth. However, basic steps should include incorporating input […]
Understand AI systems through a holistic lens
December 22, 2023
Code is not the right level of abstraction at which to understand AI systems, whether for accountability or adaptability. Instead, systems should be analyzed in terms of inputs and outputs, overall design, embedded values, and how the software system fits with the overall institution deploying it.
Collect demographic data from users to aid bias monitoring
December 22, 2023
Monitoring for bias should collect demographic data from users, including age and gender identity, to enable the calculation of assessment measures.
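As an illustration of the assessment measures such demographic data enables, the sketch below computes per-group selection rates and a disparate-impact ratio (lowest group rate over highest). This is one common measure, not the only one, and the record format is an assumption.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, outcome) pairs, outcome in {0, 1}.

    Returns the positive-outcome rate per demographic group.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Toy data: group "A" is selected 2/3 of the time, group "B" 1/3.
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(records)
ratio = disparate_impact_ratio(rates)
```

Without the group labels collected at monitoring time, neither quantity can be computed, which is the practical point of the recommendation.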
Test and evaluate bias characteristics during deployment
December 22, 2023
The deploying organisation and other stakeholders should use documented model specifications to test and evaluate bias characteristics during deployment in the specific context.
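A deployment-time check of this kind can be sketched as comparing observed bias metrics against the thresholds recorded in the model specification. The metric names and threshold values below are hypothetical, chosen only to show the shape of the check.

```python
def check_against_spec(observed: dict, spec: dict, tolerance: float = 0.0) -> dict:
    """Return the metrics whose observed value falls below the spec threshold.

    observed: metric name -> value measured in the deployment context.
    spec:     metric name -> minimum acceptable value from the model spec.
    """
    breaches = {}
    for metric, threshold in spec.items():
        value = observed.get(metric)
        if value is not None and value < threshold - tolerance:
            breaches[metric] = (value, threshold)
    return breaches

# Hypothetical documented thresholds and deployment measurements.
spec = {"disparate_impact_ratio": 0.8, "min_group_recall": 0.7}
observed = {"disparate_impact_ratio": 0.75, "min_group_recall": 0.72}
breaches = check_against_spec(observed, spec)
```

Here the deployment context pushes the disparate-impact ratio below the documented floor while recall stays acceptable, so only the former is flagged for follow-up.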
Monitor and audit changing AI system impacts
December 22, 2023
It is critical to monitor the use of advanced analytics and AI technology to ensure that benefits accrue to diverse groups equitably. The scale of an AI system's impact can change rapidly and unevenly once deployed. Organisations should build in resilience, flexibility, and sensitivity so they can respond to such changes and ensure equitable and inclusive outcomes.
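Monitoring for such shifts can be as simple as tracking per-group outcome rates over successive time windows and alerting when the gap between groups widens. The sketch below assumes weekly windows and an alert threshold of 0.1; both are illustrative choices, not standards.

```python
def group_gap(window: dict) -> float:
    """window: group -> positive-outcome rate; returns the max-min gap."""
    rates = list(window.values())
    return max(rates) - min(rates)

def flag_widening(windows, alert_gap: float = 0.1):
    """Return indices of windows where the inter-group gap exceeds alert_gap."""
    return [i for i, w in enumerate(windows) if group_gap(w) > alert_gap]

# Hypothetical weekly monitoring data for two groups.
weekly = [
    {"A": 0.60, "B": 0.58},  # gap 0.02 -- within tolerance
    {"A": 0.62, "B": 0.49},  # gap 0.13 -- should be flagged
]
alerts = flag_widening(weekly)
```

A flagged window would trigger the kind of review and adjustment the recommendation calls for, rather than letting an uneven shift in impact go unnoticed.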