Organizations should implement responsible AI leadership, drawing on existing resources such as UC Berkeley’s Equity Fluent Leadership Playbook. They should designate personnel to implement and monitor compliance with AI ethics principles, and train leaders to operationalize AI and data governance and to measure engagement. Governance mechanisms and guidelines should be connected to lower-level development and design patterns. For example, the risk assessment framework can be […]
Teams should develop diversity and inclusion policies and procedures that address key roles, responsibilities, and processes within organizations adopting AI. Bias risk management policies should specify how risks of bias will be mapped and measured, and according to what standards. AI risk practice and the associated checks and balances should be embedded and ingrained throughout all […]
Bias mitigation should be aligned with relevant existing and emerging legal standards. This includes national and state laws covering AI use in hiring, eligibility decisions (e.g., credit, housing, education), discrimination prohibitions (e.g., race, gender, religion, age, disability status), privacy, and unfair or deceptive practices.
AI is not quarantined from negative societal realities such as discrimination and unfair practices. Consequently, achieving zero risk of bias in an AI system is arguably impossible, so AI bias risk management should aim to mitigate rather than eliminate risks. Risks can be triaged and tiered, with resources allocated to the most material risks, […]
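The triage-and-tiering idea above can be sketched as a minimal scoring model. This is only an illustration: the severity/likelihood scales, the tier thresholds, and the example risks are assumptions invented here, not part of any standard or framework.

```python
# Minimal sketch of bias-risk triage: score each identified risk by
# severity x likelihood, then tier it so attention flows to the most
# material risks first. All scales and cut-offs below are illustrative
# assumptions; a real policy would set them deliberately.

from dataclasses import dataclass

@dataclass
class BiasRisk:
    name: str
    severity: int    # 1 (minor) .. 5 (severe harm) -- hypothetical scale
    likelihood: int  # 1 (rare) .. 5 (near-certain) -- hypothetical scale

    @property
    def score(self) -> int:
        # Simple multiplicative score; other aggregation rules are possible.
        return self.severity * self.likelihood

def tier(risk: BiasRisk) -> str:
    # Illustrative thresholds for triage tiers.
    if risk.score >= 15:
        return "high"
    if risk.score >= 8:
        return "medium"
    return "low"

# Hypothetical examples, sorted so the most material risks surface first.
risks = [
    BiasRisk("resume screener disadvantages older applicants", 4, 4),
    BiasRisk("chatbot tone varies by dialect", 2, 3),
]
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{tier(r):6s} score={r.score:2d}  {r.name}")
```

The point of the sketch is the workflow, not the numbers: once risks are scored and tiered, review cadence and mitigation budget can be attached to each tier.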
Establishing policies, at either the organizational or industry level, for how biometric data and face and body images are collected and used may be the most effective way of mitigating harm to trans people, and also to people of marginalized races, ethnicities, and sexualities.