AI governance is defined as the collection of structures, processes, and regulatory and risk management frameworks used to ensure that the development and deployment of AI systems comply with laws and regulations and conform to standards, policies, and AI ethics principles. This definition focuses on AI governance specifically for conformance with diversity and inclusion principles. The governance component can be structured at the team, organization, and industry levels. Legal and risk frameworks should be developed and applied to guide inclusive practices across AI ecosystems. Governance structures must be human-centered to ensure the delivery of inclusive, reliable, safe, secure, and trustworthy AI systems.

G05: Implement inclusive tech governance practices

Organizations should implement responsible AI leadership, drawing on existing resources such as UC Berkeley’s Equity Fluent Leadership Playbook. They should engage personnel to implement and monitor compliance with AI ethics principles, and train leaders to operationalize AI and data governance and to measure engagement. Governance mechanisms and guidelines should be connected to lower-level development and design patterns. E.g., the risk assessment framework can be […]

G04: Follow AI risk assessment frameworks

Teams should develop diversity and inclusion policies and procedures addressing key roles, responsibilities, and processes within the organizations that are adopting AI. Bias risk management policies should specify how risks of bias will be mapped and measured, and according to what standards. AI risk practice and associated checks and balances should be embedded and ingrained throughout all […]
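To make this concrete, the sketch below shows one way a bias measurement policy might be operationalized: computing a disparate impact ratio for each group and flagging ratios below a policy-defined threshold. The 0.8 threshold echoes the widely cited four-fifths rule, but the data, names, and threshold here are illustrative assumptions rather than a mandated standard.

```python
# Hypothetical sketch: measuring one bias risk (disparate impact) against a
# policy-defined threshold. The data, names, and 0.8 threshold are illustrative
# assumptions, not a prescribed standard for any particular organization.
from dataclasses import dataclass


@dataclass
class BiasCheckResult:
    group: str
    selection_rate: float
    impact_ratio: float
    passes_policy: bool


def disparate_impact_check(outcomes_by_group: dict[str, list[int]],
                           reference_group: str,
                           threshold: float = 0.8) -> list[BiasCheckResult]:
    """Compare each group's positive-outcome rate to the reference group's.

    outcomes_by_group maps a group label to a list of 0/1 outcomes.
    A ratio below `threshold` flags a potential disparate-impact risk that
    the policy would require the team to investigate and document.
    """
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items() if o}
    ref_rate = rates[reference_group]
    results = []
    for group, rate in rates.items():
        ratio = rate / ref_rate if ref_rate > 0 else float("inf")
        results.append(BiasCheckResult(group, rate, ratio, ratio >= threshold))
    return results


if __name__ == "__main__":
    # Toy hiring-screen outcomes: 1 = advanced to interview, 0 = rejected.
    toy_outcomes = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # reference group
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
    }
    for r in disparate_impact_check(toy_outcomes, reference_group="group_a"):
        print(f"{r.group}: rate={r.selection_rate:.2f} "
              f"ratio={r.impact_ratio:.2f} passes={r.passes_policy}")
```

A policy built around such a check would also state which metrics apply to which use cases, which groups must be covered, and what documentation and escalation follow a failed check.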

G03: Align AI bias mitigation with relevant legislation

Bias mitigation should be aligned with relevant existing and emerging legal standards. This includes national and state laws covering AI use in hiring, eligibility decisions (e.g., credit, housing, education), discrimination prohibitions (e.g., race, gender, religion, age, disability status), privacy, and unfair or deceptive practices.

G02: Triage and tier AI bias risks

AI is not insulated from negative societal realities such as discrimination and unfair practices. Consequently, it is arguably impossible to achieve zero risk of bias in an AI system. AI bias risk management should therefore aim to mitigate rather than avoid risks. Risks can be triaged and tiered, with resources allocated to the most material risks, […]
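As an illustration of triage and tiering, the sketch below scores each identified bias risk by likelihood and impact and assigns it to a tier, so that attention goes first to the most material risks. The scoring scale, thresholds, tier labels, and example risks are assumptions made for illustration, not part of any established framework.

```python
# Hypothetical sketch: triaging bias risks into tiers from likelihood and
# impact scores (1-5 each). The scale, thresholds, and tier names are
# illustrative assumptions; real programmes would define their own.
from dataclasses import dataclass


@dataclass
class BiasRisk:
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def tier(self) -> str:
        if self.score >= 15:
            return "Tier 1: mitigate before release"
        if self.score >= 8:
            return "Tier 2: mitigate on a scheduled roadmap"
        return "Tier 3: monitor and document"


risks = [
    BiasRisk("Face matcher underperforms on darker skin tones", 4, 5),
    BiasRisk("Job-ad targeting skews by inferred gender", 3, 4),
    BiasRisk("Chatbot tone varies slightly by dialect", 2, 2),
]

# Allocate attention to the most material risks first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"[{risk.tier}] score={risk.score:2d}  {risk.description}")
```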

G01: Establish policies for how biometric data is collected and used

Establishing policies (at either the organizational or industry level) for how biometric data and face and body images are collected and used may be the most effective way of mitigating harm to trans people, as well as to people of marginalized races, ethnicities, and sexualities.

Figure: Artificial Intelligence Ecosystem process diagram, showing the application of the Human, Data, Process, System, and Governance elements to Diversity and Inclusion in Artificial Intelligence.