Implement inclusive tech governance practices
December 22, 2023
Organizations should implement responsible AI leadership, drawing on existing resources such as UC Berkeley’s Equity Fluent Leadership Playbook. They should engage personnel to implement and monitor compliance with AI ethics principles, and train leaders to operationalize AI and data governance and to measure engagement. Governance mechanisms and guidelines should be connected with lower-level development and design patterns. For example, the risk assessment framework can be […]
Follow AI risk assessment frameworks
December 22, 2023
Teams should develop diversity and inclusion policies and procedures addressing key roles, responsibilities, and processes within organizations that are adopting AI. Bias risk management policies should specify how risks of bias will be mapped and measured, and according to what standards. AI risk practice and associated checks and balances should be embedded and ingrained throughout all […]
Align AI bias mitigation with relevant legislation
December 22, 2023
Bias mitigation should be aligned with relevant existing and emerging legal standards. This includes national and state laws covering AI use in hiring, eligibility decisions (e.g., credit, housing, education), discrimination prohibitions (e.g., race, gender, religion, age, disability status), privacy, and unfair or deceptive practices.
Triage and tier AI bias risks
December 22, 2023
AI is not quarantined from negative societal realities such as discrimination and unfair practices. Consequently, it is arguably impossible to achieve zero risk of bias in an AI system. Therefore, AI bias risk management should aim to mitigate rather than avoid risks. Risks can be triaged and tiered; resources allocated to the most material risks, […]
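The triage-and-tier approach described above can be sketched as a simple scoring routine. This is a minimal illustration, not a prescribed method: the 1–5 likelihood and impact scales, the multiplicative score, and the tier cut-offs are all illustrative assumptions an organization would calibrate to its own context.

```python
# Hypothetical sketch of bias-risk triage: score each risk by
# likelihood x impact (both on an assumed 1-5 scale), assign a tier,
# and order risks so resources go to the most material ones first.

def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact (each 1-5) into a single score."""
    return likelihood * impact

def risk_tier(score: int) -> str:
    """Map a score (1-25) to a triage tier; cut-offs are assumptions."""
    if score >= 15:
        return "high"    # most material risks: mitigate first
    if score >= 6:
        return "medium"
    return "low"

def triage(risks: list[dict]) -> list[dict]:
    """Return risks scored, tiered, and sorted most material first."""
    scored = [
        {**r, "score": risk_score(r["likelihood"], r["impact"])}
        for r in risks
    ]
    for r in scored:
        r["tier"] = risk_tier(r["score"])
    return sorted(scored, key=lambda r: r["score"], reverse=True)
```

A register built this way makes the "mitigate rather than avoid" stance concrete: low-tier risks are documented and accepted, while high-tier risks get mitigation resources.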
Evaluate, adjust, and document bias identification and mitigation measures
December 22, 2023
During model training and implementation, the effectiveness of bias mitigation should be evaluated and adjusted. Periodically assess bias identification processes and address any gaps. The model specification should document which sources of bias were identified and how, which mitigation techniques were used, and how successful mitigation was. A related performance assessment should be undertaken before model deployment.
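One way to make "evaluate and document" concrete is to compute a fairness metric before and after mitigation and record the result in the model specification. The sketch below uses the demographic parity difference (the gap between group selection rates) purely as an example metric; the record fields and function names are assumptions for illustration, not a standard schema.

```python
# Illustrative sketch: measure bias before/after mitigation and record it.
# Demographic parity difference = max group selection rate - min rate;
# other metrics (equalized odds, etc.) could be substituted.

def selection_rate(preds: list[int], groups: list[str], group: str) -> float:
    """Fraction of positive predictions within one group."""
    sel = [p for p, g in zip(preds, groups) if g == group]
    return sum(sel) / len(sel)

def demographic_parity_diff(preds: list[int], groups: list[str]) -> float:
    """Gap between the highest and lowest group selection rates."""
    rates = {g: selection_rate(preds, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

def document_mitigation(before: float, after: float,
                        bias_source: str, technique: str) -> dict:
    """Assemble a model-spec entry describing the mitigation outcome."""
    return {
        "bias_source": bias_source,
        "mitigation_technique": technique,
        "metric": "demographic_parity_difference",
        "before": before,
        "after": after,
        "successful": after < before,
    }
```

Running this at each evaluation checkpoint yields the audit trail the excerpt calls for: what was identified, what was tried, and whether it worked.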
Undertake holistic monitoring of external impacts
December 22, 2023
AI systems’ learning capabilities evolve. External contexts such as climate, energy, health, economy, environment, political circumstances, and operating contexts also change. Therefore, both AI systems and the environment in which they operate should be continuously monitored and reassessed using appropriate metrics and mitigation processes, including methods to identify the potential appearance of new user groups […]
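The continuous monitoring described above can include a distribution-shift check on user or input data, plus a check for categories never seen at training time (the "new user groups" the excerpt mentions). The sketch below uses the Population Stability Index (PSI); the 0.25 alert threshold is a common heuristic, not a standard, and the function names are illustrative.

```python
import math

def psi(expected_props: list[float], observed_props: list[float],
        eps: float = 1e-6) -> float:
    """Population Stability Index over matching bins.
    Values near 0 mean stable; > 0.25 is a common (heuristic)
    signal of significant shift warranting reassessment."""
    total = 0.0
    for e, o in zip(expected_props, observed_props):
        e = max(e, eps)  # guard against log(0) on empty bins
        o = max(o, eps)
        total += (o - e) * math.log(o / e)
    return total

def new_groups(expected_groups: list[str],
               observed_groups: list[str]) -> list[str]:
    """Group labels seen in production but absent from training data."""
    return sorted(set(observed_groups) - set(expected_groups))
```

In practice, such checks would run on a schedule against production traffic, with alerts feeding back into the bias risk register.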
Assess the suitability of Human-centered design (HCD) methodology for AI system development
December 22, 2023
A Human-centered design (HCD) methodology for the development of AI systems, based on International Organization for Standardization (ISO) standard 9241-210:2019, could comprise: • Defining the Context of Use, including operational environment, user characteristics, tasks, and social environment; • Determining the User & Organizational Requirements, including business requirements, user requirements, and technical requirements; • […]
Understand and adhere to data sovereignty praxis
December 22, 2023
The concept of, and practices supporting, data sovereignty are a critical element of the AI ecosystem. Data sovereignty covers considerations of the “use, management and ownership of AI to house, analyze and disseminate valuable or sensitive data”. Although definitions are context-dependent, operationally data sovereignty calls for stakeholders within an AI ecosystem, and other relevant representatives from outside stakeholder cohorts, to be included as partners throughout the AI-LC. Data sovereignty should be explored from and with the perspectives of those whose data is being used. These alternative and diverse perspectives can be captured and fed back into AI literacy programs, exemplifying how people can affect and enrich AI both conceptually and materially. Various Indigenous technologists, researchers, artists, and activists have advanced the concept of, and protocols for, Indigenous data sovereignty in AI. This involves “Indigenous control over the protection and use of data that is collected from our communities, including statistics, cultural knowledge and even user data,” and moving beyond the representation of impacted users to “maximising the generative capacity of truly diverse groups.”
Establish clear procedures for ensuring data privacy and offering opt-out options
December 22, 2023
Data privacy should be at the forefront, particularly when data from marginalized populations are involved. End users should be offered choices about privacy and ethics in the collection, storage, and use of data. Opt-out methods for data collected for model training and model application should be offered where possible.
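An opt-out procedure like the one described can be enforced at the data-pipeline level by filtering out records whose owners have not consented to training use. The sketch below is a minimal illustration; the `consent` field name and its structure are assumptions, and a real system would also propagate opt-outs to already-trained models and downstream copies.

```python
# Hypothetical sketch: honor opt-outs before data reaches model training.
# Records without an explicit training consent flag are excluded
# (deny-by-default), which is the conservative reading of "opt-out
# where possible".

def filter_opted_out(records: list[dict]) -> list[dict]:
    """Keep only records whose owner consented to training use."""
    return [
        r for r in records
        if r.get("consent", {}).get("training", False)
    ]
```

Applying the same filter at both collection time and training time keeps the two consent surfaces the excerpt distinguishes (data collected for training vs. model application) in sync.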