
December 22, 2023

Organizations should implement responsible AI leadership, drawing on existing resources such as UC Berkeley’s Equity Fluent Leadership Playbook. They should engage personnel to implement and monitor compliance with AI ethics principles, and train leaders to operationalize AI and data governance and measure engagement. Governance mechanisms and guidelines should be connected with lower-level development and design patterns; e.g., the risk assessment framework can be […]
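
One way to make that connection concrete (a minimal sketch; the RiskItem structure, principle names, and check identifiers are illustrative assumptions rather than anything specified in the source) is a machine-readable risk register that ties each governance-level principle to lower-level development checks:

    from dataclasses import dataclass, field

    @dataclass
    class RiskItem:
        # One risk-register entry linking a governance principle
        # to concrete development/design checks (hypothetical fields).
        principle: str
        risk: str
        dev_checks: list[str] = field(default_factory=list)

    register = [
        RiskItem(
            principle="Fairness",
            risk="Model underperforms for under-represented subgroups",
            dev_checks=[
                "disaggregated evaluation run in CI",
                "bias review sign-off before each release",
            ],
        ),
    ]

    for item in register:
        print(f"{item.principle}: {item.risk} -> checks: {item.dev_checks}")

Encoding the register this way lets development teams query it from build tooling, so governance guidance surfaces at the design level rather than remaining a standalone policy document.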

December 22, 2023

Teams should develop diversity and inclusion policies and procedures addressing key roles, responsibilities, and processes within the organisations that are adopting AI. Bias risk management policies should specify how risks of bias will be mapped and measured, and according to what standards. AI risk practice and associated checks and balances should be embedded and ingrained throughout all […]

December 22, 2023

Code is not the right level of abstraction at which to understand AI systems, whether for accountability or adaptability. Instead, systems should be analyzed in terms of inputs and outputs, overall design, embedded values, and how the software system fits with the overall institution deploying it.

December 22, 2023

Partner with ethicists and antiracism experts in developing, training, testing, and implementing models. Recruit diverse and representative populations in training samples.

December 22, 2023

A human-centered design (HCD) methodology for the development of AI systems, based on International Organization for Standardization (ISO) standard 9241-210:2019, could comprise:
• Defining the Context of Use, including operational environment, user characteristics, tasks, and social environment;
• Determining the User & Organizational Requirements, including business requirements, user requirements, and technical requirements;
• […]
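
To show how these HCD artifacts might be recorded in machine-readable form (a minimal sketch; the class names, fields, and the triage example are illustrative assumptions, not terms taken from ISO 9241-210):

    from dataclasses import dataclass, field

    @dataclass
    class ContextOfUse:
        # Stage 1: define the context of use.
        operational_environment: str
        user_characteristics: list[str]
        tasks: list[str]
        social_environment: str

    @dataclass
    class Requirements:
        # Stage 2: determine user and organizational requirements.
        business: list[str] = field(default_factory=list)
        user: list[str] = field(default_factory=list)
        technical: list[str] = field(default_factory=list)

    # Hypothetical example for a triage-support AI system.
    ctx = ContextOfUse(
        operational_environment="hospital emergency department",
        user_characteristics=["triage nurses", "varying AI familiarity"],
        tasks=["prioritize incoming patients"],
        social_environment="high-pressure, shift-based teamwork",
    )
    reqs = Requirements(
        business=["reduce average wait time"],
        user=["explanations for each recommendation"],
        technical=["works offline during network outages"],
    )
    print(ctx.tasks, reqs.user)

Keeping the two stages as separate structures mirrors the standard's separation between understanding the context of use and deriving requirements from it.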

December 22, 2023

Teams should engage with the complex ways in which people experience values and technology in daily life. Values should be understood holistically and as interrelated, rather than analyzed in isolation from one another.

December 22, 2023

For example, before embedding gender classification into a facial analysis service or incorporating gender into image labelling, it is important to consider what purpose gender is serving. It is also important to consider how gender will be defined, and whether that definition is unnecessarily exclusionary (for example, of non-binary people). Therefore, stakeholders involved in the development of […]

December 22, 2023

Context should be taken into consideration during model selection to avoid or limit biased results for sub-populations. Caution should be taken with systems designed to use aggregated data about groups to predict individual behaviour, as biased outcomes can occur. “Unintentional weightings of certain factors can cause algorithmic results that exacerbate and reinforce societal inequities,” for example, predicting educational performance based on an individual’s racial or ethnic identity. Observed context drift in data should be documented via data transparency mechanisms capturing where and how the data is used and its appropriateness for that context.

Harvard researchers have expanded the definition of data transparency, noting that some raw data sets are too sensitive to be released publicly, and incorporating guidance on development processes to reduce the risk of harmful and discriminatory impacts:
• “In addition to releasing training and validation data sets whenever possible, agencies shall make publicly available summaries of relevant statistical properties of the data sets that can aid in interpreting the decisions made using the data, while applying state-of-the-art methods to preserve the privacy of individuals.
• When appropriate, privacy-preserving synthetic data sets can be released in lieu of real data sets to expose certain features of the data if real data sets are sensitive and cannot be released to the public.”

Teams should use transparency frameworks and independent standards; conduct and publish the results of independent audits; and open non-sensitive data and source code to outside inspection.
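
As one rough illustration of releasing statistical summaries while preserving individual privacy (a minimal sketch of the Laplace mechanism over a single bounded numeric column; the column, bounds, and epsilon are assumptions for the example, not values from the source):

    import numpy as np

    def dp_mean(values, lower, upper, epsilon, rng=None):
        # Release a differentially private mean of a bounded column via
        # the Laplace mechanism; changing one record moves the mean by
        # at most (upper - lower) / n, which sets the noise scale.
        rng = rng or np.random.default_rng()
        clipped = np.clip(values, lower, upper)  # enforce the stated bounds
        n = len(clipped)
        sensitivity = (upper - lower) / n
        noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
        return clipped.mean() + noise

    # Hypothetical transparency summary for an assumed "age" column.
    ages = np.array([23, 35, 41, 29, 52, 38, 47, 31])
    print(dp_mean(ages, lower=18, upper=90, epsilon=1.0))

A real release would cover all statistical properties relevant to interpreting decisions (distributions, subgroup counts, missingness) and would account for the total privacy budget spent across every published statistic.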

December 22, 2023

Access, including cloud and offline data hosting, should be attended to because government and industry generally build and manage these resources on their own terms. Access is directly connected to capacity building (for teams and stakeholders) and to data sovereignty issues.

December 22, 2023

Data sovereignty, as both a concept and a set of supporting practices, is a critical element in the AI ecosystem. It covers considerations of the “use, management and ownership of AI to house, analyze and disseminate valuable or sensitive data”. Although definitions are context-dependent, operationally data sovereignty requires that stakeholders within an AI ecosystem, and other relevant representatives from outside stakeholder cohorts, be included as partners throughout the AI life cycle (AI-LC). Data sovereignty should be explored from and with the perspectives of those whose data is being used. These alternative and diverse perspectives can be captured and fed back into AI literacy programs, exemplifying how people can affect and enrich AI both conceptually and materially. Various Indigenous technologists, researchers, artists, and activists have advanced the concept of, and protocols for, Indigenous data sovereignty in AI. This involves “Indigenous control over the protection and use of data that is collected from our communities, including statistics, cultural knowledge and even user data,” and moving beyond the representation of impacted users to “maximising the generative capacity of truly diverse groups.”