Recognize relationships between access issues, infrastructure, capacity building, and data sovereignty
December 22, 2023
Access, including cloud and offline data hosting, requires attention because government and industry generally build and manage these resources on their own terms. Access is directly connected to capacity building (for teams and stakeholders) and to data sovereignty issues.
Understand and adhere to data sovereignty praxis
December 22, 2023
The concept of data sovereignty, and the practices that support it, is a critical element in the AI ecosystem. It covers considerations of the “use, management and ownership of AI to house, analyze and disseminate valuable or sensitive data”. Although definitions are context-dependent, operationally data sovereignty requires that stakeholders within an AI ecosystem, and other relevant representatives from outside stakeholder cohorts, be included as partners throughout the AI-LC. Data sovereignty should be explored from and with the perspectives of those whose data is being used. These alternative and diverse perspectives can be captured and fed back into AI Literacy programs, exemplifying how people can affect and enrich AI both conceptually and materially. Various Indigenous technologists, researchers, artists, and activists have advanced the concept of, and protocols for, Indigenous data sovereignty in AI. This involves “Indigenous control over the protection and use of data that is collected from our communities, including statistics, cultural knowledge and even user data,” and moving beyond the representation of impacted users to “maximising the generative capacity of truly diverse groups.”
Establish clear procedures for ensuring data privacy and offering opt-out options
December 22, 2023
Data privacy should be at the forefront, particularly when data from marginalized populations are involved. End users should be offered choices about privacy and ethics in the collection, storage, and use of data. Opt-out mechanisms should be offered, where possible, both for data collected for model training and for data used in model application.
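As a minimal sketch of how such an opt-out could be enforced in practice, the Python snippet below filters training records against a hypothetical consent registry. The `ConsentRegistry` class and the field names (`user_id`, `allow_training`) are illustrative assumptions, not a prescribed implementation; the key design choice shown is defaulting to exclusion when no explicit consent is recorded.

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRegistry:
    """Hypothetical registry of per-user consent choices (illustrative only)."""
    # Maps user_id -> True if the user allows their data in model training.
    allow_training: dict[str, bool] = field(default_factory=dict)

    def opt_out(self, user_id: str) -> None:
        """Record that a user has opted out of model training."""
        self.allow_training[user_id] = False

    def permits_training(self, user_id: str) -> bool:
        """Default to exclusion when no explicit consent is recorded."""
        return self.allow_training.get(user_id, False)


def filter_training_records(records: list[dict], registry: ConsentRegistry) -> list[dict]:
    """Keep only records whose owners have consented to training use."""
    return [r for r in records if registry.permits_training(r["user_id"])]


# Example usage with assumed identifiers
registry = ConsentRegistry(allow_training={"u1": True, "u2": True})
registry.opt_out("u2")
records = [{"user_id": "u1", "text": "..."}, {"user_id": "u2", "text": "..."}]
print(filter_training_records(records, registry))  # only u1's record remains
```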
Involve stakeholders and ‘non-experts’ in the selection, collection, and analysis of demographically representative qualitative data
December 22, 2023
Representatives of impacted stakeholders should be identified and partnered with on data collection methods. This is particularly important when identifying new or non-traditional data-gathering resources and methods. To increase representativeness and responsible interpretation, include diverse viewpoints, not only those of experts, when collecting and analyzing specific datasets. Technology or datasets deemed non-problematic by one group may be judged disastrous by others. Training datasets should be demographically representative of the cohorts or communities on whom the AI system will have an impact.
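One way to make “demographically representative” concrete is to compare the demographic composition of a dataset against reference population figures. The sketch below does this with plain Python counters; the group labels, reference shares, and 10% tolerance are illustrative assumptions, and in practice the reference figures and acceptable gaps should be agreed with the stakeholders described above.

```python
from collections import Counter


def representation_gaps(sample_groups, population_shares, tolerance=0.10):
    """Compare each group's share in the sample with its share in the
    reference population and flag gaps larger than `tolerance`."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    flagged = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gap = observed - expected
        if abs(gap) > tolerance:
            flagged[group] = {"observed": round(observed, 3),
                              "expected": expected,
                              "gap": round(gap, 3)}
    return flagged


# Illustrative reference shares (assumed, not real census figures)
population_shares = {"group_a": 0.45, "group_b": 0.35, "group_c": 0.20}
sample = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5
print(representation_gaps(sample, population_shares))
# Flags group_a as over-represented and group_c as under-represented.
```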
Establish a clear rationale for data collection
November 24, 2023
For data collection involving human subjects, why, how, and by whom data is being collected should be established in the Pre-Design stage. Potential data challenges or data bias issues that have implications for diversity and inclusion should be identified by key stakeholders and data scientists. For example, in the health application domain, diverse data sources […]
Consider multiple trade-offs
November 24, 2023
In the design stage, decisions should weigh the socio-technical implications of the multiple trade-offs inherent in AI systems. These trade-offs include the system’s predictive accuracy, which is measured by several metrics, including accuracies within sub-populations or across different use cases (partial and total accuracies). Fairness outcomes for different sub-groups of people the […]
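To make the distinction between total and partial (sub-population) accuracies concrete, the sketch below computes overall accuracy alongside per-group accuracies and the gap between the best- and worst-served groups. The group labels and the equal-accuracy notion of fairness are illustrative assumptions; other fairness metrics (such as demographic parity or equalized odds) trade off differently against overall accuracy.

```python
def total_and_partial_accuracy(y_true, y_pred, groups):
    """Compute overall (total) accuracy and per-group (partial) accuracies."""
    correct = [int(t == p) for t, p in zip(y_true, y_pred)]
    total_acc = sum(correct) / len(correct)
    partial = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        partial[g] = sum(correct[i] for i in idx) / len(idx)
    # A simple fairness signal: the spread between best- and worst-served groups.
    gap = max(partial.values()) - min(partial.values())
    return total_acc, partial, gap


# Illustrative predictions for two assumed sub-populations "a" and "b"
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(total_and_partial_accuracy(y_true, y_pred, groups))
# A reasonable total accuracy can mask a sizeable accuracy gap between groups.
```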
Establish inclusive and informed product development, training, evaluation, and sign-off
November 24, 2023
New stakeholders should be brought in for iterative rounds of product development, training, and testing, and beta groups should be recruited for test deployments. User groups should reflect different needs and abilities. Fresh perspectives contribute to the evaluation of both the AI system’s functionality and, importantly, its level and quality of inclusivity. New or emergent […]
Reflect collectively on key questions – AI: why, for whom, and by whom?
November 24, 2023
Key questions about why an AI project should happen, whom it is for, and by whom it should be developed should be asked, answered, and revisited collectively, using a diversity and inclusion lens, during the AI-LC. Views from stakeholders and representatives of impacted communities should be sought. Although it might be advantageous that […]
Identify stakeholder knowledge and needs
November 24, 2023
Stakeholders generally hold specific knowledge, expertise, concerns, and objectives that can contribute to effective AI system design. Stakeholder expectations, needs, and feedback should be considered throughout the AI-LC. Cohorts include government regulatory bodies, civil society organizations monitoring AI impact and advocating for users’ rights, industry, and people affected by AI systems. Some groups hold knowledge or expertise that is valuable for AI system design but have no needs or requirements of their own for the system, because they will not be its users or consumers; both these groups and the system’s eventual users need to be involved.