December 22, 2023

Monitoring for bias should include collecting demographic data from users, such as age and gender identity, to enable the calculation of assessment measures.
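
As a minimal illustration of how such assessment measures might be computed once demographic data is collected (the column names and example records below are assumptions, not part of the guidance), basic outcome measures can be disaggregated by group with a few lines of pandas:

# Minimal sketch, not from the guidance: disaggregating simple assessment
# measures (selection rate, accuracy) by self-reported demographic attributes.
import pandas as pd

def per_group_measures(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute basic outcome measures for each value of a demographic attribute."""
    return df.groupby(group_col).agg(
        n=("prediction", "size"),               # group size
        selection_rate=("prediction", "mean"),  # share of positive predictions
        accuracy=("correct", "mean"),           # share of correct predictions
    )

records = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 1],
    "correct":    [1, 1, 0, 1, 1, 1],
    "age_band":   ["18-24", "18-24", "25-34", "25-34", "35-44", "35-44"],
    "gender":     ["woman", "non-binary", "man", "woman", "man", "woman"],
})

print(per_group_measures(records, "age_band"))
print(per_group_measures(records, "gender"))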

December 22, 2023

AI systems’ learning capabilities evolve. External contexts such as climate, energy, health, the economy, the environment, political circumstances, and operating conditions also change. Therefore, both AI systems and the environment in which they operate should be continuously monitored and reassessed using appropriate metrics and mitigation processes, including methods to identify the potential appearance of new user groups […]
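
One possible shape for such monitoring, sketched here purely as an illustration (the group labels, window sizes, and 10% threshold are assumptions), is to compare the demographic mix in a recent window of users against a reference window and flag groups that are new or whose share has shifted:

# Illustrative monitoring step: compare the demographic mix in a recent window
# against a reference window and flag new or substantially shifted groups.
from collections import Counter

def group_shares(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def monitor_user_groups(reference, current, shift_threshold=0.10):
    ref, cur = group_shares(reference), group_shares(current)
    return {
        "new_groups": [g for g in cur if g not in ref],
        "shifted_shares": {
            g: round(cur[g] - ref.get(g, 0.0), 2)
            for g in cur
            if abs(cur[g] - ref.get(g, 0.0)) > shift_threshold
        },
    }

reference_window = ["18-24"] * 40 + ["25-34"] * 40 + ["35-44"] * 20
current_window = ["18-24"] * 20 + ["25-34"] * 45 + ["65+"] * 35

print(monitor_user_groups(reference_window, current_window))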

December 22, 2023

Rather than thinking of fairness as a separate initiative, it’s important to apply fairness analysis throughout the entire process, making sure to continuously re-evaluate the models from the perspective of fairness and inclusion. The use of Model Performance Management tools or other methods should be considered to identify and mitigate any instances of intersectional unfairness. […]
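
A hypothetical example of one such re-evaluation step, independent of any particular Model Performance Management tool (the data and column names below are invented for illustration), is to compute measures for intersections of attributes rather than for each attribute alone, so intersectional gaps are not averaged away:

# Hypothetical re-evaluation step: compute measures per intersection of attributes.
import pandas as pd

results = pd.DataFrame({
    "gender":   ["woman", "woman", "man", "man", "woman", "man"],
    "age_band": ["18-24", "65+", "18-24", "65+", "65+", "18-24"],
    "selected": [1, 0, 1, 1, 0, 1],
})

intersectional = (
    results.groupby(["gender", "age_band"])["selected"]
           .agg(n="size", selection_rate="mean")
)
print(intersectional)

# A simple disparity summary: gap between the best- and worst-off subgroup.
gap = intersectional["selection_rate"].max() - intersectional["selection_rate"].min()
print(f"Largest intersectional selection-rate gap: {gap:.2f}")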

December 22, 2023

Subject matter experts should create and oversee effective validation processes that address bias-related challenges, including noisy labelling (for example, mislabelled samples in training data), the use of proxy variables, and system tests performed under optimal conditions that are unrepresentative of the real-world deployment context.
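
For instance, one crude validation check for proxy variables, sketched here under the assumption of tabular training data in pandas (the threshold and column names are illustrative, and correlating against category codes is only a rough heuristic), is to flag features strongly associated with a protected attribute:

# Crude illustrative check for potential proxy variables in tabular training data.
import pandas as pd

def flag_potential_proxies(df: pd.DataFrame, protected: str, threshold: float = 0.8) -> dict:
    """Flag numeric features strongly correlated with a protected attribute."""
    protected_codes = df[protected].astype("category").cat.codes
    flags = {}
    for feature in df.columns.drop(protected):
        if pd.api.types.is_numeric_dtype(df[feature]):
            corr = df[feature].corr(protected_codes)
            if abs(corr) >= threshold:
                flags[feature] = round(corr, 2)
    return flags

data = pd.DataFrame({
    "postcode_index": [1, 1, 2, 2, 3, 3],       # hypothetical proxy candidate
    "years_employed": [2, 7, 3, 9, 4, 6],
    "group":          ["a", "a", "b", "b", "c", "c"],
})
print(flag_potential_proxies(data, protected="group"))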

December 22, 2023

For example, before embedding gender classification into a facial analysis service or incorporating gender into image labelling, it is important to consider what purpose gender is serving. It is also important to consider how gender will be defined, and whether that definition is unnecessarily exclusionary (for example, of people who are non-binary). Therefore, stakeholders involved in the development of […]

December 22, 2023

At the start of the Pre-Design stage, stakeholders should identify possible systemic problems of bias such as racism, sexism, or ageism that have implications for diversity and inclusion. Main decision-makers and power holders should be identified, as this can reflect systemic biases and limited viewpoints within the organisation. A sole person responsible for algorithmic bias […]

December 22, 2023

Apply more inclusive and socially just data labelling methodologies such as the Intersectional Labeling Methodology to address gender bias. Rather than relying on static, binary gender in a face classification infrastructure, application designers should embrace, and demand improvements to, feature-based labelling. For instance, labels based on neutral performative markers (e.g., beard, makeup, dress) could replace gender classification in the facial analysis model, allowing third parties and individuals who come into contact with facial analysis applications to embrace their own interpretations of those features. Instead of focusing on improving methods of gender classification, application designers could use labelling alongside other qualitative data such as Instagram captions to formulate more precise notions about user identity.
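
A sketch of what such a feature-based label record might look like, assuming a hypothetical schema rather than any established standard: neutral, observable markers are stored instead of a binary gender field, and any identity description is optional and self-supplied.

# Hypothetical feature-based label record (schema is an assumption).
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class FeatureLabels:
    image_id: str
    markers: Set[str] = field(default_factory=set)      # e.g. {"beard", "makeup", "dress"}
    self_described_identity: Optional[str] = None       # supplied by the person, if at all

label = FeatureLabels(image_id="img_0042", markers={"beard", "glasses"})
print(label)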

December 22, 2023

Developers should attend to and document the social descriptors (for example, age, gender, and geolocation) when scraping data from different sources including websites, databases, social media platforms, enterprise applications, or legacy systems. Context is important when the same data is later used for different purposes such as asking a new question about an existing data set. A compatibility analysis should be performed to ensure that potential sources of bias are identified, and mitigation plans made. This analysis would capture context shifts in new uses of data sets, identifying whether or how these could produce specific bias issues.
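
One way this documentation and compatibility analysis could be operationalised is sketched below; the fields and checks are assumptions, not a prescribed standard.

# Hedged sketch of documentation and a compatibility check for scraped data.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetRecord:
    source: str                                                    # e.g. website, database, social platform
    collected_for: str                                             # original purpose or question
    social_descriptors: List[str] = field(default_factory=list)   # e.g. age, gender, geolocation
    known_bias_notes: List[str] = field(default_factory=list)

def compatibility_check(record: DatasetRecord, new_purpose: str) -> List[str]:
    """Return open issues to resolve before reusing the data for a new purpose."""
    issues = []
    if new_purpose != record.collected_for:
        issues.append(
            f"Context shift: collected for '{record.collected_for}', "
            f"now proposed for '{new_purpose}'; re-assess sampling and consent."
        )
    if not record.social_descriptors:
        issues.append("No social descriptors documented; representation cannot be audited.")
    return issues

scrape = DatasetRecord(
    source="public forum posts",
    collected_for="topic classification",
    social_descriptors=["age", "geolocation"],
)
print(compatibility_check(scrape, new_purpose="credit-risk scoring"))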

December 22, 2023

Dataset suitability factors should be assessed, including the statistical methods used to mitigate representation issues, the socio-technical context of deployment, and the interaction of human factors with the AI system. It should also be asked whether suitable datasets exist that fit the purpose of the various applications, domains, and tasks planned for the AI system.
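
As a toy illustration of one suitability factor, representation can be checked by comparing group shares in a candidate dataset against a reference population; all figures below are hypothetical.

# Toy suitability check: surface representation gaps relative to a reference population.
def representation_gaps(dataset_counts: dict, reference_shares: dict) -> dict:
    total = sum(dataset_counts.values())
    return {
        group: round(dataset_counts.get(group, 0) / total - ref_share, 3)
        for group, ref_share in reference_shares.items()
    }

dataset_counts = {"18-24": 700, "25-44": 250, "45+": 50}        # candidate dataset
reference_shares = {"18-24": 0.30, "25-44": 0.40, "45+": 0.30}  # reference population
print(representation_gaps(dataset_counts, reference_shares))
# {'18-24': 0.4, '25-44': -0.15, '45+': -0.25} -> the 45+ group is heavily under-represented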

December 22, 2023

Context should be taken into consideration during model selection to avoid or limit biased results for sub-populations. Caution should be taken in systems designed to use aggregated data about groups to predict individual behaviour, as biased outcomes can occur. “Unintentional weightings of certain factors can cause algorithmic results that exacerbate and reinforce societal inequities,” for example, predicting educational performance based on an individual’s racial or ethnic identity. Observed context drift in data should be documented via data transparency mechanisms capturing where and how the data is used and its appropriateness for that context. Harvard researchers have expanded the definition of data transparency, noting that some raw data sets are too sensitive to be released publicly, and incorporating guidance on development processes to reduce the risk of harmful and discriminatory impacts:

• “In addition to releasing training and validation data sets whenever possible, agencies shall make publicly available summaries of relevant statistical properties of the data sets that can aid in interpreting the decisions made using the data, while applying state-of-the-art methods to preserve the privacy of individuals.

• When appropriate, privacy-preserving synthetic data sets can be released in lieu of real data sets to expose certain features of the data if real data sets are sensitive and cannot be released to the public.”

Teams should use transparency frameworks and independent standards, conduct and publish the results of independent audits, and open non-sensitive data and source code to outside inspection.
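
A toy sketch of releasing only aggregate statistical properties rather than raw records follows; the Laplace noise added to counts is a simplistic stand-in for the state-of-the-art privacy-preserving methods the guidance refers to, and the records, group key, and epsilon value are assumptions.

# Toy sketch: release aggregate statistics, not raw records.
import random

def noisy_count(count: int, epsilon: float = 1.0) -> float:
    """Add Laplace noise of scale 1/epsilon to a count before release (toy example)."""
    # The difference of two exponential draws with rate epsilon is Laplace(0, 1/epsilon).
    return count + random.expovariate(epsilon) - random.expovariate(epsilon)

def publishable_summary(records: list, group_key: str) -> dict:
    """Aggregate per-group counts; only noisy aggregates leave the organisation."""
    counts = {}
    for record in records:
        counts[record[group_key]] = counts.get(record[group_key], 0) + 1
    return {group: round(noisy_count(count), 1) for group, count in counts.items()}

records = [{"age_band": "18-24"}, {"age_band": "18-24"}, {"age_band": "65+"}]
print(publishable_summary(records, "age_band"))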