
December 22, 2023

Apply more inclusive and socially just data labelling methodologies, such as the Intersectional Labeling Methodology, to address gender bias. Rather than relying on static, binary gender in a face classification infrastructure, application designers should embrace and demand improvements to feature-based labelling. For instance, labels based on neutral performative markers (e.g., beard, makeup, dress) could replace gender classification in the facial analysis model, allowing third parties and individuals who come into contact with facial analysis applications to apply their own interpretations of those features. Instead of focusing on improving methods of gender classification, application designers could use such labelling alongside other qualitative data, such as Instagram captions, to formulate more precise notions of user identity.
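
As a rough illustration of how feature-based labelling might be structured, the following Python sketch records neutral performative markers instead of inferring a gender class; the FeatureBasedLabel schema and the marker vocabulary are hypothetical illustrations, not drawn from the methodology cited above.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical label record: instead of a binary "gender" field, the facial
# analysis output stores only neutral, observable performative markers,
# leaving interpretation of those features to individuals and third parties.
@dataclass
class FeatureBasedLabel:
    image_id: str
    markers: List[str] = field(default_factory=list)  # e.g., "beard", "makeup", "dress"

def label_image(image_id: str, detected_markers: List[str]) -> FeatureBasedLabel:
    """Record observable markers only; no gender class is ever assigned."""
    allowed = {"beard", "makeup", "dress", "glasses", "headwear"}  # illustrative vocabulary
    return FeatureBasedLabel(
        image_id=image_id,
        markers=[m for m in detected_markers if m in allowed],
    )

# Usage: a downstream application receives markers, not a gender label.
record = label_image("img_0042", ["beard", "glasses"])
print(record)  # FeatureBasedLabel(image_id='img_0042', markers=['beard', 'glasses'])
```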

December 22, 2023

Developers should attend to and document social descriptors (for example, age, gender, and geolocation) when scraping data from different sources, including websites, databases, social media platforms, enterprise applications, or legacy systems. Context matters when the same data is later used for a different purpose, such as asking a new question of an existing data set. A compatibility analysis should be performed to ensure that potential sources of bias are identified and mitigation plans are made. This analysis would capture context shifts in new uses of data sets, identifying whether and how these could produce specific bias issues.
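
A minimal sketch of what such documentation and compatibility analysis could look like in code; the DatasetContext record, its descriptor fields, and the checks below are illustrative assumptions, not a prescribed procedure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

# Hypothetical provenance record: the social descriptors and the original
# purpose are documented at scraping time, so any later reuse of the data
# can be checked against the context in which it was collected.
@dataclass
class DatasetContext:
    source: str                  # e.g., "public forum scrape"
    purpose: str                 # the question the data was collected to answer
    social_descriptors: Dict[str, str] = field(default_factory=dict)

def compatibility_analysis(ctx: DatasetContext, new_purpose: str,
                           required_descriptors: Set[str]) -> List[str]:
    """Flag context shifts that could introduce bias in a new use of the data."""
    issues = []
    if new_purpose != ctx.purpose:
        issues.append(f"Context shift: collected for '{ctx.purpose}', "
                      f"reused for '{new_purpose}'")
    missing = required_descriptors - ctx.social_descriptors.keys()
    if missing:
        issues.append(f"Undocumented descriptors for new use: {sorted(missing)}")
    return issues  # each flagged issue should feed a mitigation plan

ctx = DatasetContext(
    source="public forum scrape",
    purpose="sentiment analysis",
    social_descriptors={"age": "18-34 skew", "geolocation": "US only"},
)
for issue in compatibility_analysis(ctx, "credit risk scoring",
                                    {"age", "gender", "geolocation"}):
    print(issue)
```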

November 24, 2023

In the design stage, decisions should weigh the socio-technical implications of the multiple trade-offs inherent in AI systems. These trade-offs include the system's predictive accuracy, which is measured by several metrics, including accuracies within sub-populations or across different use cases (partial and total accuracies), and fairness outcomes for different sub-groups of people the […]
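
To make the accuracy metrics named above concrete, here is a small, self-contained sketch comparing total (aggregate) accuracy with partial accuracies within sub-populations; the data, group labels, and values are invented for illustration only.

```python
# Total accuracy can mask large gaps between sub-populations, so design-stage
# trade-off analysis computes both side by side. All data here is illustrative.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

def accuracy(t, p):
    return sum(int(a == b) for a, b in zip(t, p)) / len(t)

total = accuracy(y_true, y_pred)  # total accuracy over everyone
partial = {
    g: accuracy(
        [t for t, gr in zip(y_true, groups) if gr == g],
        [p for p, gr in zip(y_pred, groups) if gr == g],
    )
    for g in set(groups)
}
print(f"total accuracy: {total:.2f}")        # 0.62
for g, acc in sorted(partial.items()):
    print(f"sub-population {g}: {acc:.2f}")  # A: 0.75, B: 0.50
# The accuracy gap between sub-groups is one fairness outcome to weigh
# against aggregate accuracy when navigating these trade-offs.
```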