Diverse Team
Summary: Building a diverse project team helps reduce bias and promotes diversity and inclusion in AI systems.
Type of pattern: Governance
Type of objective: Trustworthiness
Target users: Project manager
Impacted stakeholders: Development teams
Lifecycle stages: All stages
Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability
Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.
Context: Humans are prone to making biased or questionable decisions. AI systems are often developed to assist or replace human decision-making in the hope of producing more impartial outcomes. However, the data used to train AI models is often generated or collected by humans, so the trained models may produce biased results (e.g., racist or sexist outputs). In addition, the code of AI systems is typically written by developers who are primarily focused on technical aspects and may bring their own biases to the development process.
Problem: How can we ensure that AI systems are developed with consideration for a wide range of perspectives and backgrounds?
Solution: Building a diverse project team is critical to reducing bias and improving diversity and inclusion in AI systems. Diversity should include representation across dimensions such as gender, race, age, sexual orientation, and expertise. RAI challenges are multifaceted and complex, requiring expertise from a range of disciplines, including software engineering, machine learning, social science, human-machine interaction, and user experience. Ultimately, however, the final deliverable of an AI project is an AI system, so software engineers play a key role in building RAI systems: they are responsible for implementing ethical considerations in the systems' code.
Benefits:
- Diversity and inclusion: A diverse team is crucial in identifying biases and ensuring the decisions made by AI systems are responsible. Representation of different backgrounds leads to a more thorough examination of ethical issues and a more responsible final AI product.
- Innovation: Diverse teams drive creative thinking, generating new ideas and greater innovation in AI.
Drawbacks:
- Degraded communication: Team members from different backgrounds may have different communication preferences, and these differences can lead to misunderstandings and confusion.
- Decreased productivity: Diverse teams may be more prone to conflicts, which could affect the productivity and motivation of the team members.
Related patterns:
- Stakeholder engagement: Diverse teams are often better at communicating with stakeholders and understanding their concerns because they bring a wider range of perspectives and experiences to the table.
Known uses:
- Google published its 2022 Diversity Annual Report, which describes the actions the company has taken to build a flexible and inclusive workplace.
- Microsoft aims to integrate diversity and inclusion principles into its hiring, communication, innovation, and development of products and technologies.
- Meta has been working on creating diverse and inclusive work communities.
- OpenAI implemented red-teaming strategies for ChatGPT in collaboration with various experts and the Alignment Research Center. The exercises tested whether the model could conduct a phishing attack against a particular target individual, set up an open-source language model on a new server, make sensible high-level plans, hide its traces on the current server, and use services like TaskRabbit to get humans to complete simple tasks.