Data science teams should be as diverse as the populations their AI systems will affect. Product teams leading and working on AI projects should be representative of the user cohorts they impact. Diversity, equity, and inclusion in the composition of teams training, testing, and deploying AI systems should be prioritized: diversity of experience, expertise, and background is both a critical risk mitigant and a means of broadening the perspectives of AI system designers and engineers. For example, female-identifying role models should be cultivated within AI projects. Diversity and inclusion employment targets and strategies should be monitored regularly and adjusted where necessary.
The WEF Blueprint recommends four levers. First, career paths should be widened by employing people from non-traditional AI backgrounds, embedding this goal in strategic workforce planning; backgrounds in marketing, social media marketing, social work, education, public health, and journalism, for instance, can contribute fresh perspectives and expertise. Second, diversity and inclusion should be covered in training and development programs via mentorships, job shadowing, simulation exercises, and contact with diverse end-user panels. Third, partnerships with academic, civil society, and public sector institutions should be established to contribute to holistic, pan-disciplinary reviews of AI systems, diversity and inclusion audits, and assessments of social impact. Fourth, a workplace culture of belonging should be created and periodically assessed via both open and confidential feedback mechanisms that include diversity markers.