Humans are considered the core pillar of the AI ecosystem. Human-centredness is achieved not only through the meaningful inclusion of relevant humans with diverse attributes in the building, use, monitoring, and evolution of AI systems but also through their active participation and contribution at all the decision-making points of the AI system lifecycle.

Two broad groups have been identified in the AI system lifecycle:

  • Those who will receive and use the AI system
  • Those who will design, develop, and deploy AI systems to satisfy specific stakeholder needs.

Those humans whose knowledge, lived experiences, and insights are essential need to be carefully identified, contacted, and engaged in all the relevant parts of the process.

Integrating diversity and inclusion principles and practices throughout the AI system lifecycle has an important role to play in achieving equity for all humans.

H10 Operationalise inclusive and substantive community engagement

A vast body of knowledge about community engagement praxis exists. Guidelines and frameworks are updated and operationalised by practitioners from many disciplines, including community cultural development, community arts, social work, the social sciences, architecture, and public health. However, this vital element is largely neglected in the AI ecosystem, even though many AI projects would benefit from considered attention to community engagement. For instance, in the health sector, AI and advanced analytics implementation in primary care should be a collaborative effort that involves patients and communities from diverse social, cultural, and economic backgrounds in an intentional and meaningful manner. A Community Engagement Manager role could be introduced to work with impacted communities throughout the AI-LC and for a fixed period post-deployment. Reciprocal and respectful relationships with impacted communities should be nurtured, and community expectations about both the engagement and the AI system should be defined and attended to. If impacted communities contain diverse language, ethnic, and cultural cohorts, a Community Engagement Team drawn from minority groups would be more appropriate; one of its roles, for example, would be to develop tailored critical AI literacy programs. Organisations must put “the voices and experiences of those most marginalized at the centre” when implementing community engagement outcomes in an AI project.

H09 Prioritise equitable hiring practices & career-building opportunities

Data science teams should be as diverse as the populations that the AI systems they build will affect. Product teams leading and working on AI projects should be diverse and representative of impacted user cohorts. Diversity, equity, and inclusion in the composition of the teams training, testing, and deploying AI systems should be prioritised, as diversity of experience, expertise, and background is both a critical risk mitigant and a way of broadening AI system designers’ and engineers’ perspectives. For example, female-identifying role models should be fostered in AI projects. Diversity and inclusion employment targets and strategies should be regularly monitored and adjusted where necessary. The WEF Blueprint recommends four levers. First, widen career paths by employing people from non-traditional AI backgrounds and embed this goal in strategic workforce planning; backgrounds in marketing, social media marketing, social work, education, public health, and journalism, for instance, can contribute fresh perspectives and expertise. Second, cover diversity and inclusion in training and development programs via mentorships, job shadowing, simulation exercises, and contact with diverse end-user panels. Third, establish partnerships with academic, civil society, and public sector institutions to contribute to holistic and pan-disciplinary reviews of AI systems, diversity and inclusion audits, and assessment of social impacts. Fourth, create a workplace culture of belonging and periodically assess it via both open and confidential feedback mechanisms that include diversity markers.

H08 Develop AI literacy and education programs

An ‘AI-ready’ person is someone who knows enough to decide how, when, and if they want to engage with AI. Critical AI literacy is the pathway to such agency. Consequently, governments should drive the equitable development of AI-related skills for everyone from the earliest years via formal, informal, and extracurricular education programs covering technical and soft skills, along with awareness of digital safety and privacy issues. Governments and civil society organisations should create and fund grant schemes aimed at enhancing the enrolment of women in AI education. Organisations can also play a critical role via paid internships and by promoting community visits, talks, workshops, and engagement with AI practitioners. To harness the potential of increasing diversity and inclusion in the global AI ecosystem, such opportunities should prioritise participation (as facilitators and participants) of people with diverse attributes (cultural, ethnic, age, gender identification, cognitive, professional, and so on).

H07 Establish inclusive AI infrastructure

An inclusive AI ecosystem involving the broadest range of community members requires equitable access to technical infrastructure (computing, storage, networking) to facilitate the skilling of new AI practitioners and to offer citizens opportunities to develop AI systems. Governments should invest in computing facilities and education programs, and work with civil society organisations to support national and global networks.

H06 Employ a socio-technical approach to human-centred AI

An approach to human-in-the-loop that considers a broad set of socio-technical factors should be adopted. Relevant fields of expertise include human factors, psychology, organisational behaviour, and human-AI interaction. However, researchers from Stanford University argue that “practitioners should focus on AI in the loop”, with humans remaining in control. They advise that “all AI systems should be designed for augmenting and assisting humans – and with human impacts at the forefront.” Accordingly, they advocate the idea of “human in charge” rather than human in the loop.
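One way to read the “human in charge” stance in engineering terms is that model output is only ever advisory and a named human reviewer makes the final call. The Python sketch below is a minimal illustration of that pattern under that assumption; the Recommendation fields, the decision labels, and the finalise function are hypothetical and not drawn from any cited framework.

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    """An AI-generated recommendation; advisory only, never actioned automatically."""
    subject_id: str
    suggested_action: str
    confidence: float
    rationale: str


def finalise(recommendation: Recommendation, reviewer_decision: str,
             reviewer_action: str | None = None) -> str:
    """Return the final decision. The human reviewer is in charge.

    The reviewer must explicitly accept, override, or defer; an override
    must come with the reviewer's own substitute action.
    """
    if reviewer_decision == "accept":
        return recommendation.suggested_action
    if reviewer_decision == "override":
        if reviewer_action is None:
            raise ValueError("An override must specify the reviewer's own action.")
        return reviewer_action
    if reviewer_decision == "defer":
        return "escalate_for_further_review"
    raise ValueError("Reviewer must explicitly accept, override, or defer.")
```

The design choice this sketch encodes is that there is no code path by which the model's suggestion becomes the outcome without an explicit human decision.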

H05 Identify changes in the operating context

Processes to identify and respond to changes in the operating context, including the potential appearance of new groups of users who may be treated differentially by the AI system, should be established. For example, a computational medical system trained in large metropolitan hospitals may not work as intended when used in small rural hospitals due to various factors, including the training of local healthcare personnel, the quality of clinical data entered into the system, or behavioural factors affecting how humans interact with the AI.
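One simple, common way to operationalise such a check is to compare the distribution of a key input feature between the original training context and a new deployment context, for example with the Population Stability Index. The sketch below is illustrative only: the choice of feature (patient age), the metropolitan and rural figures, and the 0.25 alert threshold are assumptions, not values from the source.

```python
import numpy as np


def population_stability_index(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's distribution at training time vs. a new operating context.

    PSI = sum((p_obs - p_exp) * ln(p_obs / p_exp)) over shared bins;
    values above roughly 0.25 are commonly read as a significant shift.
    """
    # Bin edges are derived from the training (expected) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    exp_counts, _ = np.histogram(expected, bins=edges)
    obs_counts, _ = np.histogram(observed, bins=edges)
    # A small floor avoids division by zero in sparsely populated bins.
    p_exp = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    p_obs = np.clip(obs_counts / obs_counts.sum(), 1e-6, None)
    return float(np.sum((p_obs - p_exp) * np.log(p_obs / p_exp)))


# Illustrative only: ages seen during metropolitan training vs. a rural deployment.
metro_ages = np.random.default_rng(0).normal(45, 12, 5_000)
rural_ages = np.random.default_rng(1).normal(58, 15, 800)
if population_stability_index(metro_ages, rural_ages) > 0.25:
    print("Operating context has shifted; trigger review before relying on the model.")
```

A check like this does not replace engagement with the new user group; it only flags that the context has changed enough to warrant one.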

H04 Implement inclusive and transparent feedback mechanisms for stakeholders

Users should have accessible mechanisms to identify and report harmful or concerning AI system incidents and impacts, with such warnings shareable among relevant stakeholders. Feedback should be continuously incorporated into system updates and communicated to relevant stakeholders.
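An accessible reporting mechanism usually needs a structured record behind it so that a report can be routed to the right stakeholders and folded into later system updates. The sketch below is a minimal, hypothetical illustration of such a record in Python; all field names, the severity scale, and the routing rule are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class IncidentReport:
    """A structured record of a harmful or concerning AI system incident.

    Capturing who is affected and how the report arrived makes it easier to
    share the warning with relevant stakeholders and to trace whether the
    feedback was incorporated into a later system update.
    """
    description: str
    affected_cohort: str          # e.g. "non-native speakers", "rural clinics"
    severity: str                 # illustrative scale: "low" | "medium" | "high"
    reporting_channel: str        # e.g. "in-app form", "community liaison", "hotline"
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    notified_stakeholders: list[str] = field(default_factory=list)


def route(report: IncidentReport, stakeholder_registry: dict[str, list[str]]) -> IncidentReport:
    """Share the report with every stakeholder group registered for its severity."""
    report.notified_stakeholders = stakeholder_registry.get(report.severity, [])
    return report
```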

H03 Reflect collectively on key questions – AI why, for whom, and by whom?

Key questions about why an AI project should happen, who the project is for, and by whom it should be developed should be asked, answered, and revisited collectively using a diversity and inclusion lens throughout the AI-LC. Views from stakeholders and representatives of impacted communities should be sought. Although it might be advantageous that […]

H02 Identify stakeholder knowledge and needs

Stakeholders generally hold specific knowledge, expertise, concerns, and objectives that can contribute to effective AI system design, and their expectations, needs, and feedback should be considered throughout the AI-LC. Cohorts include government regulatory bodies, civil society organisations monitoring AI impact and advocating for users’ rights, industry, and people affected by AI systems. There are also groups whose knowledge or expertise is valuable for AI system design but who do not necessarily have needs or requirements for the system because they will not be its users or consumers. Both groups need to be involved.

H01 Integrate diversity and inclusion principles and practices throughout the AI lifecycle

Integrating diversity and inclusion principles and practices throughout the AI lifecycle has an important role in achieving equity for all stakeholders, and engaging diverse stakeholders is central to that integration. The composition of stakeholder cohorts at different levels should maintain diversity along social lines (race, gender identification, age, ability, and viewpoints) wherever bias is a concern. End users, AI practitioners, subject matter experts, and interdisciplinary professionals, including those from law, the social sciences, and community development, should be involved to identify downstream impacts comprehensively.

Artificial Intelligence Ecosystem process diagram

A process diagram showing the application of Human, Data, Process, System and Governance elements to Diversity and Inclusion in Artificial Intelligence.