December 18, 2023

Data science teams should be as diverse as the populations their AI systems will affect, and product teams leading and working on AI projects should be representative of impacted user cohorts. Diversity, equity, and inclusion in the composition of teams training, testing, and deploying AI systems should be prioritized: diversity of experience, expertise, and backgrounds is both a critical risk mitigant and a means of broadening AI system designers’ and engineers’ perspectives. For example, female-identifying role models should be fostered in AI projects, and diversity and inclusion employment targets and strategies should be regularly monitored and adjusted where necessary. The WEF Blueprint recommends four levers. First, widen career paths by employing people from non-traditional AI backgrounds, embedding this goal in strategic workforce planning; backgrounds in marketing, social media marketing, social work, education, public health, and journalism, for instance, can contribute fresh perspectives and expertise. Second, cover diversity and inclusion in training and development programs via mentorships, job shadowing, simulation exercises, and contact with diverse end-user panels. Third, establish partnerships with academic, civil society, and public sector institutions to contribute to holistic, pan-disciplinary reviews of AI systems, diversity and inclusion audits, and assessments of social impact. Fourth, create a workplace culture of belonging and periodically assess it via both open and confidential feedback mechanisms that include diversity markers.

December 18, 2023

An ‘AI-ready’ person is someone who knows enough to decide how, when, and whether they want to engage with AI. Critical AI literacy is the pathway to such agency. Consequently, governments should drive the equitable development of AI-related skills for everyone from the earliest years via formal, informal, and extracurricular education programs covering technical and soft skills, along with awareness of digital safety and privacy issues. Governments and civil society organisations should create and fund grant schemes aimed at increasing the enrolment of women in AI education. Organizations can also play a critical role via paid internships and by promoting community visits, talks, workshops, and engagement with AI practitioners. To harness the potential of increasing diversity and inclusion in the global AI ecosystem, such opportunities should prioritise the participation (as facilitators and participants) of people with diverse attributes (cultural, ethnic, age, gender identification, cognitive, professional, and so on).

December 18, 2023

An inclusive AI ecosystem involving the broadest range of community members requires equitable access to technical infrastructure (computing, storage, networking) to facilitate the skilling of new AI practitioners and offer opportunities for citizens’ development of AI systems. Governments should invest in computing facilities and education programs, and work with civil society organizations to support national and global networks.

December 18, 2023

An approach to human-in-the-loop that considers a broad set of socio-technical factors should be adopted. Relevant fields of expertise include human factors, psychology, organizational behaviour, and human-AI interaction. However, researchers from Stanford University argue that “practitioners should focus on AI in the loop”, with humans remaining in control. They advise that “all AI systems should be designed for augmenting and assisting humans – and with human impacts at the forefront.” Accordingly, they advocate the idea of “human in charge” rather than human in the loop.

December 18, 2023

Processes to identify and respond to changes in the operating context, including the potential appearance of new groups of users who may be treated differentially by the AI system, should be established. For example, a computational medical system trained in large metropolitan hospitals may not work as intended when used in small rural hospitals due to factors such as the training of local healthcare personnel, the quality of clinical data entered into the system, or behavioural factors affecting how humans interact with the AI.
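One practical way to operationalise such monitoring is to compare the distribution of a production-time input feature against its training-time distribution and flag large shifts for human review. The sketch below uses a population stability index (PSI); the bin count, the 0.25 alert threshold, and the sample data are illustrative assumptions, not prescriptions from the source.

```python
import math
from typing import Sequence

def psi(expected: Sequence[float], actual: Sequence[float], bins: int = 10) -> float:
    """Population Stability Index between a reference (training-time)
    sample and a recent (production) sample of one numeric feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-range feature

    def fractions(sample: Sequence[float]) -> list:
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        n = len(sample)
        # Floor each bucket at a tiny value to avoid log(0) on empty bins.
        return [max(c / n, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Common heuristic: PSI < 0.1 stable, 0.1–0.25 watch, > 0.25 shifted.
reference = [float(x % 50) for x in range(1000)]           # stand-in training data
production = [float(x % 50) + 20.0 for x in range(1000)]   # shifted user cohort
print(psi(reference, production) > 0.25)  # True: the new cohort triggers review
```

A flagged shift would then feed the response process described above, e.g. a review of whether the new cohort is served as intended before retraining.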

December 18, 2023

Users should have accessible mechanisms to identify and report harmful or concerning AI system incidents and impacts, with such warnings shareable among relevant stakeholders. Feedback should be continuously incorporated into system updates and communicated to relevant stakeholders.
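As a purely illustrative sketch of what such a reporting mechanism might capture, an incident record could tie each user report to a system version, a severity, and a lifecycle status so that it can be shared among stakeholders and tracked through to resolution. All field names and the example values below are assumptions, not part of the source guidance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    """Minimal AI-incident record; fields are illustrative assumptions."""
    system: str        # which AI system or model version is implicated
    description: str   # what the reporter observed
    severity: str      # e.g. "low" | "medium" | "high"
    status: str = "open"  # lifecycle: open -> triaged -> resolved
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def triage(report: IncidentReport) -> IncidentReport:
    """Mark a report as picked up so stakeholders can see its state."""
    report.status = "triaged"
    return report

r = triage(IncidentReport("credit-model-v2", "score differs by postcode", "high"))
print(r.status)  # triaged
```

A real mechanism would add routing, notification, and audit trails; the point here is only that each report is structured enough to be shared and acted on.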

November 24, 2023

For data collection involving human subjects, why, how and by whom data is being collected should be established in the Pre-Design stage. Potential data challenges or data bias issues that have implications for diversity and inclusion should be identified by key stakeholders and data scientists. For example, in the health application domain, diverse data sources […]

November 24, 2023

New or emergent stakeholder cohorts should participate in system monitoring and retraining. Stakeholders should be involved in a final review and sign-off, particularly if their input propelled significant changes in design or development processes. After validation, teams should obtain informed consent on the developed product features from impacted stakeholders, to track and respond to the […]

November 24, 2023

In the design stage, decisions should weigh the socio-technical implications of the multiple trade-offs inherent in AI systems. These trade-offs include the system’s predictive accuracy, which can be measured by several metrics, including accuracies within sub-populations or across different use cases, as well as partial and total accuracies. Fairness outcomes for different sub-groups of people the […]
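The accuracy trade-offs above can be made concrete by disaggregating accuracy per sub-population alongside the overall figure. This is a minimal sketch, assuming simple classification labels and a single group attribute; the function name, data, and the gap heuristic are illustrative.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Overall accuracy, per-subgroup accuracy, and the best/worst gap."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    per_group = {g: hits[g] / totals[g] for g in totals}
    overall = sum(hits.values()) / sum(totals.values())
    # The gap between best- and worst-served groups is one simple fairness
    # signal to weigh against total accuracy during design reviews.
    gap = max(per_group.values()) - min(per_group.values())
    return overall, per_group, gap

overall, per_group, gap = accuracy_by_group(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 1],
    groups=["a", "a", "a", "b", "b", "b"],
)
print(per_group)  # group "a" is served worse than group "b"
```

A system with high total accuracy can still show a large gap, which is exactly the kind of trade-off the design stage should surface.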

November 24, 2023

Mechanisms enabling an iterative process of continuous monitoring and improvement of diversity and inclusion considerations should be established from the outset. These will help ensure that all stakeholders’ needs are met, and that inadvertent harm is not caused. Both team and system performance should be regularly assessed, improvements identified, and changes executed accordingly.