
December 22, 2023

Data privacy should be at the forefront, particularly when data from marginalized populations are involved. End users should be offered choices about privacy and ethics in the collection, storage, and use of data. Opt-out methods for data collected for model training and model application should be offered where possible.
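To make the opt-out recommendation concrete, here is a minimal sketch, in Python, of a consent gate applied before training data is assembled. Every name in it (the Record class, the opted_out flag, filter_by_consent) is an illustrative assumption, not a reference to any particular system or library.

```python
# A minimal sketch of honouring opt-out consent before data is used for training.
# All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Record:
    user_id: str
    features: dict
    opted_out: bool  # set from the user's stored privacy preference

def filter_by_consent(records):
    """Drop records whose owners opted out of model-training use."""
    kept = [r for r in records if not r.opted_out]
    dropped = len(records) - len(kept)
    # Log the exclusion count so the opt-out path is auditable.
    print(f"Excluded {dropped} opted-out record(s) from the training set.")
    return kept

training_set = filter_by_consent([
    Record("u1", {"age": 34}, opted_out=False),
    Record("u2", {"age": 51}, opted_out=True),  # honoured: never trained on
])
```

The same gate would need to run again at application time if, as the entry suggests, opt-outs cover model application as well as model training.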

December 18, 2023

A vast body of knowledge about community engagement praxis exists. Guidelines and frameworks are updated and operationalised by practitioners from many disciplines, including community cultural development, community arts, social work, the social sciences, architecture, and public health. However, this vital element is largely neglected in the AI ecosystem, although many AI projects would benefit from considered attention to community engagement. For instance, in the health sector, the implementation of AI and advanced analytics in primary care should be a collaborative effort that involves patients and communities from diverse social, cultural, and economic backgrounds in an intentional and meaningful manner. A Community Engagement Manager role could be introduced; this manager would work with impacted communities throughout the AI life cycle (AI-LC) and for a fixed period post-deployment. Reciprocal and respectful relationships with impacted communities should be nurtured, and community expectations about both the engagement and the AI system should be defined and attended to. If impacted communities contain diverse language, ethnic, and cultural cohorts, a Community Engagement Team drawn from those minority groups would be more appropriate. One such role would be, for example, to develop tailored critical AI literacy programs. Organisations must put “the voices and experiences of those most marginalized at the centre” when implementing community engagement outcomes in an AI project.

December 18, 2023

Data science teams should be as diverse as the populations that the built AI systems will affect. Product teams leading and working on AI projects should be diverse and representative of impacted user cohorts. Diversity, equity, and inclusion in the composition of teams training, testing, and deploying AI systems should be prioritized, as diversity of experience, expertise, and background is both a critical risk mitigant and a way of broadening AI system designers’ and engineers’ perspectives. For example, female-identifying role models should be fostered in AI projects. Diversity and inclusion employment targets and strategies should be regularly monitored and adjusted if necessary. The WEF Blueprint recommends four levers. First, widen career paths by employing people from non-traditional AI backgrounds, embedding this goal in strategic workforce planning; backgrounds in marketing, social media marketing, social work, education, public health, and journalism, for instance, can contribute fresh perspectives and expertise. Second, cover diversity and inclusion in training and development programs via mentorships, job shadowing, simulation exercises, and contact with diverse end-user panels. Third, establish partnerships with academic, civil society, and public sector institutions to contribute to holistic and pan-disciplinary reviews of AI systems, diversity and inclusion audits, and assessment of social impacts. Fourth, create a workplace culture of belonging and periodically assess it via both open and confidential feedback mechanisms that include diversity markers.

December 18, 2023

An approach to human-in-the-loop that considers a broad set of socio-technical factors should be adopted. Relevant fields of expertise include human factors, psychology, organizational behaviour, and human-AI interaction. However, researchers from Stanford University argue that “practitioners should focus on AI in the loop”, with humans remaining in control. They advise that “all AI systems should be designed for augmenting and assisting humans – and with human impacts at the forefront.” Accordingly, they advocate the idea of “human in charge” rather than human in the loop.
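The “human in charge” idea can be illustrated with a minimal sketch in which the AI component only proposes and the human decision is binding. The function names and the approve callback below are hypothetical assumptions, not drawn from the Stanford researchers’ work.

```python
# A minimal sketch of a "human in charge" gate: the model proposes, and a
# human decision is required before any action is taken. All names are
# hypothetical.
def model_propose(case):
    """Stand-in for an AI component that suggests but never acts."""
    return {"case": case, "suggestion": "approve_loan", "confidence": 0.72}

def human_decide(proposal, approve):
    """The human reviewer sees the suggestion and makes the binding call."""
    decision = approve(proposal)  # human judgment, possibly overriding the AI
    return {"proposal": proposal, "final_decision": decision, "decided_by": "human"}

# Usage: the reviewer refers a low-confidence suggestion to a committee.
outcome = human_decide(
    model_propose("application-1042"),
    approve=lambda p: p["suggestion"] if p["confidence"] > 0.9 else "refer_to_committee",
)
print(outcome["final_decision"])  # -> "refer_to_committee"
```

The design choice is that no code path lets the model’s suggestion take effect without passing through human_decide, which is one way of keeping humans in control as the entry recommends.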

November 24, 2023

New or emergent stakeholder cohorts should participate in system monitoring and retraining. Stakeholders should be involved in a final review and sign-off, particularly if their input propelled significant changes in design or development processes. After validation, teams should obtain informed consent on the developed product features from impacted stakeholders, to track and respond to the […]

November 24, 2023

In the design stage, decisions should weigh the socio-technical implications of the multiple trade-offs inherent in AI systems. These trade-offs include the system’s predictive accuracy, which is measured by several metrics, including accuracies within sub-populations or across different use cases as partial and total accuracies. Fairness outcomes for different sub-groups of people the […]
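A short sketch can make the partial-versus-total accuracy trade-off concrete: overall accuracy may look acceptable while accuracy within one sub-population is far lower. The data and group labels below are invented for illustration.

```python
# A minimal sketch of total accuracy versus partial (per-sub-population)
# accuracy. The labels and groups are invented for illustration.
from collections import defaultdict

def accuracy(pairs):
    return sum(y == yhat for y, yhat in pairs) / len(pairs)

def accuracy_by_group(y_true, y_pred, groups):
    """Return total accuracy plus partial accuracies per sub-population."""
    buckets = defaultdict(list)
    for y, yhat, g in zip(y_true, y_pred, groups):
        buckets[g].append((y, yhat))
    partial = {g: accuracy(pairs) for g, pairs in buckets.items()}
    total = accuracy(list(zip(y_true, y_pred)))
    return total, partial

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]
total, partial = accuracy_by_group(y_true, y_pred, groups)
print(total, partial)
```

In this toy example the total accuracy is 0.625, yet group "a" scores 1.0 while group "b" scores only 0.4, which is exactly the kind of gap that partial, per-sub-population metrics are meant to surface.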

November 24, 2023

New stakeholders should be brought in for iterative rounds of product development, training, and testing, and beta groups should be recruited for test deployments. User groups should reflect different needs and abilities. Fresh perspectives contribute to the evaluation of both the AI system’s functionality and, importantly, its level and quality of inclusivity. New or emergent […]

November 24, 2023

Key questions about why an AI project should happen, whom it is for, and by whom it should be developed should be asked, answered, and revisited collectively through a diversity and inclusion lens during the AI-LC. Views from stakeholders and representatives of impacted communities should be sought. Although it might be advantageous that […]

November 24, 2023

Integrating diversity and inclusion principles and practices throughout the AI lifecycle has an important role in achieving equity for all stakeholders, particularly when that integration is carried out through the engagement of diverse stakeholders. The composition of different levels of stakeholder cohorts should maintain diversity along social lines (race, gender identification, age, ability, and viewpoints) where bias is a concern. End-users, AI practitioners, subject matter experts, and interdisciplinary professionals, including those from law, the social sciences, and community development, should be involved to identify downstream impacts comprehensively.