
December 22, 2023

Code is not the right level of abstraction at which to understand AI systems, whether for accountability or adaptability. Instead, systems should be analyzed in terms of their inputs and outputs, overall design, embedded values, and how the software system fits within the institution deploying it.

December 22, 2023

Evaluation, even when conducted on crowdsourcing platforms with ordinary people, should capture the kinds of interactions and decisions end users actually make. Evaluations should demonstrate what happens when the algorithm is integrated into a human decision-making process: does it alter or improve the decisions, and the resulting decision-making process, as revealed by downstream outcomes?
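Such an integrated evaluation can be made concrete by logging, for each trial, the human's unaided decision, the final decision made with the algorithm's recommendation visible, and the downstream outcome, then comparing how often the algorithm changed the decision and how good the resulting decisions were. The following is a minimal sketch; the field names and trial data are hypothetical illustrations, not part of any particular evaluation platform.

```python
def summarize(trials):
    """Summarize an integrated human+AI evaluation.

    Each trial logs:
      - "unaided":  the human's decision before seeing the AI recommendation
      - "assisted": the final decision made with the recommendation visible
      - "outcome":  the downstream result (1 = good outcome, 0 = bad)
    """
    n = len(trials)
    changed = sum(t["unaided"] != t["assisted"] for t in trials)
    good = sum(t["outcome"] for t in trials)
    return {
        # how often the algorithm altered the human's decision
        "decision_change_rate": changed / n,
        # quality of the decisions actually reached with AI in the loop
        "good_outcome_rate": good / n,
    }

# Hypothetical logged trials from a loan-review task.
trials = [
    {"unaided": "deny",    "assisted": "approve", "outcome": 1},
    {"unaided": "approve", "assisted": "approve", "outcome": 1},
    {"unaided": "deny",    "assisted": "deny",    "outcome": 0},
    {"unaided": "approve", "assisted": "deny",    "outcome": 1},
]

print(summarize(trials))
```

Comparing `good_outcome_rate` against a human-only control arm, rather than measuring the model's accuracy in isolation, is what reveals whether integration altered or improved the decision-making process.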

December 18, 2023

Data science teams should be as diverse as the populations their AI systems will affect. Product teams leading and working on AI projects should be diverse and representative of the user cohorts they impact. Diversity, equity, and inclusion in the composition of teams training, testing, and deploying AI systems should be prioritized: diversity of experience, expertise, and background is both a critical risk mitigant and a way of broadening AI system designers' and engineers' perspectives. For example, female-identifying role models should be fostered on AI projects. Diversity and inclusion employment targets and strategies should be regularly monitored and adjusted where necessary.

The WEF Blueprint recommends four levers. First, widen career paths by employing people from non-traditional AI backgrounds and embedding this goal in strategic workforce planning; backgrounds in marketing, social media, social work, education, public health, and journalism can contribute fresh perspectives and expertise. Second, cover diversity and inclusion in training and development programs via mentorships, job shadowing, simulation exercises, and contact with diverse end-user panels. Third, establish partnerships with academic, civil society, and public sector institutions to contribute to holistic, pan-disciplinary reviews of AI systems, diversity and inclusion audits, and assessments of social impact. Fourth, create a workplace culture of belonging and assess it periodically through both open and confidential feedback mechanisms that include diversity markers.

December 18, 2023

Processes should be established to identify and respond to changes in the operating context, including the potential appearance of new groups of users who may be treated differently by the AI system. For example, a computational medical system trained in large metropolitan hospitals may not work as intended when used in small rural hospitals, owing to factors such as the training of local healthcare personnel, the quality of clinical data entered into the system, or behavioural factors affecting how humans interact with the AI.
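One common building block for such a process is monitoring input distributions for drift between the development context and a new deployment site. As an illustrative sketch (not a prescribed method), the Population Stability Index compares a baseline feature distribution, such as the metropolitan training data, against live data from a new site; the threshold and data here are assumptions for demonstration.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline feature distribution
    (e.g. metropolitan training data) and live data (e.g. a rural site).
    Values above roughly 0.2 are commonly read as significant drift;
    that cutoff is a rule of thumb, not a universal standard."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def frac(xs, i):
        # Fraction of xs falling in bin i; the last bin includes hi itself.
        count = sum(lo + i * width <= x < lo + (i + 1) * width
                    or (i == bins - 1 and x == hi)
                    for x in xs)
        return max(count / len(xs), 1e-6)  # avoid log(0) for empty bins

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

# Hypothetical example: a lab-value feature shifts at the new site.
baseline = [i / 100 for i in range(100)]
shifted = [x + 0.5 for x in baseline]
print(psi(baseline, baseline))  # no drift
print(psi(baseline, shifted))   # large drift
```

A flagged drift score would then trigger the human side of the process: reviewing local data-entry quality, retraining needs, and how staff at the new site actually use the system.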

November 24, 2023

A project owner (individual or organisation) with suitable expertise and resources to manage an AI system project should be identified, ensuring that accountability mechanisms to counter potential harm are built in. It should be decided which other stakeholders will be involved in the system’s development and regulation. Both intended and unintended impacts that the AI […]

November 24, 2023

New stakeholders for iterative rounds of product development, training, and testing should be brought in, and beta groups for test deployments should be recruited. User groups should reflect different needs and abilities. Fresh perspectives contribute to the evaluation of both the AI system’s functionality and, importantly, its level and quality of inclusivity. New or emergent […]