Mapping the social dynamics of generative AI adoption and use 

June 1st, 2023

Designing, developing, and deploying AI ethically, responsibly, and inclusively requires us to understand the social dynamics that will shape its impact on the whole of society.

Project Duration: February 2023 to June 2025

Image: Brightly coloured strings of purple, yellow, orange and red entwine to form a human head in profile against a grey background (source: Unsplash).

The Challenge

The integration of Artificial Intelligence (AI) into our everyday lives has been accelerating for some time. Yet the introduction of generative AI tools, such as ChatGPT and Google’s Bard, has marked a seismic shift in society’s exposure to and awareness of these new AI applications.

Generative AI describes a type of artificial intelligence that can create new content in the form of words, images and videos in response to a person’s prompts or inputs. In contrast to traditional AI algorithms, which are designed to recognise patterns in data and make predictions, generative AI models can produce novel content based on the data they have been trained on. Within this new dynamic, AI has the potential to play a highly influential role in human thinking and decision-making.
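To make that distinction concrete, the toy sketch below contrasts the two styles of system: a simple classifier that maps an input onto a fixed label (the predictive, pattern-recognition style) and a tiny word-sampling generator that produces new text from a prompt. Both are deliberately simplistic, hypothetical stand-ins written for illustration only; they are not the techniques behind ChatGPT or Bard.

import random

def predict_sentiment(text: str) -> str:
    """'Traditional' predictive AI: map an input onto one of a fixed set of labels."""
    positive_words = {"good", "great", "love", "excellent", "helpful"}
    words = set(text.lower().split())
    return "positive" if words & positive_words else "negative"

def generate_text(prompt: str, corpus: str, length: int = 8) -> str:
    """'Generative' AI: produce new text conditioned on a prompt, here by sampling
    word-to-word transitions learned from a tiny training corpus."""
    words = corpus.split()
    transitions = {}
    for current_word, next_word in zip(words, words[1:]):
        transitions.setdefault(current_word, []).append(next_word)
    output = prompt.split()
    current = output[-1]
    for _ in range(length):
        current = random.choice(transitions.get(current, words))
        output.append(current)
    return " ".join(output)

corpus = ("ai systems can recognise patterns in data and ai systems can "
          "create new content from a prompt")
print(predict_sentiment("I love this helpful tool"))  # a label chosen from fixed categories
print(generate_text("ai systems", corpus))            # newly sampled text conditioned on the prompt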

As these new AI tools become more sophisticated and easier to use, they are being taken up rapidly. The generative AI model behind ChatGPT is already sophisticated enough that it can pass as human in some Turing-style tests. Yet these exciting advances are also accompanied by concerns.

From country-wide bans on ChatGPT, through fears of misinformation, plagiarism and cybercrime, to a public call to pause advanced AI development, there are many considerations to address before we can understand the full range of impacts associated with the adoption and use of generative AI tools.

Our Response

CSIRO’s Responsible Innovation Future Science Platform is collaborating with Data61 to undertake research that will inform how we can responsibly adopt generative AI technology, ensuring that it is designed and deployed ethically, inclusively, and ultimately for social good. This requires an interdisciplinary approach, drawing on expertise from social science, psychology, data science and AI. 

Our project will investigate how people’s identities shape the way they interact with AI, including factors such as age, gender, cultural background, social status, and level of education. This reflects the understanding that people’s social context, their sense of who they are, and where they feel they belong all have a profound influence on the way they approach learning and using new technologies.

By considering human thoughts, feelings, and behaviours as context-dependent, we will examine the psychological foundations that underpin human interaction with AI. These insights will reveal how different people feel about their role in relation to AI, how different contexts contribute to the quality of this relationship, and how people’s identities shape the quality and outcomes of these interactions.

This research will survey diverse communities to map out differences in attitudes towards, and experiences with, AI. We will also use experimental designs to test different cues embedded within people’s interactions with AI: anything from the language the AI system uses, to the gender it presents as, to how personalised it is to the user. By measuring the impact of these cues on people’s experiences, thinking processes, and outcomes, we will build an evidence base identifying which factors enhance, and which diminish, the quality of the interaction.
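As a rough illustration of what such an experimental design might look like, the sketch below randomly assigns simulated participants to one of two language cues (formal versus informal) and compares their interaction-quality ratings. The conditions, ratings, and effect sizes are all assumed purely for the example; the project’s actual measures and protocols are not described here.

import random
import statistics

def simulated_rating(condition: str) -> float:
    """Stand-in for a real participant session: returns a 1-7 interaction-quality rating.
    The underlying effect (informal language rated slightly higher) is assumed for
    illustration only, not a finding of the project."""
    baseline = 4.5 if condition == "informal" else 4.0
    return min(7.0, max(1.0, random.gauss(baseline, 1.0)))

conditions = ("formal", "informal")          # an example language cue to manipulate
ratings = {c: [] for c in conditions}

for participant in range(200):
    condition = random.choice(conditions)    # random assignment to a condition
    ratings[condition].append(simulated_rating(condition))

for c in conditions:
    mean = statistics.mean(ratings[c])
    sd = statistics.stdev(ratings[c])
    print(f"{c}: n={len(ratings[c])}, mean rating={mean:.2f}, sd={sd:.2f}")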

Project Impact

The findings from this research will help us understand the social dynamics that shape our interactions with AI, and identify how to make these interactions more accessible, enjoyable, and productive.  

Ultimately, understanding AI from a ‘whole of society’ perspective will put us in a better position to design, develop and deploy AI responsibly, both within CSIRO and across Australia.  

Team

Sarah Bentley, Claire Naughtin

Links

AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time? 

Both humans and AI hallucinate – but not in the same way (The Conversation, June 2023)

Dumbing down or wising up: how will generative AI change the way we think? (The Conversation, October 2023)

Why knowing when to finish is as important as getting started: A thoughtful take on the outsourcing of our intelligence – what do we lose when we let machines think for us? (ABC News, November 2023, from 21:15)