An operationalised guideline for responsible AI
Project Duration: September 2021 to June 2023
Operationalising ethical AI principles into practice
The Challenge
Artificial intelligence (AI) is helping to solve real-world challenges and transforming industries around the world – delivering better products and services, and making them faster, cheaper and safer.
The exponential growth of the Internet of Things (IoT) over the last decade has normalised the interconnection of platforms and devices, from household appliances to monitoring equipment. Humans are also growing increasingly connected to their devices, giving rise to industries such as precision health.
Serious concerns remain, however, around the ability of AI to behave and make decisions in a responsible way. In recent years, many ethical regulations, principles and guidelines for responsible AI have been issued by governments, research organisations and enterprises. This has largely been in the form of high-level advice, rather than concrete guidance on how to implement AI responsibly. For example, one of the principles of the Australian AI Ethics Framework states that AI systems should respect “human-centred values”.
Making this advice usable for the developers of such systems is a separate and challenging task. For instance, how can these values be designed for, implemented and tracked in an AI system? Can the extent to which an AI system adheres to an ethical AI principle be measured?
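The project's guidelines aim to answer such questions concretely. As a purely illustrative sketch (not drawn from the guidelines themselves), the Python snippet below shows how one narrow facet of a principle, fairness across demographic groups, can be made measurable: it computes a classifier's demographic parity difference and flags the system when the disparity exceeds a tolerance. The metric choice, function name and threshold here are assumptions for illustration only.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Measure how far positive-prediction rates diverge across groups.

    A difference of 0 means every group receives positive predictions
    at the same rate; larger values indicate greater disparity.
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy example: loan-approval predictions for two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

disparity = demographic_parity_difference(preds, groups)
THRESHOLD = 0.2  # hypothetical tolerance, set during system design

print(f"Demographic parity difference: {disparity:.2f}")
if disparity > THRESHOLD:
    print("Flag for review: predictions diverge across groups.")
```

Turning a qualitative principle into a number like this is only one small step; designing for, implementing and tracking such measures across the whole AI system lifecycle is the harder task the guidelines target.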
Our Response
CSIRO’s Responsible Innovation (RI) Future Science Platform and Data61 Business Unit are collaborating to develop concrete, operationalised software engineering guidelines that help developers and technologists build AI systems responsibly.
By co-designing integrated guidelines with RI scientists, designers and data analysts, CSIRO is seeking to ensure that the AI systems it creates are not only trustworthy but also trusted by those who use and rely on them, across the entire AI system lifecycle.
Project Impact
This research, which we consider a first of its kind, seeks to fill a gap in responsible AI regulation by providing developers and technologists with concrete, reusable, operationalised design and development guidelines for achieving responsible AI. Applying these guidelines may improve the quality of AI systems’ outcomes while reducing the risk of ethical failures, such as data privacy breaches.
The project team draws on expertise in software engineering, machine learning, social science, human-machine interaction and user experience. This multidisciplinary approach is essential not only for advancing CSIRO’s future science and technology research in AI across its many applications, but also for ensuring that research delivers clear social benefits in those domains. It will also help establish CSIRO capability for embedding responsible AI into broader scientific discovery and technology development processes.
The guidelines will be made available to academic researchers and industry practitioners for use in achieving responsible AI. Doing so will help make responsible AI a competitive advantage for Australian industry while ensuring that Australia’s development of AI, in its many guises, is safe, secure and reliable.
Team
Qinghua Lu (Project Lead), Conrad Sanderson, Andreas Duenser, David Douglas and Georgina Ibarra
Advisory Group: Jon Whittle, Justine Lacey, Stefan Hajkowicz, Glenn Newnham, Cathy Robinson
References
L. Zhu, X. Xu, Q. Lu, G. Governatori, and J. Whittle, “AI and Ethics – Operationalising Responsible AI,” Humanity Driven AI: Productivity, Wellbeing, Sustainability and Partnership, 2021. https://arxiv.org/abs/2105.08867