Who bears responsibility when AI systems go wrong?
Hi David! Tell us about your role and involvement with the Responsible Innovation Future Science Platform (RI FSP).
I’m a Postdoctoral Fellow with the RI FSP, focusing on responsible innovation in robotics and artificial intelligence (AI). I have a PhD in Philosophy from the University of Queensland, and I’ve also been a postdoc at the University of Twente in the Netherlands working in responsible innovation.
My major focus during my postdoc with the RI FSP has been investigating how we should understand moral responsibility when AI systems are used to design surgical tools that are optimised for individual patients and produced using 3D printing.
My colleagues and I have just published a paper in AI and Ethics which investigates moral responsibility for product design in the context of 3D-printed surgical tools. In other words, if a surgical tool malfunctions because of a design or system failure and inflicts harm on a patient, who is to blame?
What motivated you to pursue a career in robotic technology and artificial intelligence?
I’ve been interested in computers since childhood. My first experience with AI was playing against chess programs on an old Microbee computer as a child and being fascinated by the idea that you could write a program that played games better than you could. As I grew older I became interested in the ethics of how computers and AI are used, which led me to work in responsible innovation.
Going back to your recent paper – it’s interesting to think about attributing moral responsibility to tools rather than individuals. Can you walk us through some of the thinking behind that?
Usually, if there is a defect or design flaw in a tool, moral responsibility for the fault can be attributed to the tool’s designer or manufacturer. But if an AI system takes the place of the human designer, it’s not obvious who (if anyone) bears moral responsibility for the tool it creates. If we consider the AI system to be morally responsible, we’re claiming that it is a moral agent capable of ethical judgement. This is a strong claim to make: we’re effectively saying that the AI is acting as a person, and that it should be treated as one. But this doesn’t accurately reflect the current or foreseeable abilities of AI systems.
On the other hand, if we claim that the AI system is not morally responsible, we now have the problem of deciding if anyone is morally responsible for the design of the tool. Potentially no one is responsible for the tool’s design, since no person was actively involved in designing it if it was designed by an AI. This creates the risk of what is called a ‘responsibility gap’, where the presence of AI creates doubt and uncertainty over who (if anyone) is morally responsible for a decision or an action.
Our research has led us to distinguish between causal responsibility and different types of moral responsibility. Causal responsibility does not necessarily imply moral responsibility: a storm is not morally responsible for causing a tree to fall down, for example. We argue that responsibility is distributed between the AI system and the people who developed it: the AI is causally responsible for the design it produces, while the AI’s developers are morally responsible for what it designs.
Earlier this year an Australian court ruled that an AI system could be named as an inventor on a patent application. Does this mean that AI, by default, would be morally responsible for its creation? Are moral responsibility and design ownership inextricably entwined?
That’s a good question. I think moral responsibility and owning a design do not necessarily have to overlap, as we can distinguish between moral and legal responsibility. Someone can be legally responsible for something without necessarily being morally responsible for it. So an AI system could be legally responsible and recognised as the creator of a design, while the moral responsibility for the design would rest with the designer of the system.
This area of medical innovation is obviously moving really quickly and these tools are already out there. How does it work when we embark on responsible innovation thinking once a technology is already out the door?
While the particular process for creating bespoke surgical tools using AI is still in development, it does draw on existing technologies that are already in use (such as 3D printing and computational design). Responsible innovation can still reflect on how existing technologies can be used in new applications and contexts. Using methods such as interviews with technology stakeholders, we can better understand how existing technologies are used, what unexpected effects they may have, and whether there are useful analogies we can apply to other technologies that are under development.
Where else can we think about the concept of distributed responsibility outside of this particular area?
The idea of sharing responsibility between an AI system and those responsible for creating it can be applied in any domain where an AI system is causally responsible for performing actions or making decisions.
What advice might you have for those thinking of working in social science, or more specifically, responsible innovation?
The best advice I can offer is to read as widely as you can, and seek out different perspectives on society, history, and ethics. Working in social science requires skills and knowledge in a variety of areas. For responsible innovation, it’s important to have a good knowledge of technology and its history, so you can see how people and societies have adapted to technological change and used technologies in unexpected ways. I think it’s also important to have a good understanding of moral philosophy, to see how ethical theories can evaluate different applications of technology, and to better understand how new technologies may be used ethically or unethically.
Read more about this collaborative project involving CSIRO’s Responsible Innovation and Active Integrated Matter Future Science Platforms.