Ethical AI – is it possible without a human-in-the-loop?

December 15th, 2020

CSIRO hosts a debate on the question of whether Artificial Intelligence could be ethical without a human-in-the-loop.

By Rebecca Coates

The use of artificial intelligence (AI) technology is increasing globally across multiple sectors and in an ever-widening range of applications. While AI may bring many benefits to society, its use also carries significant ethical considerations and risks.

Questions about the ethics of AI development and application are central to responsible innovation. As part of the recent MARS 2020 conference, organised by the Machine Learning and Artificial Intelligence FSP (MLAI FSP), a formal debate was held on the question of whether AI could be ethical without a human-in-the-loop.

The panel comprised CSIRO’s Dr Denis Bauer and Dr Rebecca Coates on the affirmative team, and Dr Bevan Koopman with the University of Toronto’s Associate Professor Anna Goldenberg on the negative team.

Dr Bauer leads CSIRO’s Transformational Bioinformatics Program in Health and Biosecurity and is an internationally recognised machine learning expert. Dr Coates is a sociologist and research scientist contributing to the Responsible Innovation FSP’s research on public perceptions of responsible innovation; she is also a human research ethics coordinator at CSIRO. Dr Koopman is a Senior Research Scientist at CSIRO’s Australian e-Health Research Centre, specialising in information retrieval and models for semantic search. Dr Goldenberg is an Associate Professor of computer science at the University of Toronto and a senior scientist in the Genetics and Genome Biology program at the SickKids Research Institute in Canada.

The debate was opened by Dr Bauer for the affirmative, who highlighted AI’s significant contributions to making our lives safer and our technologies more efficient. Examples included the use of AI for improved safety in passenger aircraft and for more accurate cancer detection in medical settings. Dr Bauer argued that a human-in-the-loop is important for guiding AI decisions, and that where necessary a human veto should be available to guard against machine failure or to handle complex situations with no clear ‘right’ or ‘wrong’ answer.

The opposition’s argument focused on AI’s capacity to make better and more rational decisions than humans. Dr Koopman and Dr Goldenberg pointed to the human brain’s inadequacies in making quick, unbiased decisions, and argued that the future capabilities of AI would remove the need for a human-in-the-loop for ethical AI. Dr Koopman closed by arguing that requiring humans in-the-loop for every AI development and technology use would create more ethical issues than it would resolve.

Arguing against this position, Dr Coates proposed that ethical AI is not only impossible without a human-in-the-loop, but also requires context-driven decision frameworks and risk assessment tools, used alongside AI Ethics Guidelines. She stated that:

“…AI applications cannot provide an ethical decision in cases where there is no clear right or wrong answer, without a human.”

She cited complex scenarios such as child safety as a case in point:

“…(decisions) where the life of a child may depend on the decision made by an AI tool of whether an intervention is required, need an expert human-in-the-loop to ensure this life-changing decision has the intended outcome of improved wellbeing.”

Dr Coates also highlighted that AI tools can help solve complex environmental management problems, for example on remote Indigenous lands, where Traditional Owner knowledge and scientists-in-the-loop are needed to ensure that decisions cause no cultural harm and improve ecosystem health.

“In these scenarios, without human checkpoints, the use of AI would be unethical and would definitely not generate the intended outcomes… AI needs the application of context-dependent human-led decision-making frameworks throughout the AI lifecycle. These might be risk assessments to decide when and in what circumstances AI is used, or to help guide decisions in the tools’ implementation…”

She further argued that contextual ethical decision-making frameworks were the missing link between ethical AI principles and the real-life deployment of ethical AI.

“Someone also needs to be held accountable for the outcomes of the decisions of AI, and we can’t lay that responsibility on a machine – it needs to be a human. Looking ahead to the future, the stakes are too high to risk a wrong turn. We need humans-in-the-loop for ethical AI.”

Overall, the debate highlighted the complexity of this contested area and the wide-ranging perspectives that discussions of ethical AI can bring.

The full MLAI FSP Ethics debate can be viewed below.
