The ethics of robotic systems, with Prof. Arkin

October 30th, 2018

Robots are here and they will have a greater and greater impact on society. We need to discuss the manifold ethical questions they pose to society.

Back in September, Prof. Arkin from Georgia Tech flew to Brisbane to attend D61+ Live, our annual science showcase event. As one of our keynote speakers, he explored several facets of the ethics of robotics and AI.

Below is a sneak-peek interview, conducted ahead of his visit and shared by the Algorithm Editorial Team.

Hi Professor Arkin, how’s it going?

Good. Looking forward to returning to Brisbane!

Can you give us a quick overview of your talk at the D61+ Live?

Sure – it will address a series of issues regarding the ethical aspects of human-robot interaction, from warfare to healthcare to deception to robotic nudging. I will describe our research in these areas and potential software architectures for addressing some of the many issues associated with the proliferation of robotics in everyday life. The talk will be accessible to the public and not overly technical, as the details can be found in our numerous publications available on the internet.

You’ve done a lot of research into robot deception. What kind of circumstances could arise in which a robot might be allowed or even designed to deceive a human?

Certainly in warfare. Sun Tzu stated in The Art of War that all warfare is deception. But we also explore its use for other-deception, i.e., deception for the benefit of the one being deceived. This can occur in education, in health or trauma care, even in everyday life. I'll touch on several examples and draw on a model from criminology to illustrate how other-deception might be done.

What kind of civil rules might robots have to learn and adhere to, if they’re to integrate successfully into human society, and what challenges might they face during this process?

All sorts. Driverless cars, for example, will have to know the rules of the road and the laws of the region in which they operate. Autonomous weapons should be provided with a respect for International Humanitarian Law. Even a childcare or eldercare robot will need to have some common sense regarding what is acceptable in its behaviour and what isn't.

Often, discussion around artificial intelligence runs up against misconceptions or misunderstandings. What's one example that keeps coming up for you, that you'd love to see refuted?

Good question. Perhaps the confusion of lethal autonomous weapons with science fiction. How many articles have you seen that show the Terminator when discussing these systems? No one wants or is talking about building such science fiction artefacts.

In general, a responsible AI scientist or roboticist must also take care in expectation management: not to overhype what can or will be done. The singularity (the hypothesised point when machine intelligence exceeds human intelligence) is one such overhyped threat, which to me poses no real danger at this time. There are far more pressing issues. I am glad that some folks are thinking about it, but there's no need to scare everyone.

Do you have any examples of depictions of robotics in science fiction that you think have been done particularly well?

I admire Asimov for bringing up the ethical quandaries associated with robotics in his Three Laws (there are actually four). But this was a literary device that illustrated what can go wrong, and perhaps too many people take these laws literally. There are other minor instances – such as a scene in Interstellar that talks about robot deception.

Any key takeaway messages or big points you’d like people to be thinking about?

Robots are here and they will have a greater and greater impact on society. We need to discuss the manifold ethical questions they pose to society. Technological advances are outpacing our ability to regulate and legislate. Everyone is a stakeholder in this discussion so my hope is that more people will get engaged. It’s our future we’re talking about.

Prof. Arkin is also part of our DARPA SubT Challenge team, as we compete for the US$2 million prize in a competition funded by the Defense Advanced Research Projects Agency (DARPA).