Learning from Human-Collaborator Actions

A project led by Yanran Jiang and supervised by Cecile Paris, Dana Kulic, David Howard, Jason Williams and Pavan Sikka

The primary objective of this project is to improve an operator’s situational awareness and increase robot transparency through well-designed communication between humans and robots. A data-driven model of human-robot teaming will be developed that accounts for the dynamics of human-robot interaction and the preferences of each team member. This project aims to: (1) derive a new joint ‘Team’ feature that captures dynamic situational awareness and represents the joint sensory information from humans and robots during human-robot interaction; (2) model team behaviour as the environment and human preferences change; (3) introduce a self-confidence module into an existing robot autonomy system, enabling the human and robot to question each other when conflicts occur.
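As a rough illustration of aim (1), a joint ‘Team’ feature could be formed by fusing the human’s and the robot’s observation vectors into a single representation. The sketch below is only an assumption of what such a fusion might look like: the normalisation, the fixed weighting, and the feature dimensionalities are all placeholders for components the project would learn from data.

```python
import numpy as np

def team_feature(human_obs: np.ndarray, robot_obs: np.ndarray,
                 human_weight: float = 0.5) -> np.ndarray:
    """Fuse human and robot observations into a joint 'Team' feature.

    Each vector is normalised, weighted, and concatenated. The fixed
    weighting is a stand-in for a learned fusion model (an assumption,
    not the project's actual design).
    """
    def normalise(x: np.ndarray) -> np.ndarray:
        norm = np.linalg.norm(x)
        return x / norm if norm > 0 else x

    h = human_weight * normalise(human_obs)
    r = (1.0 - human_weight) * normalise(robot_obs)
    return np.concatenate([h, r])

# Hypothetical example: a 3-D human input (e.g. a waypoint offset)
# combined with a 4-D robot state vector.
feat = team_feature(np.array([1.0, 0.0, 0.0]),
                    np.array([0.2, 0.1, 0.0, 0.9]))
print(feat.shape)  # (7,)
```

In practice, the weighting and normalisation would be replaced by a learned model that adapts to the environment and each team member’s preferences.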

To accomplish these objectives, the project focuses on collecting a human intervention dataset within a simulated environment. This dataset will consist of specific scenarios that trigger operator intervention, such as providing waypoints to guide the robot through narrow corridors. Observing how the robot responds to each intervention and assessing the final team performance yields valuable insight. The dataset will be used to train a model that learns to distinguish good from bad human interventions, supporting improved dynamic situational awareness and the development of effective human-robot teaming strategies.
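Distinguishing good from bad interventions can be framed as a binary classification problem over intervention features. The following sketch uses synthetic data and a minimal logistic-regression classifier purely for illustration; the feature definitions, labelling rule, and model choice are all assumptions, not the project’s dataset or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the intervention dataset: each row is a
# hypothetical feature vector describing an intervention (e.g. waypoint
# offset, corridor clearance, timing); each label marks whether the
# intervention improved team performance under an assumed rule.
n = 200
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Minimal logistic regression trained by batch gradient descent.
w = np.zeros(3)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= lr * (X.T @ (p - y)) / n            # gradient step on weights
    b -= lr * np.mean(p - y)                 # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

A real pipeline would replace the synthetic features with observations logged from the simulator and evaluate on held-out scenarios rather than training accuracy.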