We work across five interrelated research themes within the Collaborative Intelligence Future Science Platform:
- Collaborative processes and workflows: Where teamwork is required, having the best individuals does not guarantee success. Team members must complement each other, and the way they assign tasks and interact is crucial. The best players are not only masters of their own craft but also bring out the best in those around them. Intelligent collaboration requires new ways for human and machine agents to interact, drawing on the computer, behavioural and organisational sciences. We may also need to re-think or re-imagine existing workflows, to identify which aspects are best performed by humans, which by machines, and which jointly. We will explore and re-imagine how humans and machines work and learn together, while ensuring meaningful and rewarding work for people.
- Collaborative communication: Humans and machines need to communicate with each other. The machine will need to explain its thinking in a way the human(s) can engage with. This requires combining situation awareness and user needs with different modes of communication (e.g., visualisation, natural language) and potential constraints (such as cognitive load or bandwidth limits). The machine will also need to respond to communication from the human, taking into account the history of what has been communicated and achieved to that point. Central to this is developing mechanisms to coordinate the communication and build an ongoing communication context.
- Shared understanding: Shared understanding allows both humans and machines to understand more about each other and the world in which they operate, e.g., about current state, role, activities, intent, and knowledge. It exploits the benefits of fundamentally different human and machine understanding spaces and behaviour to develop robust, powerful joint situational awareness.
- Trusted technology: To achieve collaborative intelligence, humans need to trust the technology they are using, in terms of the safety, accuracy, reliability, robustness and confidentiality of the process and data. We must understand: what constitutes ‘trust’ in which situations; how to establish, measure and maintain trust in technology; and how technologies should be designed for different situations and contexts to enable this trust. In addition, psychology and behavioural economics tell us that humans, especially decision makers, distrust ‘black boxes’, leading to poor technology adoption in some domains, while in other cases humans over-trust systems they do not understand. We need to understand how to engender appropriate, calibrated trust.
- Human skills: Intelligently collaborating with machines might require specific skills. The FSP will consider which human skills and aptitudes most improve the tangible and intangible outcomes of collaboration, and how machines should be designed to best preserve human motivation. We will explore the implications for workforce diversity and talent pools of the skill changes brought about by collaborative intelligence.