Leveraging the strengths of both human security experts and AI systems for more effective cybersecurity operations

The challenge

Working in cybersecurity can be stressful: automated monitoring systems generate large numbers of alerts, all of which demand attention. Distinguishing and prioritising the most significant threats among these warnings can be overwhelming, and, in practice, human analysts often ignore much of what the artificially intelligent cybersecurity systems they work with present to them. Even when humans do use automated systems, human knowledge and judgement are still required, given the constantly changing nature of cybersecurity threats. There is, therefore, a need to design these systems to better support collaboration between human experts and artificially intelligent algorithms, so that novel threats can be identified and responses to the constant stream of alerts can be better prioritised.

Our response

This project looks at how to make cybersecurity operations more effective by leveraging the strengths of both human security experts and AI systems. Instead of taking a human-in-the-loop approach to decision-making, it focuses on AI-in-the-loop: using AI to augment and improve human performance.
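To make the distinction concrete, the sketch below shows one way an AI-in-the-loop triage step might look: the model does not decide autonomously, it ranks incoming alerts for the analyst and adjusts its ranking from the analyst's feedback. This is a minimal illustration under assumed names (`Alert`, `triage`, `record_feedback` and the weighting scheme are all hypothetical), not the actual system developed in the project.

```python
# Hypothetical AI-in-the-loop triage sketch. The model ranks alerts and
# surfaces only the top few to the human analyst; the analyst's feedback
# then nudges per-source weights, so human judgement shapes future rankings.
# All names and the weighting scheme are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Alert:
    source: str      # which monitoring system raised the alert
    severity: float  # raw severity reported by that system, in [0, 1]


def triage(alerts, weights, top_k=3):
    """Rank alerts by learned per-source weight times raw severity,
    surfacing only the top_k to the human analyst."""
    ranked = sorted(
        alerts,
        key=lambda a: weights.get(a.source, 1.0) * a.severity,
        reverse=True,
    )
    return ranked[:top_k]


def record_feedback(weights, alert, was_relevant, lr=0.2):
    """Analyst feedback nudges the weight for that alert's source up
    (relevant) or down (noise), with a floor so no source is silenced."""
    w = weights.get(alert.source, 1.0)
    weights[alert.source] = max(0.1, w + lr if was_relevant else w - lr)
```

A possible usage: the analyst sees only the top-ranked alerts, and each confirmation or dismissal feeds back into the ranking.

```python
alerts = [Alert("ids", 0.4), Alert("antivirus", 0.9), Alert("ids", 0.7)]
weights = {}
top = triage(alerts, weights, top_k=2)      # analyst reviews two alerts
record_feedback(weights, top[0], was_relevant=True)   # confirmed threat
record_feedback(weights, top[1], was_relevant=False)  # dismissed as noise
```

The design point is that the human remains the decision-maker; the AI's role is to reduce the volume the analyst must inspect and to adapt to their judgements.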
Cybersecurity is a vital issue for governments, organisations and individuals, so finding better ways to combine human and AI expertise will improve our ability to respond effectively to new and existing threats. The kind of human-AI collaborative surveillance system developed in this project can also inform many other domains that face similar problems, where human operators deal with alerts from automated systems. Examples include the maritime surveillance and astronomy anomaly detection projects within the CINTEL Future Science Platform.