Project 3

August 10th, 2023

CyberFusion: AI Agents Stepping in for Humans

Project location:

Marsfield (NSW)

Desirable Skills:

Strong background in mathematical/statistical modelling

Programming experience in Python, and knowledge of ML/AI algorithms and frameworks

Experience in the cybersecurity domain is a plus

Supervisory project team:

Ejaz Ahmed, Ronal Singh, Sarah Ali Siddiqui and Wei Kang

Contact person:

Senior Research Scientist, Data61

Project description:

Project Context and Problem Statement.

Computer systems are central to key national infrastructure across sectors such as finance, manufacturing, healthcare, transport, and defence. However, they are vulnerable to threats from agile cyber-adversaries, often backed by powerful nation-states, whose capabilities evolve rapidly and demand equally swift responses. This project centres on combining advances in artificial intelligence and autonomous reasoning with advanced security techniques to identify and rectify vulnerabilities, detect threats, attribute them to adversaries, and effectively mitigate and recover from attacks.
The lack of context in the cybersecurity domain poses significant issues that hinder the detection of sophisticated threats exploiting subtle tactics, leaving organizations susceptible to targeted attacks. A scarcity of human experts compounds the problem, causing delayed incident responses and subjective decisions influenced by factors such as skill gaps and stress. Furthermore, the evolving threat landscape and dataset limitations create inconsistencies in detection and response. To address these challenges, a collaborative and unified agent-based learning framework is vital. This project explores synergies between humans and AI to ensure accurate and explainable decision-making, bridging the gap between human expertise and AI capabilities.

Project Deliverables.

We will develop novel approaches that leverage artificial intelligence, informed by and working with human experts in security operations, to perform security tasks rapidly and at scale. This project initiates a paradigm-shifting cybersecurity approach in which AI-powered intelligent security agents collaborate with humans across the cyber-defence life cycle, collectively enhancing the security posture of complex computer systems over time. Intelligent security agents will follow a new paradigm of continuous, lifelong learning, both autonomously and in collaboration with human experts, supported by shared knowledge repositories built from the expertise of security domain experts and a comprehensive collection of AI tools. This project aligns with our new initiative with the Alan Turing Institute (ATI) in the UK, where ATI researchers can also contribute expertise gained from prior work in this field.


The deliverables/tasks of the project include:


(1) A unified and autonomous agent-based learning framework that engages human experts and their extensive knowledge base, alongside a comprehensive array of AI tools, to facilitate continuous and lifelong learning within the security domain. The framework will provide autonomous decision making at scale, with human-machine collaboration playing a crucial role: human experts validate and give feedback on the decisions made by the AI model. This collaborative approach ensures consistent and explainable decision-making, bridging the gap between human expertise and AI capabilities.

(2) Human-agent interaction to collaboratively learn to perform security tasks and to address situations of uncertainty or ambiguity, where human experts can help resolve such cases. This includes reasoning and learning from human feedback that incorporates domain knowledge to assist the agent in making decisions when it encounters unseen or new data, as illustrated in the sketch following this list.
Over time, the intelligent security agents will enhance their domain knowledge, becoming increasingly resilient and efficient in response to shifts in adversaries' operational methods. They will construct defence strategies and tactical plans amidst uncertainty, collaborate with humans in mutually reinforcing teamwork, and adapt to unfamiliar and novel attack scenarios.
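The decision loop behind deliverables (1) and (2) can be pictured as an uncertainty-aware triage agent that acts autonomously when it is confident and defers to a human expert otherwise, folding the expert's label back into its model. The snippet below is a minimal, illustrative sketch only, not a project artefact: the class SecurityAgent, the triage_alert and ask_human_expert names, the 0.8 confidence threshold, and the scikit-learn SGDClassifier are all assumptions chosen for clarity.

    # Minimal sketch of uncertainty-aware human-agent collaboration (illustrative only).
    # Assumes alerts arrive as numeric feature vectors; all names below are
    # hypothetical placeholders rather than components of this project.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    LABELS = np.array([0, 1])  # 0 = benign, 1 = malicious

    class SecurityAgent:
        def __init__(self, confidence_threshold=0.8):
            self.model = SGDClassifier(loss="log_loss")  # probabilistic linear model
            self.threshold = confidence_threshold
            self.initialised = False

        def update(self, features, label):
            """Incremental (lifelong) learning step from a labelled example."""
            self.model.partial_fit(features.reshape(1, -1), [label], classes=LABELS)
            self.initialised = True

        def triage_alert(self, features, ask_human_expert):
            """Decide autonomously when confident; otherwise defer to the human expert."""
            if self.initialised:
                proba = self.model.predict_proba(features.reshape(1, -1))[0]
                decision, confidence = int(np.argmax(proba)), float(np.max(proba))
                if confidence >= self.threshold:
                    return decision, "autonomous"
            # Uncertain or unseen case: escalate, then learn from the expert's feedback.
            label = ask_human_expert(features)
            self.update(features, label)
            return label, "human-validated"

    # Example usage with a stand-in expert callback.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        agent = SecurityAgent()
        expert = lambda x: int(x.sum() > 0)  # placeholder for a real analyst
        for _ in range(20):
            alert = rng.normal(size=8)
            decision, source = agent.triage_alert(alert, expert)
            print(decision, source)

In the envisaged framework the placeholder expert callback would correspond to an analyst workflow and the simple linear model would be replaced by the agents' broader AI toolset, but the escalate-then-learn pattern is the same.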

Project Outcomes.


1. A simulated Proof of Concept (POC) tool that interacts with the security domain environment and makes real-time decisions.
2. High-impact academic publications.
3. Industry engagement to test the deployment and applicability of the tool in real-world settings.

Candidates for Project 3 may also be assessed for Project 13.