The Human Centric AI Seminars Series

The Human Centric Security team is running a new monthly series, “The Human Centric AI Seminars”, which focuses on various research topics in human-centred AI.
For more information, contact Kristen Moore and Tina Wu.
Free and open to anyone interested in humans and AI.

Next seminar:

Date: Tuesday May 22nd at 4pm AEST

Title: Exploring the evolution of human-machine interaction from a “machine behaviour” perspective

Speaker: Anne-Marie Nussberger

Abstract: (How) Might intelligent machines influence the trajectory of cultural evolution? My seminar talk will map out a theoretical framework that we have developed to explore this conundrum and highlight how it guides our experimental work. Exemplifying our approach, I will present an experimental study in which we corroborated theoretical conjectures and anecdotal evidence for humans learning and preserving an adaptive strategy from machines that would be practically inconceivable for humans on their own. I look forward to discussing ways in which our methodological framework could be leveraged to address questions of interest to CSIRO’s Data61, for instance to inform the intentional design of intelligent machines.

Short bio: Anne-Marie Nussberger is a Postdoctoral Fellow at the Center for Humans and Machines, Max Planck Institute for Human Development in Berlin. Her research combines approaches from psychology, behavioural economics, and cultural evolution to understand how intelligent machines influence human beliefs, values, and behaviour.


If you have missed the last one:

  • When: April 4th 10 am AEDT

Speaker: Adam Perer https://perer.org/

Recording: Human Centered AI and Cybersecurity Seminar at CSIRO’s Data61-20240404_100123-Meeting Recording.mp4

Title: Auditing, Collaborating and Explaining AI to Support Human-AI Decision Making

Abstract: Human-AI collaboration to support decision-making can take many forms and involve various stakeholders. I will showcase how Human-AI collaboration has relevance to model developers, domain experts, and end-users.

AI models with high accuracy on test data can still produce systematic failures, such as harmful biases and safety issues, when deployed in the real world. First, I will introduce Zeno, an interactive platform that lets model developers audit, discover and validate behaviors across AI systems. I will share stories and examples of how Zeno led to discoveries on real-world models.

Then, I will focus on a particular high-stakes decision-making use case: an AI system to assist with sepsis, a life-threatening condition in which decisional uncertainty is common, treatment practices vary widely, and poor outcomes can occur even with optimal decisions. We developed a novel decision support interface, AI Clinician, that provides explanations alongside treatment recommendations. The patterns we extracted from studying human-AI collaboration in this high-stakes domain reveal novel barriers to adoption of treatment-focused AI tools and suggest ways to better support differing clinician perspectives.

Finally, I will talk about the impact of imperfect explainable AI on human-AI decision-making. Explainability techniques are rapidly being developed to improve decision-making across various cooperative work settings. However, explanations are imperfect by definition or implementation. I will describe how imperfect explanations influence humans’ decision-making behavior in a bird species identification task, including reliance on AI and human-AI team performance. I will then conclude about future research directions for improving human-AI collaboration in decision-making.
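As a rough illustration of the slice-based auditing idea behind tools like Zeno (this sketch does not use Zeno’s actual API), the snippet below compares a model’s accuracy on metadata-defined subgroups against its overall accuracy, which is how systematic failures hidden by an aggregate metric can surface; the example data and the “region” column are invented.

```python
# Illustrative sketch of slice-based model auditing (not Zeno's actual API):
# compare a model's accuracy on metadata-defined subgroups against its overall
# accuracy to surface systematic failures that aggregate metrics hide.
import pandas as pd

# Hypothetical audit table: one row per test example, with model output and metadata.
df = pd.DataFrame({
    "label":      [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "prediction": [1, 0, 0, 1, 0, 0, 0, 1, 1, 1],
    "region":     ["AU", "AU", "NZ", "NZ", "NZ", "UK", "UK", "AU", "UK", "NZ"],
})

df["correct"] = df["label"] == df["prediction"]
overall = df["correct"].mean()

# Accuracy per metadata slice, with the gap to the overall score; large negative
# gaps point at subgroups the model systematically gets wrong.
slices = df.groupby("region")["correct"].agg(["mean", "size"])
slices["gap"] = slices["mean"] - overall
print(slices.sort_values("gap"))
```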

Bio: Adam Perer is an Assistant Professor at Carnegie Mellon University, where he is a member of the Human-Computer Interaction Institute and co-leads the Data Interaction Group. His research integrates data visualization and machine learning techniques to create interactive visual systems that help users make sense of big data. Lately, his research has focused on human-centered data science and extracting insights from clinical data to support data-driven medicine. This work has been published at premier venues in visualization, human-computer interaction, and medical informatics. He holds a Ph.D. in Computer Science from the University of Maryland, College Park.


  • When: March 13th 1pm AEDT

Speaker: Thanh Thi Nguyen https://research.monash.edu/en/persons/thanh-thi-nguyen

Recording: https://csiro.webex.com/csiro/ldr.php?RCID=4fe42b81f59a99f451c7da7477662c83

Title: Detection of online harmful content using Llama 2 large language models

Abstract:  Detecting online harmful content on social media platforms has become a critical area of research due to the growing concerns about online safety, especially for vulnerable populations such as children and adolescents. This talk presents an approach to detection of online harmful content, specifically abusive language and sexual predatory behaviours, using the open-source pretrained Llama 2 7B-parameter model, released by Meta GenAI. We fine-tune the LLM using datasets with different sizes, imbalance degrees, and languages. This study’s outcomes indicate that the proposed method can be implemented in real-world applications (even with non-English languages) for flagging sexual predators, offensive or toxic content, hate speech, and discriminatory language in online discussions and comments to maintain respectful internet or digital communities. Furthermore, it can be employed for other problems such as sentiment analysis, spam and phishing detection, sorting legal documents, fake news detection, language identification, text-based product categorization, medical record analysis, and resume screening.
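For readers curious what such a setup might look like in code, the sketch below shows parameter-efficient (LoRA) fine-tuning of a Llama-2-style model for binary harmful-content classification using the Hugging Face transformers, datasets, and peft libraries; the CSV file names, hyperparameters, and two-class label scheme are illustrative assumptions rather than details taken from the talk.

```python
# Sketch of parameter-efficient fine-tuning of a Llama-2-style model for
# harmful-content classification (assumed setup; not the talk's exact recipe).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"    # gated model; requires access approval
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default

model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# LoRA keeps the 7B base frozen and trains only small adapter matrices.
model = get_peft_model(model, LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16))

# Hypothetical CSV files with "text" and "label" (0 = benign, 1 = harmful) columns.
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments("llama2-harm-clf", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,
)
trainer.train()
```

Evaluating across languages and imbalance levels, as the abstract describes, would require dataset-specific preprocessing that is not shown here.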

Bio: Dr. Nguyen is an associate professor at the AI for Law Enforcement and Community Safety (AiLECS) lab at Monash University. He was a Visiting Scholar with the Computer Science Department at Stanford University in 2015 and the Edge Computing Lab at Harvard University in 2019. He received a European-Pacific Partnership for ICT Expert Exchange Program Award from the European Commission in 2018, and an Australia–India Strategic Research Fund Early- and Mid-Career Fellowship from the Australian Academy of Science in 2020. He obtained a PhD in Mathematics and Statistics from Monash University, Australia, and has expertise in artificial intelligence, reinforcement learning, NLP, computer vision, and cybersecurity.


  • Date: Wednesday December 13th at 10am (AEDT)

Speaker: Dr. Julie Haney (Human-Centered Cybersecurity Program Lead at the National Institute of Standards and Technology)

Recording: https://csiro.webex.com/csiro/ldr.php?RCID=fc1b737c0c9b7ebfdf56fde96f22598b

Title: Tradeoffs, Transparency, and Shared Responsibility: Exploring Users’ Perceptions of Smart Home Security and Privacy in the U.S.  

Abstract: Smart home technology may expose adopters to increased risk to network security, information privacy, and physical safety. However, users of this technology may lack understanding of the privacy and security implications, and manufacturers often fail to provide transparency and configuration options. These shortfalls may result in little meaningful action to protect users’ security and privacy. This talk synthesizes insights from three NIST research studies focused on the security and privacy perceptions, concerns, actions, and challenges of U.S. smart home users and how those may sometimes vary across smart home device categories. Further discussion will offer suggestions on how these insights can inform government, manufacturer, standards, and consumer advocacy efforts to better meet the security and privacy needs of smart home users while improving overall security and privacy outcomes.

Bio:  Julie Haney is a computer scientist and lead for the Human-Centered Cybersecurity program at the U.S. National Institute of Standards and Technology (NIST). She conducts research about the human element of cybersecurity, including the usability and adoption of security solutions, work practices of security professionals, and people’s perceptions of privacy and security. She has been an invited speaker at numerous cybersecurity forums spanning industry, government, and academia, and has published peer-reviewed and invited articles in both research and practitioner publications. Before joining NIST in 2018, Julie spent over 20 years working in the U.S. Department of Defense as a cybersecurity professional and technical director. She has a PhD in Human-Centered Computing and an M.S. and B.S. in Computer Science.


  • Date: October 25 at 4-5pm AEDT

Speaker: Dr Stefan Sarkadi

Title: Understanding Deception in Hybrid Societies

Abstract: Deception is becoming an increasingly complex socio-cognitive phenomenon that is difficult to detect and reason about. My research tackles the integration of techniques from AI and deception analysis to generate narratives about multi-agent interactions in complex systems in order to help intelligence analysts perform inference to the best explanation. In this presentation I will talk about how to explain deception as a complex phenomenon in multi-agent systems by considering several aspects. The first is the modelling of cognitive factors underlying deception, such as Theory-of-Mind, as part of AI architectures and reasoning mechanisms. The second is how an evolutionary arms race in Theory-of-Mind emerges between artificial agents that deceive and those that aim to detect deception. The third regards humans’ perception of working with deceptive AI in future-of-work scenarios. Finally, I discuss how to use storytelling and argumentation to make sense of interactions between agents of hybrid societies, where humans and machines engage in social interactions.

Bio: Stefan Sarkadi is a Research Fellow at King’s College London funded by the Royal Academy of Engineering through the RAEng UK IC Postdoctoral Research Fellowship Scheme. He is also an Associate Researcher at Inria, where he is a member of the Wimmics and Hyper-Agents groups. Previously, Stefan was a 3iA Postdoctoral Research Fellow at Inria and 3iA Côte d’Azur, and a Postdoc in the HASP lab at King’s. Before that, he finished his PhD in AI at King’s, during which time he was also a visiting PhD researcher at the MIT Media Lab in the Scalable Cooperation Group. Stefan’s background is multidisciplinary, built on a PhD in Computer Science from King’s College London, a Master’s in Cognitive Science from the University of Edinburgh, and a Bachelor’s in Philosophy from the West University of Timisoara. Broadly, his areas of specialisation in AI include multi-agent systems, agent-based modelling, and knowledge representation and reasoning. As an inherently curious individual and a highly interdisciplinary researcher, Stefan aims to understand the complex reasoning and behaviour of intelligent agents (humans or machines) inside social environments like hybrid societies, where humans, machines, and everything in between interact. In particular, he is interested in the topics of deception and deception detection, self-explainable AI agents with Theory-of-Mind, and the ability of AI agents to build stories and narratives.


  • Date: September 27 at 1-2pm AEST

Speaker: Dr Patrick Scolyer-Gray

Recording: Not available

Title: Sociological Insights in the Age of AI: Unveiling Cybersecurity’s Blind Spots

Abstract: AI and cybersecurity have their roots in the physical sciences, and this is partly why valuable ideas and insights derived from the behavioural and social sciences are frequently overlooked. Sociological theories of modernity, for example, provide a wealth of useful analytical tools that are worthy of consideration. Theories of ‘Late Modernity’ help us better understand how contemporary society’s inherent focus on the individual, as well as an ever-increasing state of fluidity throughout one’s life trajectory, has further entrenched and augmented pre-existing ideas about personal responsibility. This talk will discuss some of the problems generated by the collision of technologies like generative AI with conditions of late modernity, as well as the ensuing implications for decision making and behaviour relating to cybersecurity. Three thematic clusters of contemporary cybersecurity challenges raised by AI (cyber indifference, cyber literacy, and security culture) will be traced to their relationships with today’s societal context of late modernity. The discussion will conclude with some recommendations for how the methods and principles of Human-Centric Cybersecurity (HCCS) can be employed to reduce cyber vulnerabilities at both the individual and societal levels.

Bio: Dr Scolyer-Gray is a cyber-sociologist who investigates what people think and do, and how and why they do it. By deploying a mixture of methods, concepts and theories drawn from both the behavioural and physical sciences, Dr Scolyer-Gray identifies the security implications of human behaviour and cognitive processes to develop solutions to the vulnerabilities and threats he finds. Having recently published his first monograph, Dr Patrick Scolyer-Gray has devoted his career primarily to the design, development and implementation of Human-Centric Cybersecurity (HCCS), a methodological framework that extends and improves upon the conventional techno-centric ‘layered’ approach to cybersecurity. In his current role, Patrick leads the HCCS consulting practice at the Expert Management Agency (EMA) 460degrees. Throughout his career, Patrick has maintained that the key to achieving a stronger security posture lies not in technology, but in social and cognitive elements, such as culture, decision-making and behaviour.


  • Date: August 30 at 1-2pm AEST

Recording: https://webcast.csiro.au/#/videos/f6e64f5e-3c70-4b8b-a14b-5f1d3bf014e1

Speaker: Assoc. Prof. Michael Bernstein https://hci.stanford.edu/msb/

Title: Generative Agents: Interactive Simulacra of Human Behavior

Abstract: Believable proxies of human attitudes and behavior can empower interactive applications ranging from immersive environments to social policy simulations to improved content moderation tools. I will illustrate this concept through generative agents: computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. Extending this concept is jury learning: an AI architecture intended for tasks that feature substantial disagreement between people, which resolves these disagreements explicitly through the metaphor of a jury: defining which people or groups, in what proportion, determine the classifier’s prediction.
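A core ingredient of such agents is a memory stream with scored retrieval. The sketch below is a heavily simplified, assumed version of that idea: memories carry a timestamp and an importance score, and retrieval ranks them by a mix of recency, importance, and a crude keyword-overlap stand-in for the embedding-based relevance a real system would use.

```python
# Simplified generative-agent "memory stream" (assumption-laden sketch, not the
# actual architecture from the talk): store observations with importance and
# retrieve the top-k by recency + importance + relevance to the current query.
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float                      # e.g. 1-10, could be scored by an LLM
    created: float = field(default_factory=time.time)

class MemoryStream:
    def __init__(self, decay: float = 0.995):
        self.memories: list[Memory] = []
        self.decay = decay                 # per-hour recency decay

    def add(self, text: str, importance: float) -> None:
        self.memories.append(Memory(text, importance))

    def retrieve(self, query: str, k: int = 3) -> list[Memory]:
        now = time.time()
        query_words = set(query.lower().split())

        def score(m: Memory) -> float:
            recency = self.decay ** ((now - m.created) / 3600)
            # Crude relevance proxy; a real system would use embedding similarity.
            overlap = len(query_words & set(m.text.lower().split()))
            return recency + m.importance / 10 + overlap / (len(query_words) or 1)

        return sorted(self.memories, key=score, reverse=True)[:k]

# The top-ranked memories would be inserted into the agent's next LLM prompt.
stream = MemoryStream()
stream.add("Isabella is planning a party at the cafe", 8)
stream.add("Ate breakfast at 7am", 2)
print([m.text for m in stream.retrieve("party at the cafe")])
```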

Bio: Michael Bernstein is an Associate Professor of Computer Science at Stanford University, where he is a Bass University Fellow and STMicroelectronics Faculty Scholar. His research in human-computer interaction focuses on the design of social computing systems. This research has won best paper awards at top conferences in human-computer interaction, including CHI, CSCW, ICWSM, and UIST, and has been reported in venues such as The New York Times, Science, Wired, and The Guardian. Michael has been recognized with an Alfred P. Sloan Fellowship, UIST Lasting Impact Award, and the Patrick J. McGovern Tech for Humanity Prize. He holds a bachelor’s degree in Symbolic Systems from Stanford University, as well as a master’s degree and a Ph.D. in Computer Science from MIT.


  • Date: August 9th 3pm AEST

Speaker: Prof. Wojciech Samek

Recording: Link

Title: Accessing the Hidden Space of Models with Explainable AI

Abstract: The emerging field of Explainable AI (XAI) aims to bring transparency to today’s powerful but opaque deep learning models. This talk will present Concept Relevance Propagation (CRP), a next-generation XAI technique which explains individual predictions in terms of localized and human-understandable concepts. Unlike related state-of-the-art methods, CRP not only identifies the relevant input dimensions (e.g., pixels in an image) but also provides deep insights into the model’s representation and reasoning process. This makes CRP a perfect tool for AI-supported knowledge discovery in the sciences. In the talk we will demonstrate on multiple datasets, model architectures and application domains that CRP-based analyses allow one to (1) gain insights into the representation and composition of concepts in the model as well as quantitatively investigate their role in prediction, (2) identify and counteract Clever Hans filters focusing on spurious correlations in the data, and (3) analyze whole concept subspaces and their contributions to fine-grained decision making. By lifting XAI to the concept level, CRP opens up a new way to analyze, debug and interact with ML models, which is of particular interest in safety-critical applications and the sciences.
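As a heavily simplified stand-in for the concept-conditional attribution idea (this is not the CRP algorithm itself), the sketch below treats a single channel of an intermediate convolutional layer as a “concept”, back-propagates only that channel’s response, and reads gradient × input at the pixels as a rough map of where the concept matters; the model, layer, and channel index are arbitrary choices.

```python
# Simplified concept-conditioned attribution (NOT the CRP implementation):
# back-propagate the response of one intermediate channel ("concept") and use
# gradient x input at the pixels as a coarse localization of that concept.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.randn(1, 3, 224, 224, requires_grad=True)    # stand-in input

acts = {}
def save_activation(_module, _inputs, output):
    acts["layer3"] = output                                 # keep the layer output

handle = model.layer3.register_forward_hook(save_activation)
model(image)

concept_channel = 42                     # hypothetical channel index = one "concept"
acts["layer3"][:, concept_channel].sum().backward()

heatmap = (image.grad * image.detach()).sum(dim=1)          # gradient x input per pixel
handle.remove()
print(heatmap.shape)                                        # torch.Size([1, 224, 224])
```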

Bio: Wojciech Samek is a professor in the EECS Department of the Technical University of Berlin and jointly heads the AI Department at Fraunhofer Heinrich Hertz Institute. He holds a Master’s degree in computer science and received his PhD with distinction from the Technical University of Berlin in 2014. Dr. Samek is associated faculty at BIFOLD – the Berlin Institute for the Foundations of Learning and Data, the ELLIS Unit Berlin, the DFG Research Unit DeSBi, and the DFG Graduate School BIOQIC, and a member of the scientific advisory board of IDEAS NCBR. Furthermore, he is a senior editor of IEEE TNNLS, an editorial board member of Pattern Recognition, and an elected member of the IEEE MLSP Technical Committee and Germany’s Platform for Artificial Intelligence. He is the recipient of multiple best paper awards, including the 2020 Pattern Recognition Best Paper Award and the 2022 Digital Signal Processing Best Paper Prize, and part of the expert group developing the ISO/IEC MPEG-17 NNC standard. He is the leading editor of the Springer book “Explainable AI: Interpreting, Explaining and Visualizing Deep Learning” (2019), co-editor of the open access Springer book “xxAI – Beyond Explainable AI” (2022), and organizer of various special sessions, workshops and tutorials on topics such as explainable AI, neural network compression, and federated learning. Dr. Samek has co-authored more than 150 peer-reviewed journal and conference papers, some of them listed as ESI Hot (top 0.1%) or Highly Cited Papers (top 1%).


  • Date and Time: Wed June 28th at 10-11 am

Recording: link

Speaker: Dr. Emilee Rader (Michigan State University)

Title: An Argument for Limiting Collection of Data

Abstract: “Notice and choice,” the dominant model for governing digital data collection and use, assumes that if proper transparency is provided, people will only use platforms whose data practices they agree with. In reality, however, widespread data collection and the use of machine learning enable inferences that are hard for people to anticipate. This talk describes my recent work on people’s beliefs and expectations about what data are collected about them and how those data are used, and makes an argument for how specific limits on data collection and inferences could help people make privacy choices that are more aligned with their preferences.

Bio: Dr. Emilee Rader is an Associate Professor in the Department of Media and Information at Michigan State University. She studies how people reason and make choices about data collection and inferences enabled by digital technologies, to better understand why people struggle to manage their privacy, and to discover new ways to help people gain more appropriate control over information about them. Dr. Rader earned her PhD from the University of Michigan School of Information and spent two years at Northwestern University in the Department of Communication Studies, where she was a recipient of the highly competitive Computing Innovation post-doctoral fellowship award from the Computing Research Association. She also has a professional Master’s degree from the Human Computer Interaction Institute at Carnegie Mellon University, and worked with an interdisciplinary team of researchers at Motorola Labs designing and evaluating applications for mobile technologies. Her work has been funded by several grants from the National Science Foundation, and she primarily publishes in usable privacy and security and human-computer interaction venues.


  • Date: June 7th at 1pm AEST

Title: Anomaly Detection for Computer Systems via Machine Learning

Speaker: Min Du (Principal Researcher at Palo Alto Networks)

Abstract: Computers are vulnerable to problems such as software bugs and attacks, while at the same time generating large amounts of data such as performance counter values and system logs. This inspires us to explore data-driven machine learning techniques for system anomaly detection. We seek to achieve online and real-time detection, and to tackle a major challenge in system data analysis: the data may contain zero or only a few positive labels. Previously, we proposed a general online anomaly detection approach for discrete computer sequence data, and further extended it to lifelong anomaly detection with the ability of incremental unlearning, as well as robust anomaly detection improved by differential privacy. These techniques have been successfully applied to various domains including system log anomaly detection, infected virtual machine detection, Android malware detection, machine learning backdoor attack detection and spam email detection. In this talk, I’ll mostly focus on the machine learning techniques we have explored, and briefly mention the tasks that have benefited, and could potentially benefit, from them.
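In the spirit of the next-event prediction approach for discrete log-key sequences mentioned above (an assumed simplification, not the exact method from the talk), the sketch below trains an LSTM to predict the next log key from a window of recent keys and flags an observed key as anomalous when it falls outside the model’s top-g predictions; the vocabulary size, window length, threshold g, and random training data are purely illustrative.

```python
# Next-log-key prediction for anomaly detection (illustrative sketch): a key
# outside the model's top-g predicted next keys is flagged as anomalous.
import torch
import torch.nn as nn

NUM_KEYS, WINDOW, G = 50, 10, 5   # log-key vocabulary, history length, top-g cutoff

class NextKeyLSTM(nn.Module):
    def __init__(self, num_keys: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_keys, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_keys)

    def forward(self, x):                 # x: (batch, WINDOW) of key ids
        h, _ = self.lstm(self.embed(x))
        return self.head(h[:, -1])        # logits over the next key

model = NextKeyLSTM(NUM_KEYS)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train on (window, next-key) pairs mined from normal executions (random here).
windows = torch.randint(0, NUM_KEYS, (256, WINDOW))
targets = torch.randint(0, NUM_KEYS, (256,))
for _ in range(5):
    opt.zero_grad()
    loss_fn(model(windows), targets).backward()
    opt.step()

def is_anomalous(window: torch.Tensor, observed_key: int) -> bool:
    """Flag the observed key if it is not among the top-g predicted next keys."""
    with torch.no_grad():
        topg = model(window.unsqueeze(0)).topk(G).indices.squeeze(0)
    return observed_key not in topg.tolist()

print(is_anomalous(windows[0], targets[0].item()))
```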

Bio: Min Du is now a Principal Researcher at Palo Alto Networks. Prior to that she was a postdoctoral researcher in the Department of Electrical Engineering and Computer Sciences at UC Berkeley, advised by Professor Dawn Song. She obtained her Ph.D. degree in Computer Science from the University of Utah, advised by Professor Feifei Li. She has done research related to various security and privacy aspects in the areas of machine learning, blockchain, and systems, with a special focus on anomaly detection. Her current research focuses on malware detection and malware data analysis using deep learning, as well as the usages of large language models in the cybersecurity domain. She was a pioneer to apply deep learning in the system log domain. Her work has been published in top computer science venues including ACM CCS, USENIX SECURITY, ICLR, and ACM SIGMOD. She received a best poster award at SoCC 2017, and a best paper award at ICPE 2022. She was also selected as one of the Rising Stars in EECS in 2019.


Speaker: Jiawei Zhou (Georgia Institute of Technology)

Title: Synthetic Lies: Misinformation in the Age of Large Language Models

Abstract: Over the past decade, large language models (LLMs) have rapidly evolved, demonstrating remarkable capabilities in generating texts that are almost indistinguishable from human-written content, and in some cases, even perceived to be more credible. As LLM tools like ChatGPT increasingly penetrate public discourse, it is critical to understand the potential risks posed by their scalability, effectiveness, and customisability. This talk presents our research on examining the characteristics of AI-generated misinformation compared to human-created misinformation. Our work also evaluates the applicability of two common misinformation solutions: detection models and assessment guidelines. By highlighting the challenges posed by AI-generated misinformation, I will conclude by discussing implications for the future development of intervention strategies, detection models, and responsible design of LLM technologies. 

Bio: Jiawei Zhou is a PhD student in Human-Centered Computing at the Georgia Institute of Technology, specializing in Human-AI Interaction and Social Computing. She adopts a theory-guided approach using quantitative and qualitative methods to understand the impacts of collective narratives (such as misinformation, hate speech, and counterspeech) and the role of generative AI in addressing or exacerbating related societal challenges. In particular, her work addresses real-world challenges such as harmful content, responsible use of language models, and social support for vulnerable groups. Her research has been published in top-tier computer science venues including ACM CHI, CSCW, UbiComp/IMWUT, and IEEE ICHI. She has received a paper award at CHI and has been supported by grants from NSF, CDC, and NIH.


  • Date: Wed March 15th at 10am-11am
Speaker: Serge Egelman (UC Berkeley)

Recording: link

Title: Taking Responsibility for Someone Else’s Code: Studying the Privacy Behaviors of Mobile Apps at Scale

Abstract: Modern software development has embraced the concept of “code reuse,” which is the practice of relying on third-party code to avoid “reinventing the wheel” (and rightly so). While this practice saves developers time and effort, it also creates liabilities: the resulting app may behave in ways that the app developer does not anticipate. This can cause very serious issues for privacy compliance: while an app developer did not write all of the code in their app, they are nonetheless responsible for it. In this talk, I will present research that my group has conducted to automatically examine the privacy behaviors of mobile apps vis-à-vis their compliance with privacy regulations. Using analysis tools that we developed and commercialized (as AppCensus, Inc.), we have performed dynamic analysis on hundreds of thousands of the most popular Android apps to examine what data they access, with whom they share it, and how these practices comport with various privacy regulations, app privacy policies, and platform policies. We find that while potential violations abound, many of the issues appear to be due to the (mis)use of third-party SDKs (i.e., supply chain problems). I will provide an account of the most common types of privacy and security issues that we observe and how app developers can better identify these issues prior to releasing their apps.
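As a toy illustration of the kind of check that dynamic analysis enables (this is not the AppCensus pipeline), the sketch below scans records of an app’s captured network transmissions for known personal and device identifiers and reports which destination received them; all package names, domains, field names, and identifier values are hypothetical.

```python
# Toy privacy audit over captured transmissions (illustrative only): flag any
# payload that contains a known personal/device identifier and record where it went.
PERSONAL_IDENTIFIERS = {            # hypothetical identifier values for one test device
    "android_id": "9774d56d682e549c",
    "ad_id": "38400000-8cf0-11bd-b23e-10b96e40000d",
    "email": "user@example.com",
}

transmissions = [                   # (app package, destination domain, payload)
    ("com.example.game", "ads.tracker.example", "aid=38400000-8cf0-11bd-b23e-10b96e40000d"),
    ("com.example.game", "api.example.com", "level=3&score=1200"),
]

def audit(records):
    """Yield (app, destination, identifier type) for every identifier found in a payload."""
    for app, destination, payload in records:
        for id_type, value in PERSONAL_IDENTIFIERS.items():
            if value in payload:
                yield app, destination, id_type

for finding in audit(transmissions):
    print(finding)                  # ('com.example.game', 'ads.tracker.example', 'ad_id')
```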

Bio: Serge Egelman is the Research Director of the Usable Security and Privacy group at the International Computer Science Institute (ICSI), which is an independent research institute affiliated with the University of California, Berkeley. He is also CTO and co-founder of AppCensus, Inc., which is a startup that is commercializing his research by performing on-demand privacy analysis of mobile apps for developers, regulators, and watchdog groups. He conducts research to help people make more informed online privacy and security decisions, and is generally interested in consumer protection. This has included improvements to web browser security warnings, authentication on social networking websites, and most recently, privacy on mobile devices. Seven of his research publications have received awards at the ACM CHI conference, which is the top venue for human-computer interaction research; his research on privacy on mobile platforms has received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, the USENIX Security Distinguished Paper Award, and the Spanish Data Protection Authority’s Emilio Aced Personal Data Protection Research Award. His research has been cited in numerous lawsuits and regulatory actions, as well as featured in the New York Times, Washington Post, Wall Street Journal, Wired, CNET, NBC, and CBS. He received his PhD from Carnegie Mellon University and has previously performed research at Xerox PARC, Microsoft, and NIST.


  • Date: Wed Feb 15th at 1pm
 
Speaker: Vassilis Kostakos, Professor of Human-Computer Interaction at the University of Melbourne. https://people.eng.unimelb.edu.au/vkostakos/

 
Title: What smartphones can tell us about human behaviour
 
Abstract: In this talk I will present our group’s research on studying human behaviour using smartphones. We have developed a platform (AWARE Light) that makes it easy to collect behavioural data from smartphones. I will give an overview of how we conduct our research, and give numerous examples of the kinds of insight that we can obtain. Smartphones and other personal technologies have the potential to help us understand the nuances of human behaviour systematically and at a large scale.

Bio: Vassilis Kostakos is a professor of computer science at the University of Melbourne in Australia. He works on ubiquitous computing, human-computer interaction, social computing, and the Internet of Things. His research focuses on how to use sensor data to understand people’s behaviour, and how to develop everyday technologies that better understand and better respond to humans.

  • Date: 10-11am Nov 30 AEDT
 
Speaker: Professor Debi Ashenden
 
Title: Exploring the Socio-Technical Issues of MLOps

Abstract: New technology has the potential to deliver a step change for defence and national security, but it comes with threats as well as opportunities. The successful delivery of such technology will depend as much on the socio-technical issues around the design, development, and deployment of software as it will on the technology itself. Modern software development processes such as DevSecOps take advantage of tools and processes that facilitate agile ways of working, continuous integration and delivery, and the development of secure code. But to be effective, DevSecOps also requires trust and a change in culture. This talk charts previous research that has explored the social practice of software developers to better understand how fracture points in their relationships with cyber security practitioners can impact security risk. When a DevSecOps project succeeds, it is because working relationships between security and software development activities are underpinned by mutual trust. When trust is lacking the process suffers: software developers and security practitioners don’t engage early enough, insufficient time is available to implement security, and an incomplete view is formed of security risks. MLOps adds to the complexity of security issues in DevSecOps as data scientists interact with the software development process. This talk outlines research that aims to better understand the social practice of data scientists in the MLOps process. Understanding these social practices will help us identify potential vulnerabilities in MLOps that could lead to an increase in cyber security risk.

Bio: Debi holds the DST Group-University of Adelaide Chair in Cybersecurity. In addition, she is Professor of Cyber Security at the University of Portsmouth and a visiting Professor at Royal Holloway, University of London. Debi’s research interests are in the social and behavioural aspects of cybersecurity – particularly in finding ways of ‘patching with people’ as well as technology. She is currently researching transdisciplinary approaches to modelling complex warfighting, how to fuse behavioural science with cyber deception, and the socio-technical aspects of designing complex military systems. Debi was previously Head of the Centre for Cyber Security at Cranfield University at the Defence Academy of the UK and was a member of the UK MOD’s Defence Science Expert Committee. She has worked extensively across the public and private sector for organisations such as UK MOD, GCHQ, Cabinet Office, Home Office, Euroclear, Prudential, Barclaycard, Reuters and Close Bros. She has had a number of articles on cyber security published, presented at a range of conferences and co-authored a book for Butterworth Heinemann, Risk Management for Computer Security: Protecting Your Network & Information Assets.


  • Date: Wednesday 2 Nov 2022 1.00pm AEDT

Speaker: Feng Xia (Federation University Australia)

Recording: https://csiro.webex.com/csiro/ldr.php?RCID=57f38a3b06b7f3ab1f8f18993faedb3d

Title: Towards Trustworthy Graph Learning

Abstract: Graphs (or networks) are widely used as a popular representation of the network structure of connected data. Graph data can be found in a broad spectrum of domains such as social systems, ecosystems, biological networks, knowledge graphs, and information systems. With the continuous penetration of artificial intelligence technologies, graph learning (i.e., machine learning on graphs or graph machine learning) is gaining huge attention from both researchers and practitioners. Graph learning proves effective for many tasks in real-world applications, such as regression, classification, clustering, matching, and ranking. Over the past few years, a lot of graph learning models and algorithms (e.g., graph neural networks, network embedding, network representation learning, etc.) have been developed. Nevertheless, the field of graph learning is facing many challenges deriving from, e.g., fundamental theory and models, algorithms and methods, supporting tools and platforms, and real-world deployment and engineering. This talk will give an overview of the state of the art of trustworthy graph learning, paying special attention to relevant trends and challenges. Some recent advancements in this field will be showcased.
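As a minimal illustration of one building block behind much of this work, the sketch below implements a two-layer graph convolutional network for node classification in plain PyTorch: node features are propagated over a symmetrically normalised adjacency matrix and mixed by learned linear maps. The toy graph, features, and labels are random placeholders, and real pipelines would typically use a dedicated graph learning library.

```python
# Two-layer GCN for node classification (toy sketch with random data).
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalise_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """D^{-1/2} (A + I) D^{-1/2}, the propagation matrix used by GCNs."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class GCN(nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, num_classes)

    def forward(self, x, a_norm):
        x = F.relu(a_norm @ self.w1(x))   # one round of neighbourhood aggregation
        return a_norm @ self.w2(x)        # per-node class logits

# Toy graph: 6 nodes, 4-dimensional features, 2 classes.
adj = torch.tensor([[0, 1, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0, 0],
                    [1, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1],
                    [0, 0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 1, 0]], dtype=torch.float)
x = torch.randn(6, 4)
labels = torch.tensor([0, 0, 0, 1, 1, 1])

model = GCN(4, 16, 2)
logits = model(x, normalise_adjacency(adj))
print(F.cross_entropy(logits, labels).item())
```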

Bio: Dr. Feng Xia is currently an Associate Professor in the Institute of Innovation, Science and Sustainability, Federation University Australia. He was a Full Professor and Associate Dean of Research in the School of Software, Dalian University of Technology (DUT), China. He is/was on the Editorial Boards of over 10 int’l journals. He has served as the General Chair, Program Committee Chair, Workshop Chair, or Publicity Chair of over 30 int’l conferences and workshops, and Program Committee Member of over 90 conferences. Dr. Xia has authored/co-authored two books, over 300 scientific papers in int’l journals and conferences (such as IEEE TAI, TKDE, TNNLS, TC, TMC, TPDS, TBD, TCSS, TNSE, TETCI, TETC, THMS, TVT, TITS, TASE, ACM TKDD, TIST, TWEB, TOMM, WWW, AAAI, SIGIR, CIKM, JCDL, EMNLP, and INFOCOM) and 3 book chapters. He was recognized as a Highly Cited Researcher (2019) by Clarivate Analytics (Web of Science). Dr. Xia received a number of prestigious awards, including IEEE DSS 2021 Best Paper Award, IEEE Vehicular Technology Society 2020 Best Land Transportation Paper Award, ACM/IEEE JCDL 2020 The Vannevar Bush Best Paper Honorable Mention, IEEE CSDE 2020 Best Paper Award, WWW 2017 Best Demo Award, IEEE DataCom 2017 Best Paper Award, IEEE UIC 2013 Best Paper Award, and IEEE Access Outstanding Associate Editor. His research interests include data science, artificial intelligence, graph learning, anomaly detection, and systems engineering. He is a Senior Member of IEEE and ACM, and an ACM Distinguished Speaker.


  • Date: Wed Oct 26th at 4pm

Recording: https://csiro.webex.com/webappng/sites/csiro/recording/7703df7a3719103bae79005056811b40/playback

Speaker: Yisroel Mirsky

Title: The Threat Horizon of Deepfakes

Abstract: Deep learning has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative purposes. Since 2018, deep learning has been used to re-enact people in ‘deepfakes’ not only for entertainment but for revenge, fraud, and espionage as well. With rapid advances in generative AI and the ease of access to the technology, we wonder what is on the horizon regarding malicious deepfakes: what will attacks look like in the near future and how will we prevent them? In this talk, we will discuss different types of deepfakes (e.g., human face/voice, medical records, …), how they are made and detected, and their caveats. We will also look into an imminent threat which has recently emerged and give insight into the matter.

Bio: Yisroel Mirsky is a tenure-track lecturer and Zuckerman Faculty Scholar in the Department of Software and Information Systems Engineering at Ben-Gurion University. He received his Ph.D. from BGU in 2018 and was a postdoctoral fellow for two years at the Georgia Institute of Technology. He currently heads the Offensive AI research lab at BGU (https://ymirsky.github.io/Offensive.AI.Lab/). His main research interests include deepfakes, adversarial machine learning, anomaly detection, and intrusion detection. Dr. Mirsky has published his work in some of the best security venues: USENIX, CCS, NDSS, Euro S&P, Black Hat, DEF CON, RSA, CSF, AISec, etc. His research has also been featured in many well-known media outlets: Popular Science, Scientific American, Wired, The Wall Street Journal, Forbes, and BBC. Some of his works include the exposure of vulnerabilities in the US 911 emergency services and research into the threat of deepfakes in medical scans, both featured in The Washington Post.


  • Date: Wednesday 10 August 2022 at 10-11am AEST

Speaker: Dr. Elissa M. Redmiles

Title: Learning from the People: From Normative to Descriptive Solutions to Problems in Security, Privacy & Machine Learning

Recording: https://csiro.webex.com/recordingservice/sites/csiro/recording/9d71db0afa6d103a97ff00505681cdcd/playback

Abstract: A variety of experts — computer scientists, policy makers, judges — constantly make decisions about best practices for computational systems. They decide which features are fair to use in a machine learning classifier predicting whether someone will commit a crime, and which security behaviors to recommend and require from end-users. Yet, the best decision is not always clear. Studies have shown that experts often disagree with each other, and, perhaps more importantly, with the people for whom they are making these decisions: the users. This raises a question: Is it possible to learn best-practices directly from the users? The field of moral philosophy suggests yes, through the process of descriptive decision-making, in which we observe people’s preferences from which to infer best practice rather than using experts’ normative (prescriptive) determinations of best practice. In this talk, I will explore the benefits and challenges of applying such a descriptive approach to making computationally-relevant decisions regarding: (i) selecting security prompts for an online system; (ii) determining which features to include in a classifier for jail sentencing; (iii) defining standards for ethical virtual reality content.

Bio: Dr. Elissa M. Redmiles is a faculty member and research group leader at the Max Planck Institute for Software Systems and a Visiting Scholar at the Berkman Klein Center for Internet & Society at Harvard University. She uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. Her work has been recognized with multiple paper awards at USENIX Security, ACM CCS and ACM CHI and has been featured in popular press publications such as the New York Times, Wall Street Journal, Scientific American, Rolling Stone, Wired, Business Insider, and CNET.


  • Date: June 15th at 4 pm

Speaker: Professor Phil Morgan; Director of the Cardiff University Human Factors Excellence (HuFEx) Research Group; Director of Research – Cardiff University Centre for AI, Robotics and Human-Machine Systems, School of Psychology, Cardiff University, Cardiff, UK; Technical Lead – Airbus Accelerator in Human-Centric Cyber Security

Title: A Human Factors Approach to Optimising Humans in Cyber Security

Recording https://csiro.webex.com/recordingservice/sites/csiro/recording/d761992fce9e103abbf6005056818c0c/playback

Abstract: There is abundant evidence that suboptimal human thinking and behaviour is linked to ‘successful’ cyber security incidents. In fact, people are often described as the weakest link in cyber security. This rather damning evidence might suggest that software and hardware solutions are the only way to combat cyber attackers and their methods but, and perhaps counterintuitively, I will argue against this technical-only approach and for socio-technical solutions. Through a psychological and Human Factors data-driven understanding of our cyber security awareness, knowledge, attitudes, and motivations – both within academia and industry – my teams and I have identified most of the factors that can lead to cyber risky behaviours as well as a range of interventions to combat them. During my talk, I will first give an overview of key human cyber vulnerabilities exploited by cyber attackers – from weapons of influence to weaknesses in our understanding of cyber security language and communication. I will then give an overview of some of our gold standard cyber vulnerability and strengths tools from which we have developed metrics, personas and other interventions to effectively combat human cyber risky behaviours. My proposition is that humans can actually be the strongest line of defence in cyber security, especially when there is an optimal symbiosis with software (and hardware) solutions developed ‘with’ and ‘for’ us rather than simply with us in mind.

Bio: Prof Phillip Morgan BSc DipRes PhD PGCHE FHEA AFALT AFBPS holds a Personal Chair in Human Factors and Cognitive Science within the School of Psychology at Cardiff University. He is Director of the Human Factors Excellence Research Group (HuFEx) and Director of Research for the Centre for AI, Robotics and Human-Machine Systems (IROHMS). He is an international expert in Cyberpsychology, intelligent mobility (with a focus on autonomous vehicles), HMI design, HCI, and interruption/distraction effects. He has been awarded >£20M funding (>£10M direct) across >30 funded grants from e.g., Airbus, ERDF, EPSRC, ESRC, HSSRC IUK, DHC-STC, GoS, SOS Alarm, and the Wellcome Trust, and has published >100 major papers and reports. Phil works on large-scale projects funded by Airbus, where he is seconded (since 2019), part-time, as Technical Lead in Cyber Psychology and Human Factors and Head of the Airbus Accelerator in Human-Centric Cyber Security (H2CS). Prof Morgan is UK PI on an ESRC-JST project (2020-24) (with collaborators at e.g., the Universities of Kyoto and Osaka) on the Rule of Law in the Age of AI and autonomous systems, with a key focus on blame assignment and trust in autonomous vehicles with XAI and HRI as core interventions. He is currently working on two HSSRC (UK MOD / Dstl / BAE Systems) projects examining HF guidelines for autonomous systems and robots (with QinetiQ & BMT Defence) and complex sociotechnical systems (with Trimetis). He also works on two projects funded by the NCSC focussed on interruption effects on cyber security behaviours. Prof Morgan has recently completed a project on XAI funded by Airbus. Together with Prof Dylan M Jones OBE, Prof Morgan oversees the IROHMS Simulation Laboratory based within the School of Psychology at Cardiff University, which currently comprises five state-of-the-art zones: immersive dome; transport simulator; cognitive robotics; VR/AR; and a command and control centre (under development).


  • Date: April 27th at 1pm AEST

Speaker: Ganna Pogrebna

Recording: NA

Title: The Behavioural Data Science Approach to Cybersecurity

Abstract: Recent advances in artificial intelligence allow us to design new “hybrid” models merging behavioural science and machine learning algorithms. In this talk, I will showcase several recent projects which use a hybrid methodology of behavioural data science to (i) understand people’s risk taking and risk perceptions in cyber spaces; (ii) segment and detect adversarial behaviour; and (iii) predict potential targets. The talk will explain the mechanism and potential behind such models using several use cases. It will also demonstrate additional insights which such models deliver beyond traditional machine learning and usual behavioural science methods. Specifically, the talk will show how a behavioural data science approach can generate more accurate predictions of human behaviour and help to deliver better organizational outcomes. The talk will also explain how hybrid modelling can help in the identification of cybercriminals as well as in using behavioural segmentation to create cybersecurity social marketing campaigns for the general public.

Bio: Ganna Pogrebna is Executive Director of the Cyber Security and Data Science Institute at Charles Sturt University and Honorary Professor of Business Analytics and Data Science at the University of Sydney. She is also an ESRC-Turing Fellow and Lead for Behavioural Data Science at the Alan Turing Institute in the UK. Her research is on behavioural change for digital security. Ganna’s work has been funded by ARC, ONI, NCSC, ESRC, EPSRC, the Leverhulme Trust and industry. She is the author of a book for practitioners on cyber security as a behavioural science – “Navigating New Cyber Risks” – as well as a blogger at https://www.cyberbitsetc.org/. She has published extensively on human behaviour and cyber security in peer-refereed journals. Her risk-tolerance scale for digital security (CyberDoSpeRT) received the British Academy of Management award. She is also the winner of the UK Women in Technology Award for her contributions to cyber security research and practice.


  • Date: March 23 at 10-11 am AEDT

Speaker: Dr Frank L. Greitzer

Title: Adventures in Insider Threat Predictive Analytics

Slides: Greitzer_CSIRO-Data61 Seminar FINAL_23March2022

Recording: https://csiro.webex.com/recordingservice/sites/csiro/recording/1597cf5c8c62103a9ffd00505681094b/playback

Abstract: Insiders who destroy, steal, or leak sensitive information pose a serious threat to enterprises. An insider threat is an individual with authorized access to an organization’s systems, data, or assets, and who intentionally (or unintentionally) misuses that access in ways that harm (or risk) these assets. Recent industry surveys reveal that as much as 50% of reported incidents were considered accidental and nearly two-thirds were identified as malicious insider attacks. Along with a consistent rise in insider crimes, the costs of monitoring, incident response, remediation and other associated activities continue to increase. Insider risk assessment is a wicked/hard problem, and the research and operational communities are coming to realize that it is a human problem. Spanning nearly two decades, a strong theme of my research has been to develop insider threat models that integrate relevant human behavioral and psychological factors with technical factors associated with host and network cybersecurity monitoring systems. This lecture will discuss my research on sociotechnical factors for insider threat anticipation and the continuing challenges to identify, integrate, and validate cyber and behavioral indicators of insider threat risk into effective detection and mitigation approaches. I will describe a comprehensive ontology of sociotechnical and organizational factors for insider threat (SOFIT) that can provide a foundation for more effective, whole-person predictive analytic approaches seeking to get “left of boom.” I will review some of my research aiming to inform this ontology and to support the development of more sophisticated, comprehensive, AI-based models for insider threat assessment.
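Purely as an illustration of what indicator integration can look like at its very simplest (SOFIT itself is an ontology, not this formula), the sketch below combines hypothetical behavioural and technical indicator strengths into a single risk score via a weighted sum; the indicator names and weights are invented.

```python
# Toy insider-risk scoring by weighted aggregation of indicators (hypothetical
# indicators and weights; real approaches are far richer, e.g. ontology-driven
# and probabilistic).
INDICATOR_WEIGHTS = {
    "after_hours_access":      0.15,
    "large_data_transfers":    0.30,
    "policy_violations":       0.25,
    "disgruntlement_reported": 0.30,
}

def risk_score(observations: dict[str, float]) -> float:
    """Weighted sum of indicator strengths (each clipped to [0, 1]) -> score in [0, 1]."""
    return sum(INDICATOR_WEIGHTS[name] * min(max(value, 0.0), 1.0)
               for name, value in observations.items()
               if name in INDICATOR_WEIGHTS)

print(risk_score({"after_hours_access": 1.0, "large_data_transfers": 0.6}))   # 0.33
```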

Bio: Frank L. Greitzer, Ph.D., is owner and Principal Scientist of PsyberAnalytix, which performs consulting in applied cognitive and behavioral systems engineering and analysis. Dr. Greitzer holds a PhD degree in Mathematical Psychology with specialization in memory and cognition and a BS degree in Mathematics.  His current research interests are in characterizing human behavioral factors to help identify and mitigate insider threats to IT enterprises. He led a multidisciplinary group of researchers to develop a comprehensive insider threat ontology, Sociotechnical and Organizational Factors for Insider Threat (SOFIT). His most recent consulting work has helped organizations apply this ontology in their operational insider threat mitigation programs. Prior to founding PsyberAnalytix in 2012, Dr. Greitzer served for twenty years as a Chief Scientist at the U.S. Department of Energy’s Pacific Northwest National Laboratory, conducting R&D in human-information analysis and in advanced, interactive training technologies; and leading the R&D focus area of Cognitive Informatics, which addresses human factors and social/behavioral science challenges through modeling and advanced engineering/computing approaches. His experience also includes university/academic positions, research in human factors psychology for the U.S. Department of Defense, and human factors/artificial intelligence R&D in private industry. Dr. Greitzer is a member of the Intelligence and National Security Alliance (INSA) Insider Threat Subcommittee and is currently Editor-in-Chief of the journal, Counter-Insider Threat Research and Practice.

Past Seminars