The Human Centric AI Seminars Series

The Human Centric Security team is running a new monthly series, “The Human Centric AI Seminars”, focusing on research topics in human-centered AI.
For more info contact: Kristen Moore and Tina Wu
Attendance is free and open to anyone interested in humans and AI.

Next seminar:

Date: Wednesday 10 August 2022, 10–11 am AEST

Speaker: Dr. Elissa M. Redmiles

Title: Learning from the People: From Normative to Descriptive Solutions to Problems in Security, Privacy & Machine Learning

Abstract: A variety of experts — computer scientists, policy makers, judges — constantly make decisions about best practices for computational systems. They decide which features are fair to use in a machine learning classifier predicting whether someone will commit a crime, and which security behaviors to recommend and require from end-users. Yet, the best decision is not always clear. Studies have shown that experts often disagree with each other, and, perhaps more importantly, with the people for whom they are making these decisions: the users. This raises a question: Is it possible to learn best practices directly from the users? The field of moral philosophy suggests yes, through the process of descriptive decision-making, in which we infer best practice from observations of people’s preferences rather than from experts’ normative (prescriptive) determinations of best practice. In this talk, I will explore the benefits and challenges of applying such a descriptive approach to making computationally relevant decisions regarding: (i) selecting security prompts for an online system; (ii) determining which features to include in a classifier for jail sentencing; (iii) defining standards for ethical virtual reality content.

Bio: Dr. Elissa M. Redmiles is a faculty member and research group leader at the Max Planck Institute for Software Systems and a Visiting Scholar at the Berkman Klein Center for Internet & Society at Harvard University. She uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. Her work has been recognized with multiple paper awards at USENIX Security, ACM CCS and ACM CHI and has been featured in popular press publications such as the New York Times, Wall Street Journal, Scientific American, Rolling Stone, Wired, Business Insider, and CNET.

In case you missed the previous seminars:

  • Date: June 15th at 4 pm

Speaker: Professor Phil Morgan; Director of the Cardiff University Human Factors Excellence (HuFEx) Research Group; Director of Research – Cardiff University Centre for AI, Robotics and Human-Machine Systems, School of Psychology, Cardiff University, Cardiff, UK; Technical Lead – Airbus Accelerator in Human-Centric Cyber Security

Title: A Human Factors Approach to Optimising Humans in Cyber Security

Link to the recording: https://csiro.webex.com/recordingservice/sites/csiro/recording/d761992fce9e103abbf6005056818c0c/playback

Abstract: There is abundant evidence that suboptimal human thinking and behaviour is linked to ‘successful’ cyber security incidents. In fact, people are often described as the weakest link in cyber security. This rather damning evidence might suggest that software and hardware solutions are the only way to combat cyber attackers and their methods but, perhaps counterintuitively, I will argue against this technical-only approach and for socio-technical solutions. Through a psychological and Human Factors data-driven understanding of our cyber security awareness, knowledge, attitudes, and motivations – both within academia and industry – my teams and I have identified most of the factors that can lead to cyber risky behaviours, as well as a range of interventions to combat them. During my talk, I will first give an overview of key human cyber vulnerabilities exploited by cyber attackers – from weapons of influence to weaknesses in our understanding of cyber security language and communication. I will then give an overview of some of our gold-standard cyber vulnerability and strengths tools, from which we have developed metrics, personas and other interventions to effectively combat human cyber risky behaviours. My proposition is that humans can actually be the strongest line of defence in cyber security, especially when there is an optimal symbiosis with software (and hardware) solutions developed ‘with’ and ‘for’ us rather than simply with us in mind.

Bio: Prof Phillip Morgan BSc DipRes PhD PGCHE FHEA AFALT AFBPS holds a Personal Chair in Human Factors and Cognitive Science within the School of Psychology at Cardiff University. He is Director of the Human Factors Excellence Research Group (HuFEx) and Director of Research for the Centre for AI, Robotics and Human-Machine Systems (IROHMS). He is an international expert in Cyberpsychology, intelligent mobility (with a focus on autonomous vehicles), HMI design, HCI, and interruption/distraction effects. He has been awarded >£20M funding (>£10M direct) across >30 funded grants from, e.g., Airbus, ERDF, EPSRC, ESRC, HSSRC, IUK, DHC-STC, GoS, SOS Alarm, and the Wellcome Trust, and has published >100 major papers and reports. Phil works on large-scale projects funded by Airbus, where he has been seconded part-time since 2019 as Technical Lead in Cyber Psychology and Human Factors and Head of the Airbus Accelerator in Human-Centric Cyber Security (H2CS). Prof Morgan is UK PI on an ESRC-JST project (2020-24) (with collaborators at, e.g., the Universities of Kyoto and Osaka) on the Rule of Law in the Age of AI and autonomous systems, with a key focus on blame assignment and trust in autonomous vehicles, with XAI and HRI as core interventions. He is currently working on two HSSRC (UK MOD / Dstl / BAE Systems) projects examining HF guidelines for autonomous systems and robots (with QinetiQ & BMT Defence) and complex sociotechnical systems (with Trimetis). He also works on two projects funded by the NCSC focussed on the effects of interruptions on cyber security behaviours. Prof Morgan has recently completed a project on XAI funded by Airbus. Together with Prof Dylan M Jones OBE, Prof Morgan oversees the IROHMS Simulation Laboratory based within the School of Psychology at Cardiff University, which currently comprises five state-of-the-art zones: immersive dome; transport simulator; cognitive robotics; VR/AR; and a command and control centre (under development).

  • Date: April 27th at 1 pm AEST

Speaker: Ganna Pogrebna

Title: The Behavioural Data Science Approach to Cybersecurity

Abstract: Recent advances in artificial intelligence allow us to design new “hybrid” models merging behavioural science and machine learning algorithms. In this talk, I will showcase several recent projects which use a hybrid methodology of behavioural data science to (i) understand people’s risk taking and risk perceptions in cyber spaces; (ii) segment and detect adversarial behaviour; and (iii) predict potential targets. The talk will explain the mechanism and potential behind such models using several use cases. It will also demonstrate additional insights which such models deliver beyond traditional machine learning and usual behavioural science methods. Specifically, the talk will show how the behavioural data science approach can generate more accurate predictions of human behaviour and help to deliver better organizational outcomes. The talk will also explain how hybrid modelling can help in the identification of cybercriminals, as well as in using behavioural segmentation to create cybersecurity social marketing campaigns for the general public.

Bio: Ganna Pogrebna is Executive Director of the Cyber Security and Data Science Institute at Charles Sturt University and Honorary Professor of Business Analytics and Data Science at the University of Sydney. She is also an ESRC-Turing Fellow and Lead for Behavioural Data Science at the Alan Turing Institute in the UK. Her research is on behavioural change for digital security. Ganna’s work has been funded by ARC, ONI, NCSC, ESRC, EPSRC, the Leverhulme Trust and industry. She is the author of a book for practitioners on cyber security as a behavioural science – “Navigating New Cyber Risks” – as well as a blogger at https://www.cyberbitsetc.org/. She has published extensively on human behaviour and cyber security in peer-refereed journals. Her risk-tolerance scale for digital security (CyberDoSpeRT) received the British Academy of Management award. She is also the winner of the UK Women in Technology Award for her contributions to cyber security research and practice.

  • Date: March 23, 2022, 10–11 am AEDT

Speaker: Dr Frank L. Greitzer

Title: Adventures in Insider Threat Predictive Analytics

Slides: Greitzer_CSIRO-Data61 Seminar FINAL_23March2022

Recording: https://csiro.webex.com/recordingservice/sites/csiro/recording/1597cf5c8c62103a9ffd00505681094b/playback

Abstract: Insiders who destroy, steal, or leak sensitive information pose a serious threat to enterprises. An insider threat is an individual with authorized access to an organization’s systems, data, or assets, who intentionally (or unintentionally) misuses that access in ways that harm (or risk) these assets. Recent industry surveys reveal that as much as 50% of reported incidents were considered accidental and nearly two-thirds were identified as malicious insider attacks. Along with a consistent rise in insider crimes, the costs of monitoring, incident response, remediation and other associated activities continue to increase. Insider risk assessment is a wicked/hard problem, and the research and operational communities are coming to realize that it is a human problem. Spanning nearly two decades, a strong theme of my research has been to develop insider threat models that integrate relevant human behavioral and psychological factors with technical factors associated with host and network cybersecurity monitoring systems. This lecture will discuss my research on sociotechnical factors for insider threat anticipation and the continuing challenges to identify, integrate, and validate cyber and behavioral indicators of insider threat risk into effective detection and mitigation approaches. I will describe a comprehensive ontology of Sociotechnical and Organizational Factors for Insider Threat (SOFIT) that can provide a foundation for more effective, whole-person predictive analytic approaches seeking to get “left of boom.” I will review some of my research aiming to inform this ontology and to support the development of more sophisticated, comprehensive, AI-based models for insider threat assessment.

Bio: Frank L. Greitzer, Ph.D., is owner and Principal Scientist of PsyberAnalytix, which performs consulting in applied cognitive and behavioral systems engineering and analysis. Dr. Greitzer holds a PhD degree in Mathematical Psychology with specialization in memory and cognition and a BS degree in Mathematics.  His current research interests are in characterizing human behavioral factors to help identify and mitigate insider threats to IT enterprises. He led a multidisciplinary group of researchers to develop a comprehensive insider threat ontology, Sociotechnical and Organizational Factors for Insider Threat (SOFIT). His most recent consulting work has helped organizations apply this ontology in their operational insider threat mitigation programs. Prior to founding PsyberAnalytix in 2012, Dr. Greitzer served for twenty years as a Chief Scientist at the U.S. Department of Energy’s Pacific Northwest National Laboratory, conducting R&D in human-information analysis and in advanced, interactive training technologies; and leading the R&D focus area of Cognitive Informatics, which addresses human factors and social/behavioral science challenges through modeling and advanced engineering/computing approaches. His experience also includes university/academic positions, research in human factors psychology for the U.S. Department of Defense, and human factors/artificial intelligence R&D in private industry. Dr. Greitzer is a member of the Intelligence and National Security Alliance (INSA) Insider Threat Subcommittee and is currently Editor-in-Chief of the journal, Counter-Insider Threat Research and Practice.

Past Seminars