The Human Centric AI Seminars Series

The Human Centric Security team is running a new monthly series, “The Human Centric AI Seminars”, focusing on research topics in human-centred AI.
For more information, contact Kristen Moore and Tina Wu.
Attendance is free and open to anyone interested in humans and AI.

Next seminar:

Our May 10th seminar will be at the special time of 10am to accommodate our speaker, Jiawei Zhou from the Georgia Institute of Technology.

Title: Synthetic Lies: Misinformation in the Age of Large Language Models

Abstract: Over the past decade, large language models (LLMs) have rapidly evolved, demonstrating remarkable capabilities in generating texts that are almost indistinguishable from human-written content and, in some cases, even perceived as more credible. As LLM tools like ChatGPT increasingly penetrate public discourse, it is critical to understand the potential risks posed by their scalability, effectiveness, and customisability. This talk presents our research examining the characteristics of AI-generated misinformation compared to human-created misinformation. Our work also evaluates the applicability of two common misinformation solutions: detection models and assessment guidelines. By highlighting the challenges posed by AI-generated misinformation, I will conclude by discussing implications for the future development of intervention strategies, detection models, and responsible design of LLM technologies.

Bio: Jiawei Zhou is a PhD student in Human-Centered Computing at the Georgia Institute of Technology, specializing in Human-AI Interaction and Social Computing. She adopts a theory-guided approach using quantitative and qualitative methods to understand the impacts of collective narratives (such as misinformation, hate speech, and counterspeech) and the role of generative AI in addressing or exacerbating related societal challenges. In particular, her work addresses real-world challenges such as harmful content, responsible use of language models, and social support for vulnerable groups. Her research has been published in top-tier computer science venues including ACM CHI, CSCW, UbiComp/IMWUT, and IEEE ICHI. She has received a paper award at CHI and has been supported by grants from NSF, CDC, and NIH.

 

If you missed the last one:

  • Date: Wed March 15th at 10-11am
Speaker: Serge Egelman (UC Berkeley)

Recording: https://csiro.webex.com/csiro/ldr.php?RCID=efd710d1776c8b3977980ea22814420d

Title: Taking Responsibility for Someone Else’s Code: Studying the Privacy Behaviors of Mobile Apps at Scale

Abstract: Modern software development has embraced the concept of “code reuse,” which is the practice of relying on third-party code to avoid “reinventing the wheel” (and rightly so). While this practice saves developers time and effort, it also creates liabilities: the resulting app may behave in ways that the app developer does not anticipate. This can cause very serious issues for privacy compliance: while an app developer did not write all of the code in their app, they are nonetheless responsible for it. In this talk, I will present research that my group has conducted to automatically examine the privacy behaviors of mobile apps vis-à-vis their compliance with privacy regulations. Using analysis tools that we developed and commercialized (as AppCensus, Inc.), we have performed dynamic analysis on hundreds of thousands of the most popular Android apps to examine what data they access, with whom they share it, and how these practices comport with various privacy regulations, app privacy policies, and platform policies. We find that while potential violations abound, many of the issues appear to be due to the (mis)use of third-party SDKs (i.e., supply chain problems). I will provide an account of the most common types of privacy and security issues that we observe and how app developers can better identify these issues prior to releasing their apps.

Bio: Serge Egelman is the Research Director of the Usable Security and Privacy group at the International Computer Science Institute (ICSI), which is an independent research institute affiliated with the University of California, Berkeley. He is also CTO and co-founder of AppCensus, Inc., which is a startup that is commercializing his research by performing on-demand privacy analysis of mobile apps for developers, regulators, and watchdog groups. He conducts research to help people make more informed online privacy and security decisions, and is generally interested in consumer protection. This has included improvements to web browser security warnings, authentication on social networking websites, and most recently, privacy on mobile devices. Seven of his research publications have received awards at the ACM CHI conference, which is the top venue for human-computer interaction research; his research on privacy on mobile platforms has received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, the USENIX Security Distinguished Paper Award, and the Spanish Data Protection Authority’s Emilio Aced Personal Data Protection Research Award. His research has been cited in numerous lawsuits and regulatory actions, as well as featured in the New York Times, Washington Post, Wall Street Journal, Wired, CNET, NBC, and CBS. He received his PhD from Carnegie Mellon University and has previously performed research at Xerox PARC, Microsoft, and NIST.

Past Seminars

  • Date: Wed Feb 15th at 1pm
 
Speaker: Vassilis Kostakos, Professor of Human-Computer Interaction at the University of Melbourne. https://people.eng.unimelb.edu.au/vkostakos/

 
Title: What smartphones can tell us about human behaviour
 
Abstract: In this talk I will present our group’s research on studying human behaviour using smartphones. We have developed a platform (AWARE Light) that makes it easy to collect behavioural data from smartphones. I will give an overview of how we conduct our research and numerous examples of the kinds of insights we can obtain. Smartphones and other personal technologies have the potential to help us understand the nuances of human behaviour systematically and at a large scale.
Bio: Vassilis Kostakos is a professor of computer science at the University of Melbourne in Australia. He works on ubiquitous computing, human-computer interaction, social computing, and the Internet of Things. His research focuses on how to use sensor data to understand people’s behaviour, and how to develop everyday technologies that better understand and better respond to humans.
  • Date: Nov 30 at 10-11am AEDT
 
Speaker: Professor Debi Ashenden
 
Title: Exploring the Socio-Technical Issues of MLOps

Abstract: New technology has the potential to deliver a step change for defence and national security, but it comes with threats as well as opportunities. The successful delivery of such technology will depend as much on the socio-technical issues around the design, development, and deployment of software as it will on the technology itself. Modern software development processes such as DevSecOps take advantage of tools and processes that facilitate agile ways of working, continuous integration and delivery, and the development of secure code. But to be effective, DevSecOps also requires trust and a change in culture. This talk charts previous research that has explored the social practice of software developers to better understand how fracture points in their relationships with cyber security practitioners can impact security risk. When a DevSecOps project succeeds, it is because working relationships between security and software development activities are underpinned by mutual trust. When trust is lacking, the process suffers: software developers and security practitioners don’t engage early enough, insufficient time is available to implement security, and an incomplete view is formed of security risks. MLOps adds to the complexity of security issues in DevSecOps as data scientists interact with the software development process. This talk outlines research that aims to better understand the social practice of data scientists in the MLOps process. Understanding these social practices will help us identify potential vulnerabilities in MLOps that could lead to an increase in cyber security risk.

Bio: Debi holds the DST Group-University of Adelaide Chair in Cybersecurity. In addition, she is Professor of Cyber Security at the University of Portsmouth and a Visiting Professor at Royal Holloway, University of London. Debi’s research interests are in the social and behavioural aspects of cybersecurity – particularly in finding ways of ‘patching with people’ as well as technology. She is currently researching transdisciplinary approaches to modelling complex warfighting, how to fuse behavioural science with cyber deception, and the socio-technical aspects of designing complex military systems. Debi was previously Head of the Centre for Cyber Security at Cranfield University at the Defence Academy of the UK and was a member of the UK MOD’s Defence Science Expert Committee. She has worked extensively across the public and private sectors for organisations such as UK MOD, GCHQ, Cabinet Office, Home Office, Euroclear, Prudential, Barclaycard, Reuters and Close Bros. She has published a number of articles on cyber security, presented at a range of conferences, and co-authored a book for Butterworth-Heinemann, Risk Management for Computer Security: Protecting Your Network & Information Assets.

  • Date: Wednesday 2 Nov 2022 1.00pm AEDT

Speaker: Feng Xia (Federation University Australia)

Recording: https://csiro.webex.com/csiro/ldr.php?RCID=57f38a3b06b7f3ab1f8f18993faedb3d

Title: Towards Trustworthy Graph Learning

Abstract: Graphs (or networks) are widely used as a popular representation of the network structure of connected data. Graph data can be found in a broad spectrum of domains, such as social systems, ecosystems, biological networks, knowledge graphs, and information systems. With the continuing penetration of artificial intelligence technologies, graph learning (i.e., machine learning on graphs) is gaining attention from both researchers and practitioners. Graph learning has proved effective for many tasks in real-world applications, such as regression, classification, clustering, matching, and ranking. Over the past few years, many graph learning models and algorithms (e.g., graph neural networks, network embedding, and network representation learning) have been developed. Nevertheless, the field of graph learning faces many challenges deriving from, e.g., fundamental theory and models, algorithms and methods, supporting tools and platforms, and real-world deployment and engineering. This talk will give an overview of the state of the art in trustworthy graph learning, paying special attention to relevant trends and challenges. Some recent advancements in this field will be showcased.

Bio: Dr. Feng Xia is currently an Associate Professor in the Institute of Innovation, Science and Sustainability, Federation University Australia. He was a Full Professor and Associate Dean of Research in the School of Software, Dalian University of Technology (DUT), China. He is/was on the Editorial Boards of over 10 int’l journals. He has served as the General Chair, Program Committee Chair, Workshop Chair, or Publicity Chair of over 30 int’l conferences and workshops, and as a Program Committee Member of over 90 conferences. Dr. Xia has authored/co-authored two books, over 300 scientific papers in int’l journals and conferences (such as IEEE TAI, TKDE, TNNLS, TC, TMC, TPDS, TBD, TCSS, TNSE, TETCI, TETC, THMS, TVT, TITS, TASE, ACM TKDD, TIST, TWEB, TOMM, WWW, AAAI, SIGIR, CIKM, JCDL, EMNLP, and INFOCOM), and 3 book chapters. He was recognized as a Highly Cited Researcher (2019) by Clarivate Analytics (Web of Science). Dr. Xia has received a number of prestigious awards, including the IEEE DSS 2021 Best Paper Award, the IEEE Vehicular Technology Society 2020 Best Land Transportation Paper Award, the ACM/IEEE JCDL 2020 Vannevar Bush Best Paper Honorable Mention, the IEEE CSDE 2020 Best Paper Award, the WWW 2017 Best Demo Award, the IEEE DataCom 2017 Best Paper Award, the IEEE UIC 2013 Best Paper Award, and the IEEE Access Outstanding Associate Editor award. His research interests include data science, artificial intelligence, graph learning, anomaly detection, and systems engineering. He is a Senior Member of IEEE and ACM, and an ACM Distinguished Speaker.

  • Date: Wed Oct 26th at 4pm

Recording: https://csiro.webex.com/webappng/sites/csiro/recording/7703df7a3719103bae79005056811b40/playback

Speaker: Yisroel Mirsky

Title: The Threat Horizon of Deepfakes

Abstract: Deep learning has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative purposes. Since 2018, deep learning has been used to re-enact people in ‘deepfakes’, not only for entertainment but for revenge, fraud, and espionage as well. With rapid advances in generative AI and the ease of access to the technology, we wonder what is on the horizon for malicious deepfakes: what will attacks look like in the near future, and how will we prevent them? In this talk, we will cover different types of deepfakes (e.g., human face/voice, medical records, …), how they are made and detected, and their caveats. We will also look into an imminent threat that has recently emerged and give insight into the matter.

Bio: Yisroel Mirsky is a tenure-track lecturer and Zuckerman Faculty Scholar in the Department of Software and Information Systems Engineering at Ben-Gurion University. He received his Ph.D. from BGU in 2018 and was a postdoctoral fellow for two years at the Georgia Institute of Technology. He currently heads the Offensive AI research lab at BGU (https://ymirsky.github.io/Offensive.AI.Lab/). His main research interests include deepfakes, adversarial machine learning, anomaly detection, and intrusion detection. Dr. Mirsky has published his work in some of the best security venues: USENIX, CCS, NDSS, Euro S&P, Black Hat, DEF CON, RSA, CSF, AISec, etc. His research has also been featured in many well-known media outlets: Popular Science, Scientific American, Wired, The Wall Street Journal, Forbes, and BBC. Some of his works include the exposure of vulnerabilities in the US 911 emergency services and research into the threat of deepfakes in medical scans, both featured in The Washington Post.

  • Date: Wednesday 10 August 2022 at 10-11am AEST

Speaker: Dr. Elissa M. Redmiles

Title: Learning from the People: From Normative to Descriptive Solutions to Problems in Security, Privacy & Machine Learning

Recording: https://csiro.webex.com/recordingservice/sites/csiro/recording/9d71db0afa6d103a97ff00505681cdcd/playback

Abstract: A variety of experts — computer scientists, policy makers, judges — constantly make decisions about best practices for computational systems. They decide which features are fair to use in a machine learning classifier predicting whether someone will commit a crime, and which security behaviors to recommend and require from end-users. Yet, the best decision is not always clear. Studies have shown that experts often disagree with each other, and, perhaps more importantly, with the people for whom they are making these decisions: the users. This raises a question: Is it possible to learn best-practices directly from the users? The field of moral philosophy suggests yes, through the process of descriptive decision-making, in which we observe people’s preferences from which to infer best practice rather than using experts’ normative (prescriptive) determinations of best practice. In this talk, I will explore the benefits and challenges of applying such a descriptive approach to making computationally-relevant decisions regarding: (i) selecting security prompts for an online system; (ii) determining which features to include in a classifier for jail sentencing; (iii) defining standards for ethical virtual reality content.

Bio: Dr. Elissa M. Redmiles is a faculty member and research group leader at the Max Planck Institute for Software Systems and a Visiting Scholar at the Berkman Klein Center for Internet & Society at Harvard University. She uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. Her work has been recognized with multiple paper awards at USENIX Security, ACM CCS and ACM CHI and has been featured in popular press publications such as the New York Times, Wall Street Journal, Scientific American, Rolling Stone, Wired, Business Insider, and CNET.

  • Date: June 15th at 4pm

Speaker: Professor Phil Morgan; Director of the Cardiff University Human Factors Excellence (HuFEx) Research Group; Director of Research – Cardiff University Centre for AI, Robotics and Human-Machine Systems, School of Psychology, Cardiff University, Cardiff, UK; Technical Lead – Airbus Accelerator in Human-Centric Cyber Security

Title: A Human Factors Approach to Optimising Humans in Cyber Security

Recording: https://csiro.webex.com/recordingservice/sites/csiro/recording/d761992fce9e103abbf6005056818c0c/playback

Abstract: There is abundant evidence that suboptimal human thinking and behaviour is linked to ‘successful’ cyber security incidents. In fact, people are often described as the weakest link in cyber security. This rather damning evidence might suggest that software and hardware solutions are the only way to combat cyber attackers and their methods but, perhaps counterintuitively, I will argue against this technical-only approach and for socio-technical solutions. Through a psychological and Human Factors data-driven understanding of our cyber security awareness, knowledge, attitudes, and motivations – both within academia and industry – my teams and I have identified most of the factors that can lead to cyber risky behaviours, as well as a range of interventions to combat them. During my talk, I will first give an overview of key human cyber vulnerabilities exploited by cyber attackers – from weapons of influence to weaknesses in our understanding of cyber security language and communication. I will then give an overview of some of our gold-standard cyber vulnerability and strengths tools, from which we have developed metrics, personas, and other interventions to effectively combat human cyber risky behaviours. My proposition is that humans can actually be the strongest line of defence in cyber security, especially when there is an optimal symbiosis with software (and hardware) solutions developed ‘with’ and ‘for’ us rather than simply with us in mind.

Bio: Prof Phillip Morgan BSc DipRes PhD PGCHE FHEA AFALT AFBPS holds a Personal Chair in Human Factors and Cognitive Science within the School of Psychology at Cardiff University. He is Director of the Human Factors Excellence Research Group (HuFEx) and Director of Research for the Centre for AI, Robotics and Human-Machine Systems (IROHMS). He is an international expert in Cyberpsychology, intelligent mobility (with a focus on autonomous vehicles), HMI design, HCI, and interruption/distraction effects. He has been awarded >£20M funding (>£10M direct) across >30 funded grants from e.g., Airbus, ERDF, EPSRC, ESRC, HSSRC IUK, DHC-STC, GoS, SOS Alarm, and the Wellcome Trust, and has published >100 major papers and reports. Phil works on large-scale projects funded by Airbus, where he has been seconded part-time since 2019 as Technical Lead in Cyber Psychology and Human Factors and Head of the Airbus Accelerator in Human-Centric Cyber Security (H2CS). Prof Morgan is UK PI on an ESRC-JST project (2020-24) (with collaborators at e.g., the Universities of Kyoto and Osaka) on the Rule of Law in the Age of AI and autonomous systems, with a key focus on blame assignment and trust in autonomous vehicles, with XAI and HRI as core interventions. He is currently working on two HSSRC (UK MOD / Dstl / BAE Systems) projects examining HF guidelines for autonomous systems and robots (with QinetiQ & BMT Defence) and complex sociotechnical systems (with Trimetis). He also works on two projects funded by the NCSC focussed on interruption effects on cyber security behaviours. Prof Morgan has recently completed a project on XAI funded by Airbus. Together with Prof Dylan M Jones OBE, Prof Morgan oversees the IROHMS Simulation Laboratory based within the School of Psychology at Cardiff University, which currently comprises five state-of-the-art zones: immersive dome; transport simulator; cognitive robotics; VR/AR; and a command and control centre (under development).

  • Date: April 27th at 1pm AEST

Speaker: Ganna Pogrebna

Title: The Behavioural Data Science Approach to Cybersecurity

Abstract: Recent advances in artificial intelligence allow us to design new “hybrid” models merging behavioural science and machine learning algorithms. In this talk, I will showcase several recent projects which use a hybrid methodology of behavioural data science to (i) understand people’s risk taking and risk perceptions in cyber spaces; (ii) segment and detect adversarial behaviour; and (iii) predict potential targets. The talk will explain the mechanism and potential behind such models using several use cases. It will also demonstrate additional insights which such models deliver beyond traditional machine learning and usual behavioural science methods. Specifically, the talk will show how a behavioural data science approach can generate more accurate predictions of human behaviour and help deliver better organizational outcomes. The talk will also explain how hybrid modelling can help in the identification of cybercriminals, as well as in using behavioural segmentation to create cybersecurity social marketing campaigns for the general public.

Bio: Ganna Pogrebna is Executive Director of the Cyber Security and Data Science Institute at Charles Sturt University and Honorary Professor of Business Analytics and Data Science at the University of Sydney. She is also an ESRC-Turing Fellow and Lead for Behavioural Data Science at the Alan Turing Institute in the UK. Her research is on behavioural change for digital security. Ganna’s work has been funded by ARC, ONI, NCSC, ESRC, EPSRC, the Leverhulme Trust, and industry. She is the author of a book for practitioners on cyber security as a behavioural science – “Navigating New Cyber Risks” – and blogs at https://www.cyberbitsetc.org/. She has published extensively on human behaviour and cyber security in peer-reviewed journals. Her risk-tolerance scale for digital security (CyberDoSpeRT) received the British Academy of Management award. She is also the winner of the UK Women in Technology Award for her contributions to cyber security research and practice.

  • Date: March 23 at 10-11 am AEDT

Speaker: Dr Frank L. Greitzer

Title: Adventures in Insider Threat Predictive Analytics

Slides: Greitzer_CSIRO-Data61 Seminar FINAL_23March2022

Recording: https://csiro.webex.com/recordingservice/sites/csiro/recording/1597cf5c8c62103a9ffd00505681094b/playback

Abstract: Insiders who destroy, steal, or leak sensitive information pose a serious threat to enterprises. An insider threat is an individual with authorized access to an organization’s systems, data, or assets who intentionally (or unintentionally) misuses that access in ways that harm (or risk) these assets. Recent industry surveys reveal that as much as 50% of reported incidents were considered accidental and nearly two-thirds were identified as malicious insider attacks. Along with a consistent rise in insider crimes, the costs of monitoring, incident response, remediation, and other associated activities continue to increase. Insider risk assessment is a wicked/hard problem, and the research and operational communities are coming to realize that it is a human problem. Spanning nearly two decades, a strong theme of my research has been to develop insider threat models that integrate relevant human behavioral and psychological factors with technical factors associated with host and network cybersecurity monitoring systems. This lecture will discuss my research on sociotechnical factors for insider threat anticipation and the continuing challenges to identify, integrate, and validate cyber and behavioral indicators of insider threat risk into effective detection and mitigation approaches. I will describe a comprehensive ontology of sociotechnical and organizational factors for insider threat (SOFIT) that can provide a foundation for more effective, whole-person predictive analytic approaches seeking to get “left of boom.” I will review some of my research aiming to inform this ontology and to support the development of more sophisticated, comprehensive, AI-based models for insider threat assessment.

Bio: Frank L. Greitzer, Ph.D., is owner and Principal Scientist of PsyberAnalytix, which performs consulting in applied cognitive and behavioral systems engineering and analysis. Dr. Greitzer holds a PhD degree in Mathematical Psychology with specialization in memory and cognition and a BS degree in Mathematics.  His current research interests are in characterizing human behavioral factors to help identify and mitigate insider threats to IT enterprises. He led a multidisciplinary group of researchers to develop a comprehensive insider threat ontology, Sociotechnical and Organizational Factors for Insider Threat (SOFIT). His most recent consulting work has helped organizations apply this ontology in their operational insider threat mitigation programs. Prior to founding PsyberAnalytix in 2012, Dr. Greitzer served for twenty years as a Chief Scientist at the U.S. Department of Energy’s Pacific Northwest National Laboratory, conducting R&D in human-information analysis and in advanced, interactive training technologies; and leading the R&D focus area of Cognitive Informatics, which addresses human factors and social/behavioral science challenges through modeling and advanced engineering/computing approaches. His experience also includes university/academic positions, research in human factors psychology for the U.S. Department of Defense, and human factors/artificial intelligence R&D in private industry. Dr. Greitzer is a member of the Intelligence and National Security Alliance (INSA) Insider Threat Subcommittee and is currently Editor-in-Chief of the journal, Counter-Insider Threat Research and Practice.
