Our SAO seminars

Events organised in collaboration with Cyber Security CRC, supported by the Commonwealth. In those seminars we both showcase External Speakers latest research and Internal CSCRC related research activities. You can find recordings of all our past events down below.

Next monthly events:

  • Date and time: Thursday 27/11/24, 9.30 AEST

Speaker: Dr. Bimal Viswanath, Assistant Professor of Computer Science at Virginia Tech, USA

Title: Investigating Foundation Models Through the Lens of Security

Access link: https://webcast.csiro.au/#/webcasts/llmlenssec

Abstract: Foundation models (e.g., LLMs) are trained to recognize patterns in broad data, which can then be applied to a range of downstream tasks with further adaptation. Such models are seeing wide-spread use in a variety of NLP and computer vision tasks. How would the threat landscape change if an adversary leveraged foundation models? Can foundation models simplify and enhance the performance of ML-based security pipelines?  How can we safely customize foundation models on untrusted training data for downstream use cases? I will try to answer the above questions, by investigating foundation models in the context of two security problems: (1) Deepfake image detection: Recent research highlighted the strengths of using a foundation model to improve generalization performance of deepfake image detectors. This advance significantly simplifies the development of such defenses while promising superior performance. We take a closer look at the integration of foundation model technology into these defenses, and test their performance on real-world deepfake datasets. We identify serious limitations and present directions for further improvement. I will also discuss the implications of an adaptive attacker who uses foundation models, and how this can tilt the arms race in favor of the attacker. (2) Safely customizing LLMs to build chatbots: Today, chatbots can be built for specialized domains by further customizing a foundation model (LLM) on a new conversational dataset. However, such datasets can be untrustworthy leading to unwanted behavior (e.g., toxic responses). I will present a new framework to customize foundation models that is resilient to such data poisoning attacks. I will also present a novel method to adapt LLMs as toxic language classifiers as part of this new framework.

Bio: Bimal Viswanath is an Assistant Professor of Computer Science at Virginia Tech, USA. His research interests are in security and his ongoing work investigates machine learning systems through the lens of security. He uses data-driven methods to understand new threats raised by advances in machine learning, and also investigates how machine learning can improve security of online services. He obtained his PhD from the Max Planck Institute for Software Systems, and MS from IIT Madras. He also worked as a Researcher at Nokia Bell Labs before starting an academic position

If you have missed our latest events:

  • Date and Time: Wednesday 13/11/24, 9.30 AEST

Title: Identity: Opportunity, Emerging Trends and the Road Ahead

Speaker: Dr Sunpreet Singh Arora, who is leading the identity and AI research teams in Visa Research, USA, https://usa.visa.com/about-visa/visa-research/sunpreet-arora.html

Recording:

Abstract: Payment systems worldwide are gradually transforming to ‘identity-first’. In this talk, I’ll briefly summarize the opportunity that exists in the identity space, discuss key emerging trends shaping the aforementioned transformation, and share some cutting-edge work we have conducted in this space.

Bio: Dr. Sunpreet Singh Arora leads the identity and AI research teams in Visa Research, USA. He is a co-inventor on over 100 patent applications (40 of which have been granted) and has published over 25 research articles in top international conferences and journals in identity and machine learning security. His research has received multiple awards including best paper, best demo and best poster awards and has been featured in several national and international media outlets, such as Forbes, MIT Technology Review, ACM Multimedia, NASA Tech Briefs and BBC UK. Dr. Arora received his Ph.D. in Computer Science and Engineering from Michigan State University, and B.Tech.(Hons.) in Computer Science and Engineering from IIIT-Delhi. He currently serves as associate editor of IEEE Transactions on Biometrics, Behavior and Identity Science and as a board representative for Visa at Open ID. He previously led the IEEE Biometrics Council Industrial Committee, and served as the board representative for Visa at FIDO.


  • Date and Time: Wednesday 13/11/24, 12.30 AEST

Title: Staving off the IoT Armageddon: An Argument for Active Roots of Trust and Trusted Execution Environments

Speaker: Prof. Gene Tsudik, Distinguished Professor of Computer Science at the University of California, Irvine (UCI), USA, in collaboration with the UNSW seminar sessionhttps://ics.uci.edu/~gtsudik/

Recording: https://webcast.csiro.au/#/videos/fa6c90b5-ffe2-4c24-bc78-430217e62e4a

Abstract: IoT devices are increasingly popular and ubiquitous in numerous everyday settings. These specialized gadgets sense and actuate the environment using a wide range of analog peripherals. They are often deployed in large numbers and perform critical tasks. It is no surprise that they represent attractive targets for various attacks. Adversaries range from nation-states to hacker collectives, from organized crime to individual malcontents. Their goals tend to target one or more of: sensing, actuation, or zombification. Recent history shows that few lessons were learned from well-known attacks. IoT devices are still commonly compromised via both known attacks and zero-day exploits. The worst is yet to come. Unfortunately, since the security research community is now hyper-focused on real, alleged, and imagined dangers of AI/ML, there is a risk of missing the real and present danger posed by the rampant insecurity of the burgeoning IoT ecosystem. To remedy the situation, a two-pronged effort is needed: (i) proactive research that goes beyond finding flaws in current technology, and (ii) regulation/legislation incentivizing or forcing manufacturers to take security seriously. This talk will consider several reasons for the current state of affairs in IoT (in)security: (1) connectivity, meaning remote accessibility, (2) malleable software, (3) device manufacturers, and (4) IoT monocultures.  Next, we will attempt to make a case for a line of research on actively secure and formally assured operation of IoT devices. This research is both important and timely: while the speaker-anticipated “IoT Armageddon” might not occur in the very near future, the time for preparing for its mitigation is now. Common sense dictates that it is better to be prepared for a disaster that never comes than to be unprepared for the one that does. The recent pandemic is one obvious motivating example. Time permitting, this talk conclude with a brief, unsolicited, and likely unwanted, commentary on the state of academic security research.

Bio: Gene Tsudik is a Distinguished Professor of Computer Science at the University of California, Irvine (UCI). He obtained his Ph.D. in Computer Science from USC. Before coming to UCI in 2000, he was at the IBM Zurich Research Laboratory (1991-1996) and USC/ISI (1996-2000). His research interests include numerous topics in security, privacy, and applied cryptography. Gene Tsudik was a Fulbright Scholar and a Fulbright Specialist (thrice). He is a fellow of ACM, IEEE, AAAS, IFIP, and a foreign member of Academia Europaea. From 2009 to 2015, he served as the Editor-in-Chief of ACM TOPS. He received the 2017 ACM SIGSAC Outstanding Contribution Award, the 2020 IFIP Jean-Claude Laprie Award, the 2023 ACM SIGSAC Outstanding Innovation Award, and the 2024 Guggenheim Fellowship. His magnum opus is the first ever rhyming crypto-poem published as a refereed paper. Gene Tsudik is allergic to over-hyped topics, such as machine learning, blockchains/cryptocurrencies, and differential privacy. He has zero social media presence.


  • Seminar date/time: Tuesday, 05th November 10-11 am Sydney/AU AEDT (Sydney time)

Slides  GenAIforSocialGood-Shared.pdf

Recording https://webcast.csiro.au/#/videos/d19327c0-c53c-40b8-8853-aecde77b62d1

Speaker: Prof. Hongxin Hu is a Professor and Associate Chair of the Department of Computer Science and Engineering at the University at Buffalo, SUNY USA

Title: AI for Social Good in the Era of Large Language Models

Abstract: In the era of large language models (LLMs), the landscape of artificial intelligence has transformed dramatically, offering unprecedented opportunities for social impact. This talk will discuss the potential of LLMs to drive significant advancements in areas critical for social good. I will discuss the innovative applications of these models in diverse fields, such as fighting online hate and making games safer for kids. The talk will also critically examine the safety/security challenges and responsibilities inherent in deploying LLMs. By highlighting both the successes and challenges, this talk aims to foster a nuanced understanding of how LLMs can be safely and effectively utilized for the betterment of society.

Bio: Hongxin Hu is a Professor and Associate Chair of the Department of Computer Science and Engineering at University at Buffalo, SUNY. He is a recipient of the NSF CAREER Award (2019) and Amazon Research Award (2022). His research spans security, machine learning, and networking. He has participated in multiple cross-university, cross-disciplinary projects funded by NSF. His research has also been funded by NSA, U.S. Army, USDOT, Google, VMware, Amazon, etc. He has published over 150 refereed technical papers, many of which appeared in top-tier conferences such as S&P, CCS, USENIX Security, NDSS, SIGCOMM, NSDI, NeurIPS, ICML, and CHI, and well-recognized journals such as IEEE TIFS, IEEE TDSC, IEEE/ACM TON, and IEEE TKDE. He is the recipient of ACM SACMAT Test-of-Time Award in 2024, and the Best Paper Awards from ACM ASIACCS (2022), ACSAC (2020), IEEE ICC (2020), ACM SIGCSE (2018), and ACM CODASPY (2014). His research has won the First Place Award in ACM SIGCOMM 2018 SRC. His research has also been featured by the IEEE Special Technical Community on Social Networking and received 50+ press coverage including ACM TechNews, InformationWeek, Slashdot, etc.


  • Seminar date/time: Wednesday, 2/10/24, 10.00 – 11.00 (Sydney time) 

Speaker: Dr. Siddharth Garg is Associate Professor of New York University, USA, https://engineering.nyu.edu/faculty/siddharth-garg

Recording: https://webcast.csiro.au/#/videos/cceb93c8-3b65-4d1e-9a8f-69b54e37b8af

Title: Foundation Models: The Good, the Backdoors and the Ugly

Abstract: Foundation models, massive neural networks trained at great expense, will form the backbone of a rapidly growing AI/ML ecosystem. Indeed, foundation models have shown impressive capabilities in generalizing to a broad range of tasks. I will start with a note of optimism (the good) and describe our recent success in tailoring large language models (LLMs) for chip design. With few exceptions,
however, these models are black-boxed and only accessible via cloud APIs, with no visibility into training data and training scripts. As users, we are expected to trust foundation models with our sensitive data and to trust their responses. I argue that there is good reason to be skeptical of blackbox foundation models. I will highlight four key concerns, and in some cases, mitigations, from the work in my group. These are: (1) data breaches and privacy-preserving model inference;  (2) security bugs in LLM generated code; (3) malicious backdoors; and (4) demographic bias.

Bio: Siddharth Garg is currently the Institute Associate Professor of ECE at NYU Tandon, where he leads the EnSuRe Research group (https://wp.nyu.edu/ensure_group/). Prior to that he was in Assistant Professor also in ECE from 2014-2020, and an Assistant Professor of ECE at the Unversity of Waterloo from 2010-2014. His research interests are in machine learning, cyber-security and computer hardware design.
He received his Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University in 2009, and a B.Tech. degree in Electrical Enginerring from the Indian Institute of Technology Madras. In 2016, Siddharth was listed in Popular Science Magazine’s annual list of “Brilliant 10” researchers. Siddharth has received the NSF CAREER Award (2015), and paper awards at the IEEE Symposium on Security and Privacy (S&P) 2016, USENIX Security Symposium 2013, at the Semiconductor Research Consortium TECHCON in 2010, and the International Symposium on Quality in Electronic Design (ISQED) in 2009. Siddharth also received the Angel G. Jordan Award from ECE department of Carnegie Mellon University for outstanding thesis contributions and service to the community. He serves on the technical program committee of several top conferences in the area of computer engineering and computer hardware, and has served as a reviewer for several IEEE and ACM journals.


  • Seminar date/time: Monday, 15 July 2024, 10-11 am AEST (Sydney time) 

Speaker: Dr Yang Zhang is a tenured faculty (equivalent to a full professor) at CISPA Helmholtz Center for Information Security, Germanyfor our CRC/Data61 seminar series  https://yangzhangalmo.github.io/  

Recording: Safety Assessment of Large Generative Models 15/7/2024 (csiro.au)

Slides: talk_20245

Title: Safety Assessment of Large Generative Models 

Abstract: During the past two years, large generative models like Stable Diffusion and ChatGPT have made tremendous progress. While reshaping our daily lives, recent research shows that these large models have severe security and safety issues. In this talk, I will cover some of our recent works in this field. First, I will talk about safety and security attacks against text-to-image generative models, like prompt stealing and unsafe generation. Second, I will focus on large language models, and discuss jailbreak attacks and machine-generated text detection/attribution. 

Bio: Yang Zhang (https://yangzhangalmo.github.io/) is a tenured faculty (equivalent to full professor) at CISPA Helmholtz Center for Information Security, Germany. His research concentrates on trustworthy machine learning. Moreover, he works on measuring and understanding misinformation and unsafe content like hateful memes on the Internet. Over the years, he has published multiple papers at top venues in information security, including CCS, NDSS, Oakland, and USENIX Security. His work has received the NDSS 2019 distinguished paper award and the CCS 2022 best paperaward runner-up. 


  • Seminar date/time: Thursday, 23 May 2024, 10-11 am AEST (Sydney time) 

Speaker: Prof. Yvo Desmedt, Jonsson Distinguished Professor, The University of Texas at Dallas  https://profiles.utdallas.edu/yvo.desmedt 

Recording: https://webcast.csiro.au/#/videos/aee21a08-f412-424b-b141-22f025746058

Slides: CSIRO

Title:  A  Too Limited List of Infrastructures Identified as Critical  

AbstractThe report of the President’s Commission on Critical Infrastructure Protection (http://www.pccip.gov) identified Information and Communications, Electrical Power Systems, Gas and Oil Transportation and Storage, Banking and Finance, Transportation, Water Supply Systems, Emergency Services, and Government Services as critical.  The report has limited itself to: those organizations for which a serious attack would have immediately a visible impact, and non-manufacturing areas of the economy. By doing so, it has failed to address attacks which impact only becomes visible after several weeks, or even months, and which long term effect may be worse than the scenarios identified in the report. In this talk we identify areas that are very critical to the economy that are heavily computerized. We also explain how attacks can be mounted that will take a long time to detect, from which recovery will be slow, and that target those sectors that play the largest role in the economy and in the survival of the country.  These include the: agricultural, integrated circuit (chip) manufacturing and mechanical manufacturing sectors. One should note that malicious code is not only able to shutdown plants, airports, transportation, etc., but that it can also destroy plants. Indeed, increasing the computer controlled temperature in a chemical plant may cause an explosion. A timebomb in the computer code to control robots can cause these to destroy the goods these robots are processing. A variant of this attack will decrease the efficiency of a plant, which may be detected too late to avoid the company to go bankrupt. Another use of this strategy is to hack code in farming equipment employed to plant and maintain crops such that the plants never mature to grow fruits or seeds. The unibomber has demonstrated that terrorists may have a PhD. A CAD expert can write a computer virus to modify chip design. Note that while bearings where the most critical component of a mechanical society, the chip is clearly the current one. 

Bio: Yvo Desmedt is Jonsson Distinguished Professor at Department of Computer Science, The University of Texas at Dallas. Prior to joining, he was Chair of Information Communication Technology (2004-2012) and BT-Chair of Information
Security (2004-2009) at University College London, UK. He was also the Head of the Information Security Group in Computer Science.  At Florida State University (1999-2004), U.S., he was the Founding Director of the Laboratory of Security and Assurance in Information Technology, one of the first 14 NSA Centers of Excellence. He is a Fellow of the International Association of Cryptologic Research (IACR) and a Member of the Belgium Royal Academy of  Science. He is the Editor-in-Chief of IET Information Security and Chair of the Steering Committee of CANS. He was requested to give feedback, in 1998 on the report by the US Presidential Commission on Critical Infrastructures Protection, and in 2020 on the Chinese list of Top 10 Scientific Issues Concerning Development of Human Society. Moreover he commented on some US NIST standards and suggested that NIST makes a Threshold Cryptography standard. He proposed the first Hardware Trojan (Proc. of Crypto 1986), searchable encryption and what is now called functional encryption (both at the 1993 New Security Paradigms Workshop, Proc.).  His work has been funded by e.g., ARC, DARPA, EPSRC, and NSF.  His current research interests include Access Control, Cryptanalysis, Entity Authentication, E-Voting, Game Theory, Oblivious Transfer, Quantum Computations, Secret Sharing, and Unreliable and Untrusted Clouds 


  • Seminar date/time: Friday, 24 May 2024, 10-11 am AEST (Sydney time)

Speaker: Dr. Yuandong Tian, Research Scientist and Senior Manager, Meta AI Research https://yuandong-tian.com/  

Recording: https://webcast.csiro.au/#/videos/01ce98a0-05e6-45c1-a715-3b5802e7e9cd

Slides: csiro_yuandong_tian_May23

Title: Towards Inside-out Interpretability: Black-box Scrutiny and White-box Understanding for LLMs  

AbstractWhile Large Language Models (LLMs) have demonstrated remarkable efficacy across diverse applications, how they work remains elusive and requires substantial study. Two different ways exist to tackle this problem, the black-box approach that probes a model with diverse input and checks its output, and the white-box approach that analyzes its behavior from first principles. In this talk, we mention our recent research works along the two directions. For black-box approach, we propose AdvPrompter that learns to create an adaptive and human-readable suffix to jailbreak a LLM in ~2 seconds, ~800x faster than existing methods; for white-box approach, we reveal how the sparsity of self-attention changes by studying the training dynamics of LLMs, and provide a hypothesis on how latent hierarchy can be learned from the dataset. 

Bio: Yuandong Tian is a Research Scientist and Senior Manager in Meta AI Research (FAIR), working on efficient training, inference and understanding of Large Language Models (LLMs), AI-guided optimization and decision-making. He has been the main mentor of recent works StreamingLLM and GaLore that improves the training and inference of LLM, and the project lead for OpenGo project that beats professional players with a single GPU during inference. He is the first-author recipient of 2021 ICML Outstanding Paper Honorable Mentions and 2013 ICCV Marr Prize Honorable Mentions, and also received the 2022 CGO Distinguished Paper Award. Prior to that, he worked in Google Self-driving Car team in 2013-2014 and received a Ph.D in Robotics Institute, Carnegie Mellon University in 2013. He has been appointed as area chairs for NeurIPS, ICML, AAAI, CVPR and AIStats. 


  • Seminar date/time: Wednesday, 03 April 2024, 1:00 pm-2:00 pm (Sydney time) 

Speaker: Associate Professor, Mark Yampolskiy, Auburn University, USA https://www.eng.auburn.edu/directory/mzy0033.html  

Title:  Additive Manufacturing Security – The Field Overview 

Recording: https://webcast.csiro.au/#/videos/0ae481d5-1afd-467c-9f81-bc4773189f43

Slides: Mark Yampolskiy – AM Security – Field Overview @ CSIRO (with videos)

Abstract: Additive Manufacturing (AM), often referred to as 3D Printing, is a rapidly growing multibillion-dollar industry. AM can be used with a variety of materials, including polymers, metal alloys, and composite materials. This enables a wide range of applications for 3D-printed parts, including aerospace, automotive, and medical fields. However, this success makes AM an increasingly attractive target for attacks. As is often the case, securing a new technology is not just a “plug-and-play” of already existing cyber-security solutions. While cyber-security is a necessary component of AM Security, both attacks and defenses can leverage the cyber-physical nature of this technology. This talk will provide a broad overview of the AM Security research field, outlining both challenges and opportunities that it presents.  

Bio: Dr. Mark Yampolskiy earned his Ph.D. in computer science from Ludwig Maximilian University of Munich in 2009. He is currently an Associate Professor at Auburn University, department of Computer Science and Software Engineering (CSSE). He is also an Affiliated Faculty with Auburn Cyber Research Center (ACRC) and National Center for Additive Manufacturing Excellence (NCAME).  Dr. Yampolskiy was among the pioneers and is one of the leading experts in the field of Additive Manufacturing (AM) Security. His research interests include the cyber-physical means of attack and defense in AM. For example, how side-channels can be used in AM to bypass end-to-end encryption of digital designs, or how the same side-channel data can be leveraged to detect and investigate sabotage attacks in AM. Dr. Yampolskiy chairs and organizes various conferences and journal special issues in the field. With ASTM International, he is leading a working group to provide guidelines and standards for AM Security.  


  • Seminar date/time: Wednesday, 20 March 2024, 10:00am-11:00am AEST (Sydney time) 

Speaker: Dr. Yousra Aafer, Assistant Professor, University of Waterloo, Canada https://cs.uwaterloo.ca/~yaafer/ 

Recording:https://webcast.csiro.au/#/videos/245f3c56-1c51-4fd8-8e46-5cd2ee5a599b

Title: Probabilistic access control recommendations and auditing for Android APIs. 

Abstract: Access control anomalies within the Android framework can allow malicious actors to illicitly access and act on sensitive resources. Prominent security-policy inspection techniques have been proposed to detect such anomalies in the Android operating system. However, the existing approaches suffer from high false positive rates as they rely on simplistic patterns to model the highly complex Android access control mechanism. We observe that access-control related properties are highly uncertain in the context of Android. Linking resources to required access control entails a degree of uncertainty. In this talk, we will present our next-generation access control recommendation and auditing framework for Android APIs, that leverages probabilistic techniques and static analysis to model and infer access control implications. Our findings demonstrate the promise of our technique via the discovery of actual vulnerabilities. 

 Bio: Yousra Aafer is an assistant professor in the Cheriton School of Computer Science at the University of Waterloo. She worked previously at Purdue University, Microsoft and Samsung. Her research interests span the areas of systems security and software engineering, specifically focusing on mobile and smart device security. She has led science into top publication venues such as Usenix security, IEEE S&P, CCS, and NDSS. She is also a regular contributor to PC of Usenix Secuirty, CCS and other top venues.  


  • Seminar date/time: Wednesday Nov 15th from 11-12 pm AEST (Sydney time) 

Speaker: Sanchari Das, Assistant Professor, University of Denver, USA https://ritchieschool.du.edu/about/people/sanchari-das

 Recording: https://webcast.csiro.au/#/videos/8db0dfc3-afd3-41ae-84b5-14417125942c

Slides: not available

Title: Beyond the Norm: Exploring Authentication Challenges for Older Adults and Non-WEIRD Populations 

 Abstract: In today’s technology-driven world, the topic of digital authentication often centers around mainstream users, largely neglecting the unique experiences of marginalized communities such as the : older adults and those outside of the Western, Educated, Industrialized, Rich, and Democratic (WEIRD) populations. This research dives into these underrepresented segments to shed light on the distinct challenges and perspectives they bring to the realm of digital authentication. Through comprehensive data gathering, including surveys, interviews, and real-time observations, we uncover the pain points and barriers faced by these demographics. Importantly, this talk moves beyond merely highlighting challenges by presenting solutions tailored to their specific needs. By encompassing areas from tactile feedback to cultural nuances in security, our findings propose innovative strategies for a more inclusive approach to authentication. In the era of global connectivity, it’s crucial that the digital world is accessible and secure for all, regardless of age or cultural background. 

Bio: Dr. Sanchari Das is an Assistant Professor in the Department of Computer Science at the Ritchie School of Engineering and Computer Science, University of Denver. She leads the Inclusive Security and Privacy-focused Innovative Research in Information Technology (InSPIRIT) and Secure Realities Labs, focusing on computer security, privacy, human-computer interaction, accessibility, and the sustainability of emerging technologies. Dr. Das received her Ph.D. from Indiana University Bloomington, with a dissertation on users’ risk mental models in authentication technologies. She holds a Masters in Security Informatics from Indiana University Bloomington, a Masters in Computer Applications from Jadavpur University, and a Bachelors in Computer Applications from The Heritage Academy.Beyond academia, she served as a User Experience Consultant at Parity Technology, as a Global Privacy Adviser at XRSI.org and has gained industry experience at American Express, Infosys Technologies, and HCL Technologies. Her work, published in top-tier academic venues such as CHI, FC, and SOUPS, has also been presented at prominent security conferences, including BlackHat and RSA, and received media coverage in outlets like CNET and PC Magazine. In her teaching and research, Dr. Das is committed to shaping the next generation of security professionals and to creating secure, user-centered systems. 


  • Seminar date/time: 7/11/23

Title: On designing Social Norm Grounded Privacy-preserving Systems

Recording: link

Slides: talk-csiro-data61-11-2023-final

Speaker: Mainack Mondal, Assistant Professor, IIT Kharagpur, India https://cse.iitkgp.ac.in/~mainack/

Abstract: Today, data privacy (collection, storage, sharing, and processing of personal data) is often highlighted in public discourse with the advent of a multitude of recent government-mandated privacy regulations like GDPR and CCPA. To that end, in this talk, I will discuss our current and ongoing body of work on creating social norm-grounded privacy-preserving systems—systems that help to align the collection, sharing, or storage of large-scale personal user data in online systems with rules collectively created by groups of users in particular and society in general. I will give an overview of our work in this space and focus on two use cases: first, our CSCW’23 work on uncovering culture-specific privacy norms of disclosure regarding interpersonal relations; and second, our CCS’23 work on uncovering the mental models of users regarding the security of emerging multi-device cryptocurrency wallets (and how the mental models affect the adoption of such wallets). I will conclude this talk by touching on our general research agenda of understanding, designing, and building human-in-the-loop, private, secure, and abuse-free systems. 

Bio: Dr. Mainack Mondal is an assistant professor of Computer Science at IIT Kharagpur. He completed his Ph.D. from the Max Planck Institute for Software Systems (MPI-SWS), Germany, in 2017. Prior to joining IIT Kharagpur, he was a postdoctoral researcher at the University of Chicago and Cornell Tech. Mainack is broadly interested in incorporating human factors into security and privacy and consequently designing usable online services. Specifically, he works on developing systems that provide usable privacy and security mechanisms to online users while minimizing system abuse. His work has led to papers in Usenix Security,  ACM’s CCS, NDSS, AsiaCCS, PETS, AAAI’s ICWSM, Usenix’s SOUPS, ACM’s CSCW, ACM’s CoNExt and Usenix’s EuroSys among others. His work also received a distinguished paper award in Usenix’s SOUPS and Google India faculty research award in 2022. 


  • Seminar date/time: Wednesday 18 October 2023 1-2 pm AEST (Sydney time)​ 

Speaker: Assistant Professor, Yuan Tian, University of California, Los Angeles (UCLA) https://www.ytian.info/ 

Title: Towards Regulated Security and Privacy in Emerging Computing Platforms

Recording:link

Slides:

Abstract:  Computing is undergoing a significant shift. First, the explosive growth of the Internet of Things (IoT) enables users to interact with computing systems and physical environments in novel ways through perceptual interfaces (e.g., microphones and cameras). Second, machine learning algorithms collect huge amounts of data and make critical decisions on new computing systems. While these trends bring unprecedented functionality, they also drastically increase the number of untrusted algorithms, implementations, interfaces, and the amount of private data they process, endangering user security and privacy. To regulate these security and privacy issues, privacy regulations such as GDPR (General Data Protection Regulation) and CCPA (California Consumer Privacy Act) went into effect. However, there is a massive gap between the desired high-level security/privacy/ethical properties (from regulations, specifications, and users’ expectations) and low-level real implementations. To bridge the gap, my work aims to 1) change how platform architects design secure systems, 2) assist developers by detecting security and privacy violations of implementations, and 3) build usable and scalable privacy-preserving systems. In this talk, I will present how my group designs principled solutions to ensure the security and privacy of emerging computing platforms. I will introduce two developer tools we build to detect security and privacy violations with machine-learning-augmented analysis. Using the tools, we found large numbers of policy violations in web plugins and security property violations in IoT messaging protocol implementations. Additionally, I will discuss our recent work on scalable privacy-preserving machine learning, the first privacy-preserving machine learning framework for modern machine learning models and data with all operations on GPUs.  

Bio:  Yuan Tian is an Assistant Professor of Electrical and Computer Engineering and the Institute for Technology, Law and Policy (ITLP) at the University of California, Los Angeles. She was an Assistant Professor at the University of Virginia, and she obtained her Ph.D. from Carnegie Mellon University in 2017. Her research interests involve security and privacy and their interactions with computer systems, machine learning, and human-computer interaction. Her current research focuses on developing new computing platforms with strong security and privacy features, particularly in the Internet of Things and machine learning. Her work has real-world impacts as countermeasures and design changes have been integrated into platforms (such as Android, Chrome, Azure, and iOS) and also impacted the security recommendations of standard organizations such as the Internet Engineering Task Force (IETF). She is a recipient of the Okawa Fundation Award 2022, Google Research Scholar Award 2021, Facebook Research Award 2021, NSF CAREER award 2020, NSF CRII award 2019, Amazon AI Faculty Fellowship 2019. Her research has appeared in top-tier venues in security, machine learning, and systems. Her projects have been covered by media outlets such as IEEE Spectrum, Forbes, Fortune, Wired, and Telegraph. 


  • Seminar date/time: Wednesday, 13 September 2023, 1-2 pm AEST (Sydney time)  

Speaker: Visiting Assistant Professor, Zhikun Zhang, Stanford University 

Recording:link

Slides: 2023.9 @ CSIRO

Title: Privacy Preservation in Data Life Cycle

Abstract: Data is of paramount importance in today’s digital age. However, the collection and utilization of data also raise significant privacy concerns. In this talk, I will discuss the data privacy preservation issues through the lens of data life cycle, which consists of data collection, data publication, data analysis, and data consumption phases. Concretely, I will first introduce the privacy preservation issues in all phases of the data life cycle, including privacy preservation in the data collection and data publication phase, privacy risk assessment of machine learning systems used in the data analysis phase, and the technical implementation of privacy-related laws in the data consumption phase. I will then briefly introduce my current research in all these topics. 

Bio: Zhikun Zhang is currently a Visiting Assistant Professor at Stanford University and Research Group Leader at CISPA Helmholtz Center for Information Security, Germany. Prior to that, He was a postdoctoral researcher at CISPA. He was also a visiting scholar at Purdue University. His research interest concentrates on private computation, differential privacy, and machine learning security & privacy. Zhikun has published more than 30 related papers in top-tier conferences and journals, including 16 big four security conference papers. He serves as the TPC member of multiple top-tier conferences, including CCS, NDSS, KDD, PoPETs, and ICLR. More information is available at his personal website http://zhangzhk.com/. 


  • Seminar date/time: Wednesday, 16 August 2023, 1pm-2pm AEST (Sydney time)

Speaker: Lecturer, Hammond Pearce, UNSW, https://www.cyberhammond.com/

Recording: link

Slides: HammondPearce_Data61_2023_Bugs_Begin_Bugs_Begone

Title: Bugs Begin, Bugs Begone: Large Language Models and Code Security

Abstract: Human developers can produce code with cybersecurity bugs. Do emerging ‘smart’ code completion tools such as GitHub Copilot also write these bugs, and if so, is it possible to prompt them to fix the bugs instead? In this research, we explore the cybersecurity implications and applications of Large Language Models (LLMs) for code. We measure the rate at which they produce bugs, and separately, investigate their application for zero-shot vulnerability repair. We investigate challenges in the design of prompts that both steer bugs towards and away from generating insecure code, a difficult exploration due to the numerous ways to phrase key information – both semantically and syntactically – with natural languages. We perform a large-scale study of five commercially available, black-box, “off-the-shelf” LLMs, as well as an open-source model and our own locally-trained model, on a mix of synthetic, hand-crafted, and real-world security bug scenarios. Our experiments demonstrate that LLMs do produce vulnerable code, but they may also repair such bugs. A qualitative evaluation of the model’s performance over a corpus of historical real-world examples highlights challenges in this area.

Bio: Dr. Hammond Pearce is a Lecturer in UNSW’s School of Computer Science and Engineering. Previously he worked at NYU’s Department of Electrical and Computer Engineering / NYU Center for Cybersecurity as a Research Assistant Professor, and at NASA Ames on a research internship. In the commercial world, he has worked as a contractor on Li-ion battery management and as a full-stack web developer. His research interests lie in cybersecurity and hardware and embedded systems design, as well as the intersection of AI and industrial informatics in this area – in particular, Hammond is passionate about exploring the future of the design process in the hardware and firmware spaces, which involves the investigation of tools like ChatGPT and other Large Language Models and how they impact the development lifecycle. As part of his research work, he recently won the inaugural Efabless AI Generated Open-Source Silicon Design Challenge, and also previously won the Distinguished Paper Award at IEEE Symposium on Security and Privacy in 2022. Hammond obtained his Ph.D. in 2020 from the University of Auckland, New Zealand, with his thesis “Model Driven Engineering for Safety and Security in Industry 4.0”.


  • Seminar date/time: Thursday, 24 August 2023, 10am-11am AEST (Sydney time) CANCELLED

Speaker: Assistant Professor, Yizheng Chen, University of Maryland

https://surrealyz.github.io/

Title: Continuous Learning for Android Malware Detection

Abstract: Machine learning is a powerful tool to detect Android malware with high accuracy. However, it suffers from the problem of concept drift: benign and malicious behavior changes over time, and current machine learning models have difficulty keeping up with this change and rapidly become ineffective. Concept drift happens for many reasons. For example, malware authors may add malicious functionalities to evade detection, or create new types of malware that’s never seen before, and benign apps release updates to utilize new features provided by the Android SDK. Our research finds that, after training an Android malware classifier on one year’s worth of data, the F1 score quickly dropped from 0.99 to 0.76 after 6 months of deployment on new test samples. In this talk, I will present new methods to make machine learning for Android malware detection more effective. I will show how to make malware classifiers robust against concept drift. Since machine learning technique needs to be continuously deployed, we use active learning: we select new samples for analysts to label, and then add the labeled samples to the training set to retrain the classifier. Our key idea is, similarity-based uncertainty is more robust against concept drift. Therefore, we combine contrastive learning with active learning. We propose a new hierarchical contrastive learning scheme, and a new sample selection technique to continuously train the Android malware classifier. Our results show that given the same analyst labeling budget to retrain the classifier, we can reduces the false negative rate from 14% (for the best baseline) to 9%, while also reducing the false positive rate (from 0.86% to 0.48%); and to maintain a steady F1 score over time, we can achieve 8X reduction in labels indeed.

Bio: Yizheng Chen is an Assistant Professor of Computer Science at University of Maryland. She works at the intersection of AI and security. Her research focuses on AI for Security and robustness of AI models. Previously, she received her Ph.D. in Computer Science from the Georgia Institute of Technology, and was a postdoc at University of California, Berkeley and Columbia University. Her work has received an ACM CCS Best Paper Award Runner-up and a Google ASPIRE Award. She is a recipient of the Anita Borg Memorial Scholarship. Homepage: https://surrealyz.github.io/


  • Date/time: Thursday, 22nd June 10 am – to – 11 am AEST

Speaker: Shawn Riley, Senior Cybersecurity Scientist, Defence and Intelligence Community, USA

Recording: link

Slides:CSIRO-Keynote-Riley-June2023[19]

Title: Integrated Adaptive Cyber Defense and the Importance of Knowledge Representation & Reasoning

Abstract: In an era of increasingly complex cyber threats, our presentation explores the transformative role of Integrated Adaptive Cyber Defense (IACD) and the critical part played by AI and automation in its implementation. We delve into the spectrum from data to wisdom, underlining the significance of formalized knowledge representation and reasoning in cybersecurity operations. We elaborate on the four levels of interoperability – foundational, structural, semantic, and organizational – within IACD, demonstrating how each level benefits from the integration of AI and automation. Through practical examples, we illustrate AI and automation applications across each interoperability level, emphasizing their potential to expedite detection and response to threats, and to enhance decision-making processes. Despite the challenges that lie ahead, we project a future where IACD, enriched by AI and automation, becomes the benchmark for cybersecurity. The talk concludes by outlining potential avenues for further research, primarily focused on advancing AI and automation technologies, refining knowledge representation techniques, and developing seamless integration methods for cybersecurity systems.

Bio: Shawn is a well seasoned Cybersecurity Scientist and thought leader from the Defense and Intelligence community who transitioned to the Information Security industry after 20 years in the Intelligence Community. Shawn has 30+ years of overall experience in Information Security and Cybersecurity roles, including diverse c-suite & technical leadership experience. Since 2011, Shawn has been applying his subject matter expertise at companies building explainable artificial intelligence such as expert systems, hybrid AI, and intelligent cyber digital twins for threat susceptibility assessments, vulnerability assessments, cyber risk assessments and management, cyber resiliency, integrated adaptive cyber defense, threat intelligence, cyber situational awareness, extended detection & response automation, and effects-based courses of action. Shawn spent his 20s in the US Navy’s Cryptology Community, focused on Information Operations and Information Assurance. Shawn was named a Lockheed Martin Fellow in his late 30s as a Cybersecurity Scientist. Lockheed Martin Fellow is a role/title restricted to the top 1% of all the company’s engineers, scientists, and technologists. In his early 40s, Shawn was selected as a Lockheed Martin Senior Fellow, a role/title restricted to the top 0.1%. Shawn is neurodivergent with Asperger’s Syndrome (AS) / Autism Spectrum Disorder (ASD) with a Myers-Briggs personality type of INTJ / MASTERMIND. Diagnosed in his late 30s, Shawn runs on a different operating system.


  • Seminar date/time:  Wednesday, 17 May 2023, 10-11AM AEST (Sydney time) 

Recording: https://webcast.csiro.au/#/videos/b9fb69a6-fa43-4a2b-a776-d7a195dc98e6

Slides: mem trust first slide

Title: Memorisation, Trust, and Big Models

Speaker: Matthew Jagielski, Research Scientist, Google DeepMind

https://jagielski.github.io/

Abstract: Models tend to get better with scale, but in this talk we’ll be talking about two problems that seem to get worse, or at least harder to deal with, at scale: memorization and trust. We’ll discuss recent work on memorization in language and diffusion models, as well as recent work showing how both centralization and decentralization can corrupt large models.

Bio: Matthew Jagielski is a research scientist at Google DeepMind, where he works on the intersection of security, privacy, and machine learning. He received his PhD in computer science from Northeastern University, where he was advised by Alina Oprea and Cristina Nita-Rotaru.


  • Seminar date/time: Wednesday, 26 April 2023, 1-2PM AEST (Sydney time)

Recording: recording

Slides: Xinyun_talk_adv+llm

https://jungyhuk.github.io/

Title: Adversarial Learning Meets Large Language Models

Abstract: Large language models have achieved impressive performance on various natural language processing tasks, and can be adapted to accomplish tasks that require multi-modal data. However, the robustness and safety of these models are still not well understood. In this talk, I will discuss my recent works on investigating different aspects of robustness issues of large language models, and connect them to the literature of adversarial machine learning. We demonstrate that many common vulnerabilities of deep neural networks before the era of foundation models still persist in large language models, such as the sensitivity to input variations negligible by humans. On the other hand, new types of attacks have been crafted specially for large language models, including prompt injection attacks.

Bio: Xinyun Chen is a senior research scientist in the Brain team of Google Research. She obtained her Ph.D. in Computer Science from University of California, Berkeley. Her research lies at the intersection of deep learning, programming languages, and security. Her research focuses on large language models, learning-based program synthesis and adversarial machine learning. She received the Facebook Fellowship in 2020, and Rising Stars in Machine Learning in 2021. Her work SpreadsheetCoder for spreadsheet formula prediction was integrated into Google Sheets, and her work AlphaCode was featured as the front cover in Science Magazine.


  • Seminar date/time: Wednesday, 22 March 2023, 1-2 pm (Sydney time)

Title: “Hark: A Deep Learning System for Navigating Privacy Feedback at Scale”

Recording: link

Slides: Hark IEEE S&P 2022_ Cybersecurity CRC and CSIRO SAO Seminar

Speaker: Sai Teja Peddinti, Google Research, Staff Research Scientist, psaiteja@google.com, https://sites.google.com/site/psaiteja/home

Abstract:  Integrating user feedback is one of the pillars for building successful products. However, this feedback is generally collected in an unstructured free-text form, which is challenging to understand at scale. This is particularly demanding in the privacy domain due to the nuances associated with the concept and the limited existing solutions. In this work, we present Hark, a system for discovering and summarizing privacy-related feedback at scale. Hark automates the entire process of summarizing privacy feedback, starting from unstructured text and resulting in a hierarchy of high-level privacy themes and fine-grained issues within each theme, along with representative reviews for each issue. At the core of Hark is a set of new deep learning models trained on different tasks, such as privacy feedback classification, privacy issues generation, and high-level theme creation. We illustrate Hark’s efficacy on a corpus of 626M Google Play reviews. Out of this corpus, our privacy feedback classifier extracts 6M privacy-related reviews (with an AUC-ROC of 0.92). With three annotation studies, we show that Hark’s generated issues are of high accuracy and coverage and that the theme titles are of high quality. We illustrate Hark’s capabilities by presenting high-level insights from 1.3M Android apps.

Bio: Sai Teja Peddinti is a Staff Research Scientist in the Privacy Research group at Google. His current research focuses on applying machine learning techniques to build novel privacy and security features, and performing large scale measurements and analysis to understand user preferences/concerns and to evaluate effectiveness of existing features. Previously, he has interned at Alcatel Lucent Bell Labs and worked on a combined project of UC Berkeley and Microsoft Research. He completed his Ph.D. in Computer Science from NYU in 2014 and his Bachelors from DA-IICT, India in 2009. His research appeared in top conferences, won IAPP SOUPS Privacy Award 2017, and has been selected as a finalist in NYU CSAW Applied Research Competition 2022.


  • Date: Wednesday, 22 Feb 2022, 1-2pm AEST (Sydney time).

Title: Flocking to Mastodon: Tracking the Great Twitter Migration

Recording link: https://webcast.csiro.au/#/videos/46dd8cbe-fbf3-403e-b7dc-04b93e9a1bf0

Speaker: Assistant Professor: Gareth Tyson, Hong Kong University of Science and Technology

http://www.eecs.qmul.ac.uk/~tysong/

Abstract: On October 27, 2022, Elon Musk acquired the world’s largest micro-blogging platform, Twitter. As a self-proclaimed “free speech absolutist”, this was a controversial and highly publicised event. The acquisition led to a series of chaotic events. As a consequence, Twitter experienced a mass migration of users. One of the recipient platforms has been Mastodon, a decentralized microblogging service. This presentation will discuss our measurements of the migration.

Bio: Gareth Tyson is an Assistant Professor at Hong Kong University of Science and Technology. He regularly publishes in venues such as SIGCOMM, SIGMETRICS, WWW, INFOCOM, CoNEXT and IMC, alongside various top-tier IEEE/ACM Transactions. Over the last 5 years, he has been awarded over £5 million in research funding and has received coverage from news outlets such as BBC, Washington Post, CNBC, New Scientist, MIT Tech Review, The Times, Slashdot, Daily Mail, Wired, Science Daily, Ars Technica, The Independent, Business Insider, The Register, as well as being interviewed on both TV and Radio. He regularly serves on numerous organising and program committee member for conferences such as ACM SIGCOMM, ACM SIGMETRICS, ACM IMC, ACM WWW, ACM CoNEXT, IEEE ICDCS and AAAI ICWSM


  • Date: 9/2/23 15.00-16.00 Sydney time

Speaker: Dr Yinhao Jiang, Postdoctoral research fellow in Cyber Security at the Charles Sturt University.

Title: Statistical Aggregation with Local Differential Privacy

Recording: https://webcast.csiro.au/#/videos/c85fe082-253a-40cb-85e7-24392d47b5db

Slides: not available

Abstract: Collecting data from clients, or data crowd-sourcing has recently been a common practice of companies to understand the clients’ insights to improve services and products. In compliance with enacted privacy laws and regulations, companies need to protect client privacy, or user privacy, when handling user data. Local differential privacy (LDP) is an emerging privacy-preserving approach that guarantees user privacy by perturbing users’ data at their locations while maintaining the users statistics to be accurate. The Local differential privacy model overcomes the limitation of existing privacy preserving models by not requiring the data collector to be trusted in protecting user privacy. This survey aims to help practitioners to understand and make use of Local differential privacy protection in their data collection practices. We provide a structured and application-oriented review of existing Local differential privacy algorithms for aggregating user statistics. We present brief algorithmic descriptions of statistical aggregation algorithms with Local differential privacy categorized based on their computed statistics and LDP achievement approach. We also discuss the advantages and disadvantages of the algorithms and highlight potential challenges for their practical applications.

Bio: Yinhao Jiang is a Postdoctoral research fellow in Cyber Security at the Charles Sturt University. He received his Ph.D. in Cryptography from the University of Wollongong. He is currently focusing on applied cryptography regarding privacy-enhancing technologies. His research interests also include statistical technology tools for privacy evaluation.


  • Seminar date/time: Thursday, 15th December, at 4pm to 5pm AEDT

Speaker: Associate Professor Giampaolo Bella, University of Catania, ITALY https://www.dmi.unict.it/giamp/

Title: Out to explore the cybersecurity planet

Recording: https://webcast.csiro.au/#/videos/9e953ee4-b2f5-4127-9438-c4e08dabf395

Slides:giamp_2022AU

Abstract: Purpose – Security ceremonies still fail despite decades of efforts by researchers and practitioners. Attacks are often a cunning amalgam of exploits for technical systems and of forms of human behaviour. For example, this is the case with the recent news headline of a large-scale attack against Electrum Bitcoin wallets, which manages to spread a malicious update of the wallet app. The author therefore sets out to look at things through a different lens. Design/methodology/approach – The author makes the (metaphorical) hypothesis that humans arrived on Earth along with security ceremonies from a very far planet, the Cybersecurity planet. The author’s hypothesis continues, in that studying (by huge telescopes) the surface of Cybersecurity in combination with the logical projection on that surface of what happens on Earth is beneficial for us earthlings. Findings – The author has spotted four cities so far on the remote planet. Democratic City features security ceremonies that allow humans to follow personal paths of practice and, for example, make errors or be driven by emotions. By contrast, security ceremonies in Dictatorial City compel to comply, hence humans here behave like programmed automata. Security ceremonies in Beautiful City are so beautiful that humans just love to follow them precisely. Invisible City has security ceremonies that are not perceivable, hence humans feel like they never encounter any. Incidentally, the words “democratic” and “dictatorial” are used without any political connotation. Originality/value – A key argument the author shall develop is that all cities but Democratic City address the human factor, albeit in different ways. In the light of these findings, the author will also discuss security ceremonies of our planet, such as WhatsApp Web login and flight boarding, and explore room for improving them based upon the current understanding of Cybersecurity.

Bio: Giampaolo Bella is Associate Professor at the University of Catania, doing teaching and research in Cybersecurity and Formal Methods. After his Ph.D. from Cambridge University, he was a research associate at TU Munich, Cambridge University, and a senior researcher at SAP Research France. His recent results lie in the areas of automotive security, offensive security and socio-technical aspects of these.


  • Seminar date/time: Wednesday, 23 Nov, at 11:00am to 12:00pm AEDT

Speaker: Professor Kwok-Yan LAM, Nanyang Technological University, Singapore, https://personal.ntu.edu.sg/kwokyan.lam/

Title: Digitalization, Digital Trust and TrustTech

Recording: https://webcast.csiro.au/#/videos/05b8319c-9449-4831-8f0e-74e4f8689b99

Slides: Not available

Abstract: The rapid adoption of digitalization in almost all aspects of economic activities has led to serious concerns in security, privacy, transparency and fairness issues of digitalized systems. These issues will result in negative impacts on people’s trust in digitalization, which need to be addressed in order for organizations to reap the benefits of digitalization. The typical value proposition of digitalization such as elevated operational efficiency through automation and enhanced customer services through customer analytics require the collection, storage and processing of massive amount of user data, which are typical cause for data governance issues and concerns on cybersecurity, privacy and data misuses. AI-enabled processing and decision-making also lead to concerns on algorithm bias and distrust in digitalization. In this talk, we will brief review the motivation of digitalization, discuss the trust issues in digitalization, and introduce the emerging areas of Trust Technology which is a key enabler in developing and growing the digital economy.

Bio: Professor Lam is the Associate Vice President (Strategy and Partnerships) and Professor in the School of Computer Science and Engineering at the Nanyang Technological University (NTU), Singapore. He is concurrently serving as Executive Director of the National Centre for Research in Digital Trust (DTC), Director of the Strategic Centre for Research in Privacy-Preserving Technologies and Systems (SCRIPTS), and Director of NTU’s SPIRIT Smart Nation Research Centre. From August 2020, Professor Lam is also serving as a Consultant to the INTERPOL. In 2012, he co-founded Soda Pte Ltd which won the Most Innovative Start Up Award at the RSA 2015 Conference. Prof Lam received his B.Sc. (First Class Honours) from the University of London in 1987 and his Ph.D. from the University of Cambridge in 1990. Professor Lam has been an active Cybersecurity researcher since 1980s. His research interests include Distributed and Intelligent Systems, Multivariate Analysis for Behavior Analytics, Cyber-Physical System Security, Distributed Protocols for Blockchain, Biometric Cryptography, Homeland Security, Cybersecurity and Privacy-Preserving Techniques. Prof Lam is the recipient of the 2022 Singapore Cybersecurity Hall of Fame Award.


  • Seminar day and time: Friday 18/11/2022, 10:00-11:00 AEST,

Speaker: Bo Li, Assistant Professor, Department of Computer Science, University of Illinois at Urbana–Champaign

Recording: https://webcast.csiro.au/#/videos/2c3e0b8b-55b9-4841-9b37-fadffc5d8935

Slides: not available

Title: ‘Trustworthy Machine Learning: Robustness, Privacy, Generalization, and their Interconnections’

Abstract: Advances in machine learning have led to the rapid and widespread deployment of learning-based methods in safety-critical applications, such as autonomous driving and medical healthcare. Standard machine learning systems, however, assume that training and test data follow the same, or similar, distributions, without explicitly considering active adversaries manipulating either distribution. For instance, recent work has demonstrated that motivated adversaries can circumvent anomaly detection or other machine learning models at test-time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors during inference through poisoning attacks. Such distribution shift could also lead to other trustworthiness issues such as generalization. In this talk, I will describe different perspectives of trustworthy machine learning, such as robustness, privacy, generalization, and their underlying interconnections. I will focus on a certifiably robust learning approach based on statistical learning with logical reasoning as an example, and then discuss the principles towards designing and developing practical trustworthy machine learning systems with guarantees, by considering these trustworthiness perspectives in a holistic view.

Bio: Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. She is the recipient of the IJCAI Computers and Thought Award, Alfred P. Sloan Research Fellowship, NSF CAREER Award, MIT Technology Review TR-35 Award, Dean’s Award for Excellence in Research, C.W. Gear Outstanding Junior Faculty Award, Intel Rising Star award, Symantec Research Labs Fellowship, Rising Star Award, Research Awards from Tech companies such as Amazon, Facebook, Intel, and IBM, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, security, machine learning, privacy, and game theory. She has designed several scalable frameworks for trustworthy machine learning and privacy-preserving data publishing systems. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and New York Times. http://boli.cs.illinois.edu/


  • Seminar date and time: Thursday, 10 Nov 2022. 10-11am AEDT

Title: Mis/disinformation Panel

Recording: https://webcast.csiro.au/#/videos/48a32305-6d21-4778-8875-f69fc38d7827

Abstract: Mis/disinformation poses a significant threat to liberal democracies, including Australia. The dangers from mis/disinformation range from undermining social trust in government authorities to orchestrating election interference, resulting in a decline in the integrity of our democratic system. This interdisciplinary panel discussion explores Australia’s efforts to protect its democratic institutions and the Australian society more broadly against mis/disinformation. Panellists will address issues ranging from election interference to radicalisation, social polarisation and the sovereign citizen movement, and information warfare and the dangers mis/disinformation poses to national security. The panel will also address methods to improve Australia’s strategic interventions to mitigate the harms of mis/disinformation, including gaps and problems in AI research to combat mis/disinformation.

Speakers:

  • Prof Marilyn McMahon, Deakin University, marilyn.mcmahon@deakin.edu.au

Marilyn McMahon is a Professor of Criminal Law and Deputy Dean in the Faculty of Business and Law at Deakin University, as well as a registered psychologist. Her research focuses on the intersection of criminal law and mental health issues, including deception detection.

  • A/Prof Wayne Wobcke, UNSW Sydney, w.wobcke@unsw.edu.au

Wayne Wobcke is an Associate Professor in the School of Computer Science and Engineering at UNSW. His research covers a range of topics in Artificial Intelligence and he leads the research group on Artificial Intelligence for Social Good.

  • A/Prof Shiri Krebs, Deakin University, Cyber Security CRC, s.krebs@deakin.edu.au

Shiri Krebs is an Associate Professor in the Faculty of Business and Law at Deakin University. She is also the Co-Lead of the Law and Policy Theme at the Cyber Security Cooperative Research Centre, the Chair of the International Lieber Society on the Law of Armed Conflict, and an affiliate scholar at the Stanford Centre on International Security and Cooperation. Her research focuses on predictive technologies in military and counterterrorism decision-making processes.

  • Dr Jayson Lamchek, Deakin University, Cyber Security CRC, j.lamchek@deakin.edu.au

Jayson Lamchek is a Research Fellow at the Cyber Security Cooperative Research Centre and Deakin University. He is an interdisciplinary human rights scholar and his current research lies in the intersection of human rights and new technology, exploring legal and ethical aspects of technology development and cyber-mediated social change.

Panel Chair: A/Prof Shiri Krebs, s.krebs@deakin.edu.au

Hosts: shuo.wang@data61.csiro.au, zhi.zhang@data61.csiro.au


  • Seminar Day and time: Friday, 21 October, 10:00-11:00 AEST ,

Speaker: Mengjia Yan, Assistant Professor, MIT, https://people.csail.mit.edu/mengjia/

Recording: https://webcast.csiro.au/#/videos/55953721-af21-4810-84fb-ee40d423e9db

Slides: Mengjia’s slides

Title: Software and Hardware Side-Channel Security in Modern Systems

Abstract: Modern systems are becoming increasingly complex, exposing a large attack surface with vulnerabilities in both software and hardware. Today, it is common for security researchers to explore software and hardware vulnerabilities separately, considering the two vulnerabilities in two disjoint threat models. In this talk, I will discuss the research efforts in my group on studying the security threats arising from the intersections of software and hardware layers. First, I will talk about how a hardware attack can be used to assist a software attack in bypassing a strong security defence mechanism. Specifically, I will describe the PACMAN attack, demonstrating that by leveraging speculative execution attacks, an attacker can bypass ARM Pointer Authentication to conduct a control-flow hijacking attack. Second, I will talk about an in-depth security analysis of the state-of-the-art micro-architectural side-channel attacks. We show an attack that was claimed to exploit side channels via cache contention, actually exploiting system interrupts.

Bio:  Mengjia Yan is an Assistant Professor in the EECS department at MIT. She received her Ph.D. degree from the University of Illinois at Urbana-Champaign (UIUC). Her research interest lies in the areas of computer architecture and hardware security, with a focus on side-channel attacks and defences.


  • Seminar day and time: Thursday, 13th Oct 2022, 14:00-15:00 AEST

Speaker: Kristen Moore, Senior Research Scientist at CSIRO’s Data61

Recording: https://webcast.csiro.au/#/videos/a1662840-df95-42a2-a760-6d7f5ed1a281

Slides: OctSAOInternal

Title: ML Enabled Cyber Deception

Abstract: Cyber Deception is increasingly valuable as a cyber security tool for breach detection, theft discovery, and threat intelligence.  The key to successful deception is realistic mimicry of the digital world, so as to entice adversaries to interact with the decoy content, which springs the trap. This talk will outline how our team have leveraged generative machine learning models to automate and scale the generation of realistic (but fake) content and behaviour for use in cyber deception.

Bio: Kristen Moore is a Senior Research Scientist at CSIRO’s Data61. Her research interests are in the use of AI to augment cyber defence capability, with a focus on cyber deception and the generation of fake cyber artefacts. She was the technical lead for the Cyber Security CRC project “Deception as a Service” and is currently the technical lead for advancing AI in the Cyber Security CRC project “Augmenting Cyber Defence Capability”. She was also a finalist for the Women in AI Australia/NZ awards in Cyber Security in 2022. Kristen completed her PhD in mathematics in 2012 at the Max Planck Institute for Gravitational Physics and the Free University Berlin, in Germany. She then held postdoctoral positions at the Mathematical Sciences Research Institute at UC Berkeley, and at Stanford University. In 2014 she joined Gro Intelligence, an agriculture-tech startup company in New York, which has since grown to be named one of Time Magazine’s 100 Most Influential Companies of 2021. In 2017 she joined Telstra, where she led a team to develop and deploy a collaborative Human-AI customer support system that was used by over 1,000 Telstra customer support staff. Since joining CSIRO in 2020 she has filed an international patent application and published in top venues including IEEE Euro S&P and IEEE TPDS.


  • Seminar date/time: Wednesday, 21 Sep, at 10:00am to 11:00am AEST

Speaker: Pin-Yu Chen, Principal Research Scientist, IBM Research AI; MIT-IBM Watson AI Lab https://sites.google.com/site/pinyuchenpage/home

Recording: https://webcast.csiro.au/sharevideo/1c47354d-6879-45fc-adfa-d03f65eb6cd8

Slides: Pinyu’s slides

Title: AI Model Inspector: Towards Holistic Adversarial Robustness for Deep Learning

Abstract: In this talk, I will share my research journey toward building an AI model inspector for evaluating, improving, and

exploiting adversarial robustness for deep learning. I will start by providing an overview of research topics concerning adversarial robustness and machine learning, including attacks, defenses, verification, and novel applications. For each topic, I will summarize my key research findings, such as (i) practical optimization-based attacks and their applications to explainability and scientific discovery; (ii) Plug-and-play defenses for model repairing and patching; (iii) attack-agnostic robustness assessment; and (iv) data-efficient transfer learning via model reprogramming. Finally, I will conclude my talk with my vision of preparing deep learning for the real world and the research methodology of learning with an adversary.  More information about my research can be found at www.pinyuchen.com

Bio: Dr. Pin-Yu Chen is a principal research scientist at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. Dr. Chen’s recent research focuses on adversarial machine learning and robustness of neural networks. His long-term research vision is to build trustworthy machine learning systems.  At IBM Research, he received the honor of IBM Master Inventor and several research accomplishment awards, including an IBM Master Inventor and IBM Corporate Technical Award in 2021. His research works contribute to IBM open-source libraries including Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 50 papers related to trustworthy machine learning at major AI and machine learning conferences, given tutorials at NeurIPS’22, AAAI’22, IJCAI’21, CVPR(’20,’21), ECCV’20, ICASSP’20, KDD’19, and Big Data’18, and organized several workshops for adversarial machine learning. He received the IEEE GLOBECOM 2010 GOLD Best Paper Award and UAI 2022 Best Papper Runner-Up Award.


  • Seminar date and time: Thursday, 8th Sep 2022. 3-4pm AEST Sydney time.

Speaker: Ahmed Ibrahim, Lecturer at ECU

Slides :Ahmed-Slides

Recording:https://webcast.csiro.au/#/videos/cbcfac90-7032-4de7-858d-5c3658f8aebb

Title: Improving critical infrastructure security

Abstract: Critical infrastructure security is vital to protect essential services we rely upon and if compromised, could have dire consequences on a nation’s economy, physical security, or public’s health and safety. Defending against cyber attacks from criminal and state actors quickly is challenging as incident response involves both technology and humans working effectively. Ahmed will talk about challenges specific to critical infrastructure security and ongoing work related to improving incident response capability, identity management and data sharing.

Bio: Dr Ahmed Ibrahim is a lecturer in cyber security at Edith Cowan University (ECU) and a researcher at the ECU Security Research Institute. His research is aimed at tackling cyber security problems using a multi-disciplinary focus in areas related to critical infrastructure and Internet of Things (IoT), and cyber security risks in organisations. He frequently gives talks at national and international venues. He has successfully secured external grants from the Government of Western Australia and international research partners. He has had industry engagements on various projects from federal, state, local government, and critical infrastructure providers.


  • Seminar date/time: Thursday, 11 Aug, at 3:00pm to 4:00pm AEST

Speaker: Prof. Robert Deng, Singapore Management University. http://www.mysmu.edu/faculty/robertdeng/

Title: Achieving Cloud Data Security and Privacy in Zero Trust Environments

Recording:https://webcast.csiro.au/#/videos/f004a59e-927c-453c-8384-09abf40022aa

Slides:Slides – Robert Deng

Abstract: This talk will provide an overview on the design and implementation of a system for secure access control, search, and computation of encrypted data in the cloud for enterprise users. The system is designed following the “zero trust” paradigm to protect data security and privacy even if cloud storage servers or user accounts are compromised. This is achieved using end-to-end (E2E) encryption in which encryption and decryption operations only take place at client devices. However, encryption must not hinder access, search and even computation of data by authorized users. There are numerous academic publications in this area and the choice of which cryptographic techniques to use could have significant impact on the system’s scalability, efficiency and usability. We will share our experience in the design of the system architecture and selection of cryptographic techniques with a consideration to balance security, performance, and usability.

Bio: Robert Deng is AXA Chair Professor of Cybersecurity, Director of the Secure Mobile Centre, and Deputy Dean for Faculty & Research, School of Computing and Information Systems, Singapore Management University (SMU). His research interests are in the areas of data security and privacy, network security, and applied cryptography.  He received the Outstanding University Researcher Award from National University of Singapore, Lee Kuan Yew Fellowship for Research Excellence from SMU, and Asia-Pacific Information Security Leadership Achievements Community Service Star from International Information Systems Security Certification Consortium. He serves/served on the editorial boards of ACM Transactions on Privacy and Security, IEEE Security & Privacy, IEEE Transactions on Dependable and Secure Computing, IEEE Transactions on Information Forensics and Security, Journal of Computer Science and Technology, and Steering Committee Chair of the ACM Asia Conference on Computer and Communications Security. He is a Fellow of IEEE and Fellow of Academy of Engineering Singapore.


  • Seminar date/time: Friday, 29 July, at 10:00 am AEST (5pm on Thursday, July 28 PDT)

Speaker: Dr Herbert Lin, Stanford University, US. https://cisac.fsi.stanford.edu/people/herbert_lin

Title: Innovation as the Driver of Long-Term Cyber Insecurity

Recording: https://webcast.csiro.au/#/videos/524c5fcd-2312-4d2c-97e1-17678237c976

Slides:Herb-slides

Abstract: The appetite in modern society for increased functionality afforded by information technology is unlimited.  Increased functionality of information technology necessarily entails increased complexity of design and implementation.  But complexity is a fundamental driver of insecurity and unreliability in digital systems.  Thus, over the long term, a boundless demand for greater functionality leads to increasingly insecure systems—which is why it is impossible to get ahead of the cybersecurity threat.  Some ways to mitigate the tradeoff between innovation and security will be discussed.

Bio: Herbert Lin is senior research scholar and Hank J. Holland Fellow at Stanford University.  His research interests focus on the policy-related dimensions of offensive operations in cyberspace as instruments of national policy and the security dimensions of information warfare and influence operations.  He is also Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies and a member of the Science and Security Board of the Bulletin of Atomic Scientists. In 2016, he served on President Obama’s Commission on Enhancing National Cybersecurity.  In 2019, he was elected a fellow of the American Association for the Advancement of Science.  In 2020, he was a commissioner on the Aspen Commission on Information Disorder.  Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990).  He received his doctorate in physics from MIT.


  • Seminar date/time: Wednesday 20th July 2022. 1-2pm AEST  

Speaker: Prof. Tansu Alpcan, The University of Melbourne, Australia. http://www.tansu.alpcan.org 

Recording: https://webcast.csiro.au/#/videos/398b3fcb-2733-49f9-a81c-bf687a5dd5fb

Slides: Alpcan-slides

Title: Cyber-Physical System Security and Adversarial Machine Learning 

Abstract: As cyber-physical systems become prevalent in safety-critical areas, such as autonomous vehicles, there is an increasing need for protecting them against malicious adversaries. Deep learning methods are expected to play an important role in detecting and countering malicious attacks. However, these powerful algorithms themselves can be targeted by advanced adversaries, which has led to the emergence of “adversarial machine learning” as a research field. This talk will present an overview of our group’s latest research results on the cyber-physical system (CPS) security and adversarial machine learning. The first part will focus on how physics-enhanced adversarial learning can help secure networked autonomous car platoons. The second part will present how coding (information) theory can improve the robustness of deep learning in general with a principled, multi-dimensional approach. The talk will conclude with a brief discussion on our ongoing game-theoretic work and future research directions. 

Bio: Tansu Alpcan received a PhD degree in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign (UIUC) in 2006. His research interests include the game, optimisation, control theories, and machine learning applications to security and resource allocation problems in communications, smart grids, and the Internet of Things. He chaired or was an Associate Editor, TPC chair, or TPC member of several prestigious IEEE workshops, conferences, and journals. Tansu Alpcan is the (co-)author of more than 150 journal and conference articles as well as the book “Network Security: A Decision and Game-Theoretic Approach” published by Cambridge University Press (CUP) in 2011. He co-edited the book “Mechanisms and Games for Dynamic Spectrum Allocation” published by CUP in 2014. He has worked as a senior research scientist in Deutsche Telekom Laboratories, Berlin, Germany (2006-2009), and as Assistant Professor (Juniorprofessur) at Technical University Berlin (2009-2011). Tansu is currently with the Dept. of Electrical and Electronic Engineering at The University of Melbourne as a Professor and Reader. 


  • Seminar date/time: Friday  27th May 2022. 10-11am AEST

Speaker: Prof.  David L. Sloss, Professor of Law at Santa Clara University, US

Title: Tyrants on Twitter: Protecting Democracies from Information Warfare.

SlidesDavid

Recording:https://webcast.csiro.au/#/videos/4af60f5d-c2ef-43b0-807c-a8c4231256cc

Abstract: Tyrants on Twitter explores new ways to mitigate online disinformation and to regulate content on social media platforms to improve the flow of information and strengthen democratic principles.

Sloss calls for cooperation among democratic governments to create a new transnational system for regulating social media to protect Western democracies from information warfare. Drawing on his professional experience as an arms control negotiator, he outlines a novel system of transnational governance that Western democracies can enforce by harmonizing their domestic regulations. And drawing on his academic expertise in constitutional law, he explains why that system—if implemented by legislation in the United States—would be constitutionally defensible, despite likely First Amendment objections. This book is essential reading in a time when disinformation campaigns threaten to undermine democracy.

Bio: David L. Sloss is the John A. and Elizabeth H. Sutro Professor of Law at Santa Clara University. He is the author of The Death of Treaty Supremacy: An Invisible Constitutional Change (Oxford Univ. Press, 2016) and Tyrants on Twitter: Protecting Democracies from Information Warfare (Stanford Univ. Press, forthcoming 2022). He is the co-editor of International Law in the U.S. Supreme Court: Continuity and Change (Cambridge Univ. Press, 2011) and sole editor of The Role of Domestic Courts in Treaty Enforcement: A Comparative Study (Cambridge Univ. Press, 2009). He has also published several dozen book chapters and law review articles. His book on the death of treaty supremacy and his edited volume on international law in the U.S. Supreme Court both won prestigious book awards from the American Society of International Law. Professor Sloss is a member of the American Law Institute and a Counsellor to the American Society of International Law. His scholarship is informed by extensive government experience. Before entering academia, he spent nine years in the federal government, where he worked on U.S.-Soviet arms control negotiations and nuclear proliferation issues.


  • Seminar date and time: Thursday, 9th June 2022. 3-4pm AEST Sydney time

Speaker: Dr Meisam Mohammady

Title: Novel approaches to preserving utility in privacy enhancing technologies

Slides: CSCRCPPT

Recording:https://webcast.csiro.au/#/webcasts/innovationasthedriver

Abstract: Significant amount of individual information is being collected and analysed through a wide variety of applications across different industries. While pursuing better utility by discovering knowledge from the data, individuals’ privacy may be compromised during an analysis: corporate networks monitor their online behaviour, advertising companies collect and share their private information, and cybercriminals cause financial damages through security breaches. To address this issue, the data typically goes under certain anonymization techniques, e.g., Property Preserving Encryption (PPE) or Differential Privacy (DP). Unfortunately, most such techniques either are vulnerable to adversaries with prior knowledge, e.g., adversaries who fingerprint the network of a data owner, or require heavy data sanitization or perturbation, both of which may result in a significant loss of data utility. Therefore, the fundamental trade-off between privacy and utility (i.e., analysis accuracy) has attracted significant attention in various settings and scenarios. In line with this track of research, we aim to build utility-maximized and privacy-preserving tools for Internet communications. Such tools can be employed not only by dissidents and whistleblowers, but also by ordinary Internet users on a daily basis. To this end, we combine the development of practical systems with rigorous theoretical analysis, and incorporate techniques from various disciplines such as computer networking, cryptography, and statistical analysis. This presentation covers two different frameworks in some well-known settings. First, I will present the Multi-view approach which preserves both privacy and utility of data in network trace anonymization. Second, I will present the DPOAD (Differentially Private Outsourcing of Anomaly Detection) approach which is a framework enabling privacy preserving anomaly detection in an outsourcing setting.

Bio: Meisam is an active Research Scientist in CSIRO Data61. Meisam’s research focuses on ethical and secure machine learning (private, fair and certifiably robust to adversaries), differential privacy, privacy preserving cloud security auditing and security issues pertaining to Internet of Things (IoT). He earned his PhD from the Concordia Institute for Information Systems Engineering (CIISE) at Concordia University, his MSc from the Department of Electrical Engineering at Ecole Polytechnique Montreal, and his BS from the Department of Electrical Engineering at Sharif University of Technology. He has had several collaborations in terms of research and supervision with both academia and industry such as the Department of Computer Science at the Illinois Institute of Technology (IIT), the University of New South Wales (UNSW), the University of Sydney and Ericsson Research Canada. Meisam has co-authored several papers in top-tier security journals and conferences, and his PhD dissertation has won the Distinguished PhD Dissertation Awards in the category of Engineering and Natural Science PhD dissertations and selected as Concordia University’s nominee for both Canada-wide CAGS and ADESAQ competitions.


  • Seminar date and time: 12th May 2022. 3-4pm AEST Sydney time.

Recording: https://webcast.csiro.au/#/videos/0b3094e5-66b1-4660-a4b2-5d3502db3e32

Slides: CREST_CSCRC_POKAPS_seminar-2022_2

Title: Patching and updating impact estimation

Abstract: Due to ever-changing user demands modern dynamic software systems are in constant need to be updated and tailored accordingly. At the same time, the service interruptions commonly caused by traditional software patching and updating processes may not be acceptable in critical environments. Thus, the interest towards runtime (live) patching is growing, specifically in the security context in an attempt to quickly mitigate potential vulnerabilities. This seminar outlines the existing challenges and solutions in the area of live software patching. In addition, novel current work on update-induced impact calculation technique aiding in failed update recovery is presented and discussed.

Bio: Victor Prokhorenko is a researcher with the Centre for Research on Engineering Software Technologies (CREST) at the University of Adelaide. Victor has more than 17 years of experience in software engineering with main areas of expertise including investigation of technologies related to software resilience, trust management and big data solutions hosted within OpenStack private cloud platform. Victor has obtained a PhD in Computer Science from the University of South Australia.


  • Thursday 28th April 2022. 3-4pm AEST  

Speaker: Assoc Prof. Olya Ohrimenko from University of Melbourne, Australia

Title: Security and Privacy for Machine Learning: Why? Where? and How? 

Recording: Not available

Slides: Not available

Abstract: Machine learning on personal and sensitive data raises privacy concerns and creates potential for inadvertent information leakage. However, incorporating analysis of such data in decision making can benefit individuals and society at large (e.g., in healthcare and transportation). In order to strike a balance between these two conflicting objectives, one has to ensure that data analysis with strong privacy guarantees is deployed and securely implemented. My talk will discuss challenges and opportunities in achieving this goal. I will first describe attacks against not only machine learning algorithms but also naïve implementations of algorithms with rigorous theoretical guarantees such as differential privacy. I will then discuss approaches to mitigate these attack vectors including property-preserving data analysis and data-oblivious algorithms. 

Bio: Olya Ohrimenko is an Associate Professor at The University of Melbourne that she joined in 2020. Prior to that she was a Principal Researcher at Microsoft Research in Cambridge, UK, where she started as a Postdoctoral Researcher in 2014. Her research interests include data privacy, integrity and security issues that emerge in the cloud computing environment and machine learning applications. She is often involved in the organization of workshops on privacy-preserving machine learning at leading security and machine learning venues. Olya has received solo and joint research grants from Facebook and Oracle and is currently a PI on a joint MURI-AUSMURI grant. She holds a Ph.D. degree from Brown University and a B.CS. (Hons) degree from the University of Melbourne. See https://people.eng.unimelb.edu.au/oohrimenko/ for more information. 


  • Thursday, 7th April, 3-4PM AEDT

Title: Weak-Key Analysis for BIKE Post-Quantum Key Encapsulation Mechanism

Speaker: Dr Syed W. Shah

Recording: https://webcast.csiro.au/#/videos/19139412-7cbd-4dce-bae1-909ac73b885b

Slides: NA

Abstract: The evolution of quantum computers poses a serious threat to contemporary public-key encryption (PKE) schemes. To address this impending issue, the National Institute of Standards and Technology (NIST) is currently undertaking the Post-Quantum Cryptography (PQC) standardization project intending to evaluate and subsequently standardize the suitable PQC scheme(s). One such attractive approach, called Bit Flipping Key Encapsulation (BIKE), has made to the final round of the competition. Despite having some attractive features, the IND-CCA security of the BIKE depends on the average decoder failure rate (DFR), a higher value of which can facilitate a particular type of side-channel attack. Although the BIKE adopts a Black-Grey-Flip (BGF) decoder that offers a negligible DFR, the effect of weak-keys on the average DFR has not been fully investigated. Therefore, in this paper, we first perform an implementation of the BIKE scheme, and then through extensive experiments show that the weak-keys can be a potential threat to IND-CCA security of the BIKE scheme and thus need attention from the research community prior to standardization. We also propose a key-check algorithm that can potentially supplement the BIKE mechanism and prevent users from generating and adopting weak keys to address this issue. 

Bio: Syed W. Shah received his Ph.D. degree in Computer Science and Engineering from the University of New South Wales (UNSW Sydney), Australia, and an M.S. degree in Electrical and Electronics Engineering from the University of Bradford, U.K. He is currently a Research Fellow at Deakin University, Australia. His research interests include pervasive/ubiquitous computing, user authentication/identification, Internet of Things, signal processing, data analytics, privacy, and security.  


Speaker: Professor Yongdae Kim from KAIST, South Korea 

Recording: https://webcast.csiro.au/#/videos/521d1743-771b-41ef-a547-faef3221cd15

Slides:Cellular Testing CSIRO

Title: (Almost) Automatic Testing of Cellular Security 

Abstract: The number of mobile devices communicating through cellular networks is expected to reach 17.72 billion by 2024. Despite this, 3GPP standards only provide positive testing specifications (through conformance test suites) that mostly check if valid messages are correctly handled. This talk summarizes our dynamic and static approach to test the security of both cellular modems and networks automatically. I first introduce LTEFuzz (S&P’19), the first systematic framework to dynamically test if cellular modems and networks  can correctly handle packets that should be dropped according to the standard. Dynamic analysis is then extended with DoLTEst (Usenix Sec’22), which is a downlink fuzzer for cellular baseband. I then introduce BaseSpec (NDSS’21), which performs a comparative static analysis of baseband binary and cellular specification. I will  conclude my talk with future directions for automatic testing.

Bio: Yongdae Kim is a Professor in the Department of Electrical Engineering, and the Graduate School of Information Security at KAIST. He received a PhD degree from the computer science department at the University of Southern California under the guidance of Gene Tsudik in 2002. Before joining KAIST in 2012, he was a professor in the Department of Computer Science and Engineering at the University of Minnesota – Twin Cities for 10 years. He served as a KAIST Chair Professor between 2013 and 2016, and a director of Cyber Security Research Center between 2018 and 2020. He is a program committee chair for ACM WISEC 2022, was a general chair for ACM CCS 2021, and served as an associate editor for ACM TOPS, and a steering committee member of NDSS. His main research interests include novel attacks for emerging technologies, such as drone/self-driving cars, cellular networks and Blockchain. 


  • Time: Thursday March 10th   3-4pm Sydney time AEDT 

Speaker: Dr. Mir Ali Rezazadeh Baee mirali.rezazadeh@qut.edu.au  

Slides: CSCRC_DATA61_2022_Theme1.1

Recording:https://webcast.csiro.au/#/videos/a324f4dd-5676-437a-a4d4-f56db69334b7

Title: Anomaly Detection in Key-Management Activities Using Metadata: Case Study and Framework 

Abstract: Over the last ten years, the use of cryptography to protect enterprise data has grown, with an associated increase in  Enterprise Key-Management System (EKMS) deployment. Such systems are described in the existing literature, including standards (See NIST SP800-57, OASIS KMIP). Metadata analysis techniques have been widely applied in network security to build profiles of normal and anomalous (possibly malicious) behaviour to assist in intrusion detection. However, this approach had not previously been applied to EKMS metadata. Additionally, enterprise encryption tools have been used by attackers to evade detection when performing data exfiltration. This CSCRC research project investigated the use of EKMS metadata as a basis for detection of anomalous behaviour in enterprise networks. We produced datasets containing EKMS metadata, identified relevant metadata elements and developed a framework for anomaly detection based on EKMS metadata analysis. We explored the effectiveness of this approach using a simulated enterprise environment with EKMS deployed. Results show that our framework can accurately detect all anomalous enterprise network activities. 

Bio: Dr. Mir Ali Rezazadeh Baee is a Postdoctoral Researcher in the Cyber Security CRC. Ali has a Ph.D. from Queensland University of Technology (QUT), Brisbane, QLD, Australia. He has a strong focus on applied cryptography and information security, with his doctoral thesis examining authentication and key-management protocols for securing safety critical vehicular communications in a privacy-preserving manner. Ali is a member of the International Association for Cryptologic Research (IACR) and Senior Member of the Institute of Electrical and Electronics Engineers (IEEE), associated with societies including: Computer, Vehicular Technology, Intelligent Transportation Systems and Signal Processing. He has actively served as a reviewer for flagship journals such as IEEE Transactions on Vehicular Technology, IEEE Transactions on Dependable and Secure Computing, and conferences including the IACR’s EUROCRYPT and ASIACRYPT.


  • Time: Thursday March 10th   3-4pm Sydney time AEDT 

Date/time:  February  10th 3-4pm Sydney time AEDT

Speaker: Dr Yinhao Jiang

Title: Privacy Concerns Raised by Pervasive User Data Collection From Cyberspace and Their Countermeasures

Recording https://webcast.csiro.au/#/videos/28d64065-f1e5-46a7-b4ce-56a91ca29bec

Slides NA

Abstract: The virtual dimension called `Cyberspace’ built on internet technologies has served people’s daily lives for decades. Now it offers advanced services and connected experiences with the developing pervasive computing technologies that digitise, collect, and analyse users’ activity data. This changes how user information gets collected and impacts user privacy at traditional cyberspace gateways, including the devices carried by users for daily use. This work investigates the impacts and surveys privacy concerns caused by this data collection, namely identity tracking from browsing activities, user input data disclosure, data accessibility in mobile devices, security of delicate data transmission, privacy in participating sensing, and identity privacy in opportunistic networks. Each of the surveyed privacy concerns is discussed in a well-defined scope according to the impacts mentioned above. Existing countermeasures are also surveyed and discussed, which identifies corresponding research gaps. To complete the perspectives, three complex open problems, namely trajectory privacy, privacy in smart metering, and involuntary privacy leakage with ambient intelligence, are briefly discussed for future research directions before a succinct conclusion to our survey at the end.

Bio: Yinhao Jiang is a Postdoctoral Research Fellow in Cyber Security CRC at the Charles Sturt University. He received the PhD degree on the functional encryption from the University of Wollongong, in 2018. He is currently focusing on functional encryption for privacy-enhancing technologies. His research interests also include IoT anonymity and privacy quantification. Please contact him at yjiang@csu.edu.au.


To register to our mailing list please send an email to sao@csiro.au

For more information contact Co-leaders Jason Xue and Sharif Abuadbba

Past Events