Our SAO seminars
Events organised in collaboration with Cyber Security CRC, supported by the Commonwealth. In those seminars we both showcase External Speakers latest research and Internal CSCRC related research activities. You can find recordings of all our past events down below.
Next monthly events:
MAY |
JUNE |
JULY |
|
EXTERNAL STREAM | Seminar date/time: Wednesday, 17 May 2023, 10-11AM AEST (Sydney time)
Matthew Jagielski, Google DeepMind |
||
INTERNAL STREAM | TBC |
Our External Speakers Stream
Our Internal CSCRC related Research Stream
More to come
If you have missed our latest events:
- Seminar date/time: Wednesday, 17 May 2023, 10-11AM AEST (Sydney time)
Recording: https://webcast.csiro.au/#/videos/b9fb69a6-fa43-4a2b-a776-d7a195dc98e6
Slides: mem trust first slide
Title: Memorisation, Trust, and Big Models
Speaker: Matthew Jagielski, Research Scientist, Google DeepMind
Abstract: Models tend to get better with scale, but in this talk we’ll be talking about two problems that seem to get worse, or at least harder to deal with, at scale: memorization and trust. We’ll discuss recent work on memorization in language and diffusion models, as well as recent work showing how both centralization and decentralization can corrupt large models.
Bio: Matthew Jagielski is a research scientist at Google DeepMind, where he works on the intersection of security, privacy, and machine learning. He received his PhD in computer science from Northeastern University, where he was advised by Alina Oprea and Cristina Nita-Rotaru.
- Seminar date/time: Wednesday, 26 April 2023, 1-2PM AEST (Sydney time)
Recording: recording
Slides: Xinyun_talk_adv+llm
Title: Adversarial Learning Meets Large Language Models
Abstract: Large language models have achieved impressive performance on various natural language processing tasks, and can be adapted to accomplish tasks that require multi-modal data. However, the robustness and safety of these models are still not well understood. In this talk, I will discuss my recent works on investigating different aspects of robustness issues of large language models, and connect them to the literature of adversarial machine learning. We demonstrate that many common vulnerabilities of deep neural networks before the era of foundation models still persist in large language models, such as the sensitivity to input variations negligible by humans. On the other hand, new types of attacks have been crafted specially for large language models, including prompt injection attacks.
Bio: Xinyun Chen is a senior research scientist in the Brain team of Google Research. She obtained her Ph.D. in Computer Science from University of California, Berkeley. Her research lies at the intersection of deep learning, programming languages, and security. Her research focuses on large language models, learning-based program synthesis and adversarial machine learning. She received the Facebook Fellowship in 2020, and Rising Stars in Machine Learning in 2021. Her work SpreadsheetCoder for spreadsheet formula prediction was integrated into Google Sheets, and her work AlphaCode was featured as the front cover in Science Magazine.
- Seminar date/time: Wednesday, 22 March 2023, 1-2 pm (Sydney time)
Title: “Hark: A Deep Learning System for Navigating Privacy Feedback at Scale”
Recording link: https://webcast.csiro.au/#/videos/26a850ad-b41e-40c9-aff5-e9fb62afedec
Slides: Hark IEEE S&P 2022_ Cybersecurity CRC and CSIRO SAO Seminar
Speaker: Sai Teja Peddinti, Google Research, Staff Research Scientist, psaiteja@google.com, https://sites.google.com/site/psaiteja/home
Abstract: Integrating user feedback is one of the pillars for building successful products. However, this feedback is generally collected in an unstructured free-text form, which is challenging to understand at scale. This is particularly demanding in the privacy domain due to the nuances associated with the concept and the limited existing solutions. In this work, we present Hark, a system for discovering and summarizing privacy-related feedback at scale. Hark automates the entire process of summarizing privacy feedback, starting from unstructured text and resulting in a hierarchy of high-level privacy themes and fine-grained issues within each theme, along with representative reviews for each issue. At the core of Hark is a set of new deep learning models trained on different tasks, such as privacy feedback classification, privacy issues generation, and high-level theme creation. We illustrate Hark’s efficacy on a corpus of 626M Google Play reviews. Out of this corpus, our privacy feedback classifier extracts 6M privacy-related reviews (with an AUC-ROC of 0.92). With three annotation studies, we show that Hark’s generated issues are of high accuracy and coverage and that the theme titles are of high quality. We illustrate Hark’s capabilities by presenting high-level insights from 1.3M Android apps.
Bio: Sai Teja Peddinti is a Staff Research Scientist in the Privacy Research group at Google. His current research focuses on applying machine learning techniques to build novel privacy and security features, and performing large scale measurements and analysis to understand user preferences/concerns and to evaluate effectiveness of existing features. Previously, he has interned at Alcatel Lucent Bell Labs and worked on a combined project of UC Berkeley and Microsoft Research. He completed his Ph.D. in Computer Science from NYU in 2014 and his Bachelors from DA-IICT, India in 2009. His research appeared in top conferences, won IAPP SOUPS Privacy Award 2017, and has been selected as a finalist in NYU CSAW Applied Research Competition 2022.
- Date: Wednesday, 22 Feb 2022, 1-2pm AEST (Sydney time).
Title: Flocking to Mastodon: Tracking the Great Twitter Migration
Recording link: https://webcast.csiro.au/#/videos/46dd8cbe-fbf3-403e-b7dc-04b93e9a1bf0
Speaker: Assistant Professor: Gareth Tyson, Hong Kong University of Science and Technology
http://www.eecs.qmul.ac.uk/~tysong/
Abstract: On October 27, 2022, Elon Musk acquired the world’s largest micro-blogging platform, Twitter. As a self-proclaimed “free speech absolutist”, this was a controversial and highly publicised event. The acquisition led to a series of chaotic events. As a consequence, Twitter experienced a mass migration of users. One of the recipient platforms has been Mastodon, a decentralized microblogging service. This presentation will discuss our measurements of the migration.
Bio: Gareth Tyson is an Assistant Professor at Hong Kong University of Science and Technology. He regularly publishes in venues such as SIGCOMM, SIGMETRICS, WWW, INFOCOM, CoNEXT and IMC, alongside various top-tier IEEE/ACM Transactions. Over the last 5 years, he has been awarded over £5 million in research funding and has received coverage from news outlets such as BBC, Washington Post, CNBC, New Scientist, MIT Tech Review, The Times, Slashdot, Daily Mail, Wired, Science Daily, Ars Technica, The Independent, Business Insider, The Register, as well as being interviewed on both TV and Radio. He regularly serves on numerous organising and program committee member for conferences such as ACM SIGCOMM, ACM SIGMETRICS, ACM IMC, ACM WWW, ACM CoNEXT, IEEE ICDCS and AAAI ICWSM
- Date: 9/2/23 15.00-16.00 Sydney time
Speaker: Dr Yinhao Jiang, Postdoctoral research fellow in Cyber Security at the Charles Sturt University.
Title: Statistical Aggregation with Local Differential Privacy
Recording: https://webcast.csiro.au/#/videos/c85fe082-253a-40cb-85e7-24392d47b5db
Slides:
Abstract: Collecting data from clients, or data crowd-sourcing has recently been a common practice of companies to understand the clients’ insights to improve services and products. In compliance with enacted privacy laws and regulations, companies need to protect client privacy, or user privacy, when handling user data. Local differential privacy (LDP) is an emerging privacy-preserving approach that guarantees user privacy by perturbing users’ data at their locations while maintaining the users statistics to be accurate. The Local differential privacy model overcomes the limitation of existing privacy preserving models by not requiring the data collector to be trusted in protecting user privacy. This survey aims to help practitioners to understand and make use of Local differential privacy protection in their data collection practices. We provide a structured and application-oriented review of existing Local differential privacy algorithms for aggregating user statistics. We present brief algorithmic descriptions of statistical aggregation algorithms with Local differential privacy categorized based on their computed statistics and LDP achievement approach. We also discuss the advantages and disadvantages of the algorithms and highlight potential challenges for their practical applications.
Bio: Yinhao Jiang is a Postdoctoral research fellow in Cyber Security at the Charles Sturt University. He received his Ph.D. in Cryptography from the University of Wollongong. He is currently focusing on applied cryptography regarding privacy-enhancing technologies. His research interests also include statistical technology tools for privacy evaluation.
- Seminar date/time: Thursday, 15th December, at 4pm to 5pm AEDT
Speaker: Associate Professor Giampaolo Bella, University of Catania, ITALY https://www.dmi.unict.it/giamp/
Title: Out to explore the cybersecurity planet
Recording: https://webcast.csiro.au/#/videos/9e953ee4-b2f5-4127-9438-c4e08dabf395
Slides:giamp_2022AU
Abstract: Purpose – Security ceremonies still fail despite decades of efforts by researchers and practitioners. Attacks are often a cunning amalgam of exploits for technical systems and of forms of human behaviour. For example, this is the case with the recent news headline of a large-scale attack against Electrum Bitcoin wallets, which manages to spread a malicious update of the wallet app. The author therefore sets out to look at things through a different lens.
Design/methodology/approach – The author makes the (metaphorical) hypothesis that humans arrived on Earth along with security ceremonies from a very far planet, the Cybersecurity planet. The author’s hypothesis continues, in that studying (by huge telescopes) the surface of Cybersecurity in combination with the logical projection on that surface of what happens on Earth is beneficial for us earthlings.
Findings – The author has spotted four cities so far on the remote planet. Democratic City features security ceremonies that allow humans to follow personal paths of practice and, for example, make errors or be driven by emotions. By contrast, security ceremonies in Dictatorial City compel to comply, hence humans here behave like programmed automata. Security ceremonies in Beautiful City are so beautiful that humans just love to follow them precisely. Invisible City has security ceremonies that are not perceivable, hence humans feel like they never encounter any. Incidentally, the words “democratic” and “dictatorial” are used without any political connotation.
Originality/value – A key argument the author shall develop is that all cities but Democratic City address the human factor, albeit in different ways. In the light of these findings, the author will also discuss security ceremonies of our planet, such as WhatsApp Web login and flight boarding, and explore room for improving them based upon the current understanding of Cybersecurity.
Bio: Giampaolo Bella is Associate Professor at the University of Catania, doing teaching and research in Cybersecurity and Formal Methods. After his Ph.D. from Cambridge University, he was a research associate at TU Munich, Cambridge University, and a senior researcher at SAP Research France. His recent results lie in the areas of automotive security, offensive security and socio-technical aspects of these.
- Seminar date/time: Wednesday, 23 Nov, at 11:00am to 12:00pm AEDT
Speaker: Professor Kwok-Yan LAM, Nanyang Technological University, Singapore, https://personal.ntu.edu.sg/kwokyan.lam/
Title: Digitalization, Digital Trust and TrustTech
Recording: https://webcast.csiro.au/#/videos/05b8319c-9449-4831-8f0e-74e4f8689b99
Slides: Not available
Abstract: The rapid adoption of digitalization in almost all aspects of economic activities has led to serious concerns in security, privacy, transparency and fairness issues of digitalized systems. These issues will result in negative impacts on people’s trust in digitalization, which need to be addressed in order for organizations to reap the benefits of digitalization. The typical value proposition of digitalization such as elevated operational efficiency through automation and enhanced customer services through customer analytics require the collection, storage and processing of massive amount of user data, which are typical cause for data governance issues and concerns on cybersecurity, privacy and data misuses. AI-enabled processing and decision-making also lead to concerns on algorithm bias and distrust in digitalization. In this talk, we will brief review the motivation of digitalization, discuss the trust issues in digitalization, and introduce the emerging areas of Trust Technology which is a key enabler in developing and growing the digital economy.
Bio: Professor Lam is the Associate Vice President (Strategy and Partnerships) and Professor in the School of Computer Science and Engineering at the Nanyang Technological University (NTU), Singapore. He is concurrently serving as Executive Director of the National Centre for Research in Digital Trust (DTC), Director of the Strategic Centre for Research in Privacy-Preserving Technologies and Systems (SCRIPTS), and Director of NTU’s SPIRIT Smart Nation Research Centre. From August 2020, Professor Lam is also serving as a Consultant to the INTERPOL. In 2012, he co-founded Soda Pte Ltd which won the Most Innovative Start Up Award at the RSA 2015 Conference. Prof Lam received his B.Sc. (First Class Honours) from the University of London in 1987 and his Ph.D. from the University of Cambridge in 1990. Professor Lam has been an active Cybersecurity researcher since 1980s. His research interests include Distributed and Intelligent Systems, Multivariate Analysis for Behavior Analytics, Cyber-Physical System Security, Distributed Protocols for Blockchain, Biometric Cryptography, Homeland Security, Cybersecurity and Privacy-Preserving Techniques. Prof Lam is the recipient of the 2022 Singapore Cybersecurity Hall of Fame Award.
- Seminar day and time: Friday 18/11/2022, 10:00-11:00 AEST,
Speaker: Bo Li, Assistant Professor, Department of Computer Science, University of Illinois at Urbana–Champaign
Recording: https://webcast.csiro.au/#/videos/2c3e0b8b-55b9-4841-9b37-fadffc5d8935
Slides:
Title: ‘Trustworthy Machine Learning: Robustness, Privacy, Generalization, and their Interconnections’
Abstract: Advances in machine learning have led to the rapid and widespread deployment of learning-based methods in safety-critical applications, such as autonomous driving and medical healthcare. Standard machine learning systems, however, assume that training and test data follow the same, or similar, distributions, without explicitly considering active adversaries manipulating either distribution. For instance, recent work has demonstrated that motivated adversaries can circumvent anomaly detection or other machine learning models at test-time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors during inference through poisoning attacks. Such distribution shift could also lead to other trustworthiness issues such as generalization. In this talk, I will describe different perspectives of trustworthy machine learning, such as robustness, privacy, generalization, and their underlying interconnections. I will focus on a certifiably robust learning approach based on statistical learning with logical reasoning as an example, and then discuss the principles towards designing and developing practical trustworthy machine learning systems with guarantees, by considering these trustworthiness perspectives in a holistic view.
Bio: Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. She is the recipient of the IJCAI Computers and Thought Award, Alfred P. Sloan Research Fellowship, NSF CAREER Award, MIT Technology Review TR-35 Award, Dean’s Award for Excellence in Research, C.W. Gear Outstanding Junior Faculty Award, Intel Rising Star award, Symantec Research Labs Fellowship, Rising Star Award, Research Awards from Tech companies such as Amazon, Facebook, Intel, and IBM, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, security, machine learning, privacy, and game theory. She has designed several scalable frameworks for trustworthy machine learning and privacy-preserving data publishing systems. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and New York Times. http://boli.cs.illinois.edu/
- Seminar date and time: Thursday, 10 Nov 2022. 10-11am AEDT
Title: Mis/disinformation Panel
Recording: https://webcast.csiro.au/#/videos/48a32305-6d21-4778-8875-f69fc38d7827
Abstract: Mis/disinformation poses a significant threat to liberal democracies, including Australia. The dangers from mis/disinformation range from undermining social trust in government authorities to orchestrating election interference, resulting in a decline in the integrity of our democratic system. This interdisciplinary panel discussion explores Australia’s efforts to protect its democratic institutions and the Australian society more broadly against mis/disinformation. Panellists will address issues ranging from election interference to radicalisation, social polarisation and the sovereign citizen movement, and information warfare and the dangers mis/disinformation poses to national security. The panel will also address methods to improve Australia’s strategic interventions to mitigate the harms of mis/disinformation, including gaps and problems in AI research to combat mis/disinformation.
Speakers:
- Prof Marilyn McMahon, Deakin University, marilyn.mcmahon@deakin.edu.au
Marilyn McMahon is a Professor of Criminal Law and Deputy Dean in the Faculty of Business and Law at Deakin University, as well as a registered psychologist. Her research focuses on the intersection of criminal law and mental health issues, including deception detection.
- A/Prof Wayne Wobcke, UNSW Sydney, w.wobcke@unsw.edu.au
Wayne Wobcke is an Associate Professor in the School of Computer Science and Engineering at UNSW. His research covers a range of topics in Artificial Intelligence and he leads the research group on Artificial Intelligence for Social Good.
- A/Prof Shiri Krebs, Deakin University, Cyber Security CRC, s.krebs@deakin.edu.au
Shiri Krebs is an Associate Professor in the Faculty of Business and Law at Deakin University. She is also the Co-Lead of the Law and Policy Theme at the Cyber Security Cooperative Research Centre, the Chair of the International Lieber Society on the Law of Armed Conflict, and an affiliate scholar at the Stanford Centre on International Security and Cooperation. Her research focuses on predictive technologies in military and counterterrorism decision-making processes.
- Dr Jayson Lamchek, Deakin University, Cyber Security CRC, j.lamchek@deakin.edu.au
Jayson Lamchek is a Research Fellow at the Cyber Security Cooperative Research Centre and Deakin University. He is an interdisciplinary human rights scholar and his current research lies in the intersection of human rights and new technology, exploring legal and ethical aspects of technology development and cyber-mediated social change.
Panel Chair: A/Prof Shiri Krebs, s.krebs@deakin.edu.au
Hosts: shuo.wang@data61.csiro.au, zhi.zhang@data61.csiro.au
- Seminar Day and time: Friday, 21 October, 10:00-11:00 AEST ,
Speaker: Mengjia Yan, Assistant Professor, MIT, https://people.csail.mit.edu/mengjia/
Recording: https://webcast.csiro.au/#/videos/55953721-af21-4810-84fb-ee40d423e9db
Slides: Mengjia’s slides
Title: Software and Hardware Side-Channel Security in Modern Systems
Abstract: Modern systems are becoming increasingly complex, exposing a large attack surface with vulnerabilities in both software and hardware. Today, it is common for security researchers to explore software and hardware vulnerabilities separately, considering the two vulnerabilities in two disjoint threat models. In this talk, I will discuss the research efforts in my group on studying the security threats arising from the intersections of software and hardware layers. First, I will talk about how a hardware attack can be used to assist a software attack in bypassing a strong security defence mechanism. Specifically, I will describe the PACMAN attack, demonstrating that by leveraging speculative execution attacks, an attacker can bypass ARM Pointer Authentication to conduct a control-flow hijacking attack. Second, I will talk about an in-depth security analysis of the state-of-the-art micro-architectural side-channel attacks. We show an attack that was claimed to exploit side channels via cache contention, actually exploiting system interrupts.
Bio: Mengjia Yan is an Assistant Professor in the EECS department at MIT. She received her Ph.D. degree from the University of Illinois at Urbana-Champaign (UIUC). Her research interest lies in the areas of computer architecture and hardware security, with a focus on side-channel attacks and defences.
- Seminar day and time: Thursday, 13th Oct 2022, 14:00-15:00 AEST
Speaker: Kristen Moore, Senior Research Scientist at CSIRO’s Data61
Recording: https://webcast.csiro.au/#/videos/a1662840-df95-42a2-a760-6d7f5ed1a281
Slides: OctSAOInternal
Title: ML Enabled Cyber Deception
Abstract: Cyber Deception is increasingly valuable as a cyber security tool for breach detection, theft discovery, and threat intelligence. The key to successful deception is realistic mimicry of the digital world, so as to entice adversaries to interact with the decoy content, which springs the trap. This talk will outline how our team have leveraged generative machine learning models to automate and scale the generation of realistic (but fake) content and behaviour for use in cyber deception.
Bio: Kristen Moore is a Senior Research Scientist at CSIRO’s Data61. Her research interests are in the use of AI to augment cyber defence capability, with a focus on cyber deception and the generation of fake cyber artefacts. She was the technical lead for the Cyber Security CRC project “Deception as a Service” and is currently the technical lead for advancing AI in the Cyber Security CRC project “Augmenting Cyber Defence Capability”. She was also a finalist for the Women in AI Australia/NZ awards in Cyber Security in 2022. Kristen completed her PhD in mathematics in 2012 at the Max Planck Institute for Gravitational Physics and the Free University Berlin, in Germany. She then held postdoctoral positions at the Mathematical Sciences Research Institute at UC Berkeley, and at Stanford University. In 2014 she joined Gro Intelligence, an agriculture-tech startup company in New York, which has since grown to be named one of Time Magazine’s 100 Most Influential Companies of 2021. In 2017 she joined Telstra, where she led a team to develop and deploy a collaborative Human-AI customer support system that was used by over 1,000 Telstra customer support staff. Since joining CSIRO in 2020 she has filed an international patent application and published in top venues including IEEE Euro S&P and IEEE TPDS.
- Seminar date/time: Wednesday, 21 Sep, at 10:00am to 11:00am AEST
Speaker: Pin-Yu Chen, Principal Research Scientist, IBM Research AI; MIT-IBM Watson AI Lab https://sites.google.com/site/pinyuchenpage/home
Recording: https://webcast.csiro.au/sharevideo/1c47354d-6879-45fc-adfa-d03f65eb6cd8
Slides: Pinyu’s slides
Title: AI Model Inspector: Towards Holistic Adversarial Robustness for Deep Learning
Abstract: In this talk, I will share my research journey toward building an AI model inspector for evaluating, improving, and
exploiting adversarial robustness for deep learning. I will start by providing an overview of research topics concerning adversarial robustness and machine learning, including attacks, defenses, verification, and novel applications. For each topic, I will summarize my key research findings, such as (i) practical optimization-based attacks and their applications to explainability and scientific discovery; (ii) Plug-and-play defenses for model repairing and patching; (iii) attack-agnostic robustness assessment; and (iv) data-efficient transfer learning via model reprogramming. Finally, I will conclude my talk with my vision of preparing deep learning for the real world and the research methodology of learning with an adversary. More information about my research can be found at www.pinyuchen.com
Bio: Dr. Pin-Yu Chen is a principal research scientist at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. Dr. Chen’s recent research focuses on adversarial machine learning and robustness of neural networks. His long-term research vision is to build trustworthy machine learning systems. At IBM Research, he received the honor of IBM Master Inventor and several research accomplishment awards, including an IBM Master Inventor and IBM Corporate Technical Award in 2021. His research works contribute to IBM open-source libraries including Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 50 papers related to trustworthy machine learning at major AI and machine learning conferences, given tutorials at NeurIPS’22, AAAI’22, IJCAI’21, CVPR(’20,’21), ECCV’20, ICASSP’20, KDD’19, and Big Data’18, and organized several workshops for adversarial machine learning. He received the IEEE GLOBECOM 2010 GOLD Best Paper Award and UAI 2022 Best Papper Runner-Up Award.
- Seminar date and time: Thursday, 8th Sep 2022. 3-4pm AEST Sydney time.
Speaker: Ahmed Ibrahim, Lecturer at ECU
Slides :Ahmed-Slides
Recording:https://webcast.csiro.au/#/videos/cbcfac90-7032-4de7-858d-5c3658f8aebb
Title: Improving critical infrastructure security
Abstract: Critical infrastructure security is vital to protect essential services we rely upon and if compromised, could have dire consequences on a nation’s economy, physical security, or public’s health and safety. Defending against cyber attacks from criminal and state actors quickly is challenging as incident response involves both technology and humans working effectively. Ahmed will talk about challenges specific to critical infrastructure security and ongoing work related to improving incident response capability, identity management and data sharing.
Bio: Dr Ahmed Ibrahim is a lecturer in cyber security at Edith Cowan University (ECU) and a researcher at the ECU Security Research Institute. His research is aimed at tackling cyber security problems using a multi-disciplinary focus in areas related to critical infrastructure and Internet of Things (IoT), and cyber security risks in organisations. He frequently gives talks at national and international venues. He has successfully secured external grants from the Government of Western Australia and international research partners. He has had industry engagements on various projects from federal, state, local government, and critical infrastructure providers.
- Seminar date/time: Thursday, 11 Aug, at 3:00pm to 4:00pm AEST
Speaker: Prof. Robert Deng, Singapore Management University. http://www.mysmu.edu/faculty/robertdeng/
Title: Achieving Cloud Data Security and Privacy in Zero Trust Environments
Recording:https://webcast.csiro.au/#/videos/f004a59e-927c-453c-8384-09abf40022aa
Slides:Slides – Robert Deng
Abstract: This talk will provide an overview on the design and implementation of a system for secure access control, search, and computation of encrypted data in the cloud for enterprise users. The system is designed following the “zero trust” paradigm to protect data security and privacy even if cloud storage servers or user accounts are compromised. This is achieved using end-to-end (E2E) encryption in which encryption and decryption operations only take place at client devices. However, encryption must not hinder access, search and even computation of data by authorized users. There are numerous academic publications in this area and the choice of which cryptographic techniques to use could have significant impact on the system’s scalability, efficiency and usability. We will share our experience in the design of the system architecture and selection of cryptographic techniques with a consideration to balance security, performance, and usability.
Bio: Robert Deng is AXA Chair Professor of Cybersecurity, Director of the Secure Mobile Centre, and Deputy Dean for Faculty & Research, School of Computing and Information Systems, Singapore Management University (SMU). His research interests are in the areas of data security and privacy, network security, and applied cryptography. He received the Outstanding University Researcher Award from National University of Singapore, Lee Kuan Yew Fellowship for Research Excellence from SMU, and Asia-Pacific Information Security Leadership Achievements Community Service Star from International Information Systems Security Certification Consortium. He serves/served on the editorial boards of ACM Transactions on Privacy and Security, IEEE Security & Privacy, IEEE Transactions on Dependable and Secure Computing, IEEE Transactions on Information Forensics and Security, Journal of Computer Science and Technology, and Steering Committee Chair of the ACM Asia Conference on Computer and Communications Security. He is a Fellow of IEEE and Fellow of Academy of Engineering Singapore.
- Seminar date/time: Friday, 29 July, at 10:00 am AEST (5pm on Thursday, July 28 PDT)
Speaker: Dr Herbert Lin, Stanford University, US. https://cisac.fsi.stanford.edu/people/herbert_lin
Title: Innovation as the Driver of Long-Term Cyber Insecurity
Recording: https://webcast.csiro.au/#/videos/524c5fcd-2312-4d2c-97e1-17678237c976
Slides:Herb-slides
Abstract: The appetite in modern society for increased functionality afforded by information technology is unlimited. Increased functionality of information technology necessarily entails increased complexity of design and implementation. But complexity is a fundamental driver of insecurity and unreliability in digital systems. Thus, over the long term, a boundless demand for greater functionality leads to increasingly insecure systems—which is why it is impossible to get ahead of the cybersecurity threat. Some ways to mitigate the tradeoff between innovation and security will be discussed.
Bio: Herbert Lin is senior research scholar and Hank J. Holland Fellow at Stanford University. His research interests focus on the policy-related dimensions of offensive operations in cyberspace as instruments of national policy and the security dimensions of information warfare and influence operations. He is also Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies and a member of the Science and Security Board of the Bulletin of Atomic Scientists. In 2016, he served on President Obama’s Commission on Enhancing National Cybersecurity. In 2019, he was elected a fellow of the American Association for the Advancement of Science. In 2020, he was a commissioner on the Aspen Commission on Information Disorder. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990). He received his doctorate in physics from MIT.
- Seminar date/time: Wednesday 20th July 2022. 1-2pm AEST
Speaker: Prof. Tansu Alpcan, The University of Melbourne, Australia. http://www.tansu.alpcan.org
Recording: https://webcast.csiro.au/#/videos/398b3fcb-2733-49f9-a81c-bf687a5dd5fb
Slides: Alpcan-slides
Title: Cyber-Physical System Security and Adversarial Machine Learning
Abstract: As cyber-physical systems become prevalent in safety-critical areas, such as autonomous vehicles, there is an increasing need for protecting them against malicious adversaries. Deep learning methods are expected to play an important role in detecting and countering malicious attacks. However, these powerful algorithms themselves can be targeted by advanced adversaries, which has led to the emergence of “adversarial machine learning” as a research field. This talk will present an overview of our group’s latest research results on the cyber-physical system (CPS) security and adversarial machine learning. The first part will focus on how physics-enhanced adversarial learning can help secure networked autonomous car platoons. The second part will present how coding (information) theory can improve the robustness of deep learning in general with a principled, multi-dimensional approach. The talk will conclude with a brief discussion on our ongoing game-theoretic work and future research directions.
Bio: Tansu Alpcan received a PhD degree in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign (UIUC) in 2006. His research interests include the game, optimisation, control theories, and machine learning applications to security and resource allocation problems in communications, smart grids, and the Internet of Things. He chaired or was an Associate Editor, TPC chair, or TPC member of several prestigious IEEE workshops, conferences, and journals. Tansu Alpcan is the (co-)author of more than 150 journal and conference articles as well as the book “Network Security: A Decision and Game-Theoretic Approach” published by Cambridge University Press (CUP) in 2011. He co-edited the book “Mechanisms and Games for Dynamic Spectrum Allocation” published by CUP in 2014. He has worked as a senior research scientist in Deutsche Telekom Laboratories, Berlin, Germany (2006-2009), and as Assistant Professor (Juniorprofessur) at Technical University Berlin (2009-2011). Tansu is currently with the Dept. of Electrical and Electronic Engineering at The University of Melbourne as a Professor and Reader.
- Seminar date/time: Friday 27th May 2022. 10-11am AEST
Speaker: Prof. David L. Sloss, Professor of Law at Santa Clara University, US
Title: Tyrants on Twitter: Protecting Democracies from Information Warfare.
Slides: David
Recording:https://webcast.csiro.au/#/videos/4af60f5d-c2ef-43b0-807c-a8c4231256cc
Abstract: Tyrants on Twitter explores new ways to mitigate online disinformation and to regulate content on social media platforms to improve the flow of information and strengthen democratic principles.
Sloss calls for cooperation among democratic governments to create a new transnational system for regulating social media to protect Western democracies from information warfare. Drawing on his professional experience as an arms control negotiator, he outlines a novel system of transnational governance that Western democracies can enforce by harmonizing their domestic regulations. And drawing on his academic expertise in constitutional law, he explains why that system—if implemented by legislation in the United States—would be constitutionally defensible, despite likely First Amendment objections. This book is essential reading in a time when disinformation campaigns threaten to undermine democracy.
Bio: David L. Sloss is the John A. and Elizabeth H. Sutro Professor of Law at Santa Clara University. He is the author of The Death of Treaty Supremacy: An Invisible Constitutional Change (Oxford Univ. Press, 2016) and Tyrants on Twitter: Protecting Democracies from Information Warfare (Stanford Univ. Press, forthcoming 2022). He is the co-editor of International Law in the U.S. Supreme Court: Continuity and Change (Cambridge Univ. Press, 2011) and sole editor of The Role of Domestic Courts in Treaty Enforcement: A Comparative Study (Cambridge Univ. Press, 2009). He has also published several dozen book chapters and law review articles. His book on the death of treaty supremacy and his edited volume on international law in the U.S. Supreme Court both won prestigious book awards from the American Society of International Law. Professor Sloss is a member of the American Law Institute and a Counsellor to the American Society of International Law. His scholarship is informed by extensive government experience. Before entering academia, he spent nine years in the federal government, where he worked on U.S.-Soviet arms control negotiations and nuclear proliferation issues.
- Seminar date and time: Thursday, 9th June 2022. 3-4pm AEST Sydney time
Speaker: Dr Meisam Mohammady
Title: Novel approaches to preserving utility in privacy enhancing technologies
Slides: CSCRCPPT
Recording:https://webcast.csiro.au/#/webcasts/innovationasthedriver
Abstract: Significant amount of individual information is being collected and analysed through a wide variety of applications across different industries. While pursuing better utility by discovering knowledge from the data, individuals’ privacy may be compromised during an analysis: corporate networks monitor their online behaviour, advertising companies collect and share their private information, and cybercriminals cause financial damages through security breaches. To address this issue, the data typically goes under certain anonymization techniques, e.g., Property Preserving Encryption (PPE) or Differential Privacy (DP). Unfortunately, most such techniques either are vulnerable to adversaries with prior knowledge, e.g., adversaries who fingerprint the network of a data owner, or require heavy data sanitization or perturbation, both of which may result in a significant loss of data utility. Therefore, the fundamental trade-off between privacy and utility (i.e., analysis accuracy) has attracted significant attention in various settings and scenarios. In line with this track of research, we aim to build utility-maximized and privacy-preserving tools for Internet communications. Such tools can be employed not only by dissidents and whistleblowers, but also by ordinary Internet users on a daily basis. To this end, we combine the development of practical systems with rigorous theoretical analysis, and incorporate techniques from various disciplines such as computer networking, cryptography, and statistical analysis. This presentation covers two different frameworks in some well-known settings. First, I will present the Multi-view approach which preserves both privacy and utility of data in network trace anonymization. Second, I will present the DPOAD (Differentially Private Outsourcing of Anomaly Detection) approach which is a framework enabling privacy preserving anomaly detection in an outsourcing setting.
Bio: Meisam is an active Research Scientist in CSIRO Data61. Meisam’s research focuses on ethical and secure machine learning (private, fair and certifiably robust to adversaries), differential privacy, privacy preserving cloud security auditing and security issues pertaining to Internet of Things (IoT). He earned his PhD from the Concordia Institute for Information Systems Engineering (CIISE) at Concordia University, his MSc from the Department of Electrical Engineering at Ecole Polytechnique Montreal, and his BS from the Department of Electrical Engineering at Sharif University of Technology. He has had several collaborations in terms of research and supervision with both academia and industry such as the Department of Computer Science at the Illinois Institute of Technology (IIT), the University of New South Wales (UNSW), the University of Sydney and Ericsson Research Canada. Meisam has co-authored several papers in top-tier security journals and conferences, and his PhD dissertation has won the Distinguished PhD Dissertation Awards in the category of Engineering and Natural Science PhD dissertations and selected as Concordia University’s nominee for both Canada-wide CAGS and ADESAQ competitions.
- Seminar date and time: 12th May 2022. 3-4pm AEST Sydney time.
Recording: https://webcast.csiro.au/#/videos/0b3094e5-66b1-4660-a4b2-5d3502db3e32
Slides: CREST_CSCRC_POKAPS_seminar-2022_2
Title: Patching and updating impact estimation
Abstract: Due to ever-changing user demands modern dynamic software systems are in constant need to be updated and tailored accordingly. At the same time, the service interruptions commonly caused by traditional software patching and updating processes may not be acceptable in critical environments. Thus, the interest towards runtime (live) patching is growing, specifically in the security context in an attempt to quickly mitigate potential vulnerabilities. This seminar outlines the existing challenges and solutions in the area of live software patching. In addition, novel current work on update-induced impact calculation technique aiding in failed update recovery is presented and discussed.
Bio: Victor Prokhorenko is a researcher with the Centre for Research on Engineering Software Technologies (CREST) at the University of Adelaide. Victor has more than 17 years of experience in software engineering with main areas of expertise including investigation of technologies related to software resilience, trust management and big data solutions hosted within OpenStack private cloud platform. Victor has obtained a PhD in Computer Science from the University of South Australia.
- Thursday 28th April 2022. 3-4pm AEST
Speaker: Assoc Prof. Olya Ohrimenko from University of Melbourne, Australia
Title: Security and Privacy for Machine Learning: Why? Where? and How?
Recording: Not available
Slides: Not available
Abstract: Machine learning on personal and sensitive data raises privacy concerns and creates potential for inadvertent information leakage. However, incorporating analysis of such data in decision making can benefit individuals and society at large (e.g., in healthcare and transportation). In order to strike a balance between these two conflicting objectives, one has to ensure that data analysis with strong privacy guarantees is deployed and securely implemented. My talk will discuss challenges and opportunities in achieving this goal. I will first describe attacks against not only machine learning algorithms but also naïve implementations of algorithms with rigorous theoretical guarantees such as differential privacy. I will then discuss approaches to mitigate these attack vectors including property-preserving data analysis and data-oblivious algorithms.
Bio: Olya Ohrimenko is an Associate Professor at The University of Melbourne that she joined in 2020. Prior to that she was a Principal Researcher at Microsoft Research in Cambridge, UK, where she started as a Postdoctoral Researcher in 2014. Her research interests include data privacy, integrity and security issues that emerge in the cloud computing environment and machine learning applications. She is often involved in the organization of workshops on privacy-preserving machine learning at leading security and machine learning venues. Olya has received solo and joint research grants from Facebook and Oracle and is currently a PI on a joint MURI-AUSMURI grant. She holds a Ph.D. degree from Brown University and a B.CS. (Hons) degree from the University of Melbourne. See https://people.eng.unimelb.edu.au/oohrimenko/ for more information.
- Thursday, 7th April, 3-4PM AEDT
Title: Weak-Key Analysis for BIKE Post-Quantum Key Encapsulation Mechanism
Speaker: Dr Syed W. Shah
Recording: https://webcast.csiro.au/#/videos/19139412-7cbd-4dce-bae1-909ac73b885b
Slides:
Abstract: The evolution of quantum computers poses a serious threat to contemporary public-key encryption (PKE) schemes. To address this impending issue, the National Institute of Standards and Technology (NIST) is currently undertaking the Post-Quantum Cryptography (PQC) standardization project intending to evaluate and subsequently standardize the suitable PQC scheme(s). One such attractive approach, called Bit Flipping Key Encapsulation (BIKE), has made to the final round of the competition. Despite having some attractive features, the IND-CCA security of the BIKE depends on the average decoder failure rate (DFR), a higher value of which can facilitate a particular type of side-channel attack. Although the BIKE adopts a Black-Grey-Flip (BGF) decoder that offers a negligible DFR, the effect of weak-keys on the average DFR has not been fully investigated. Therefore, in this paper, we first perform an implementation of the BIKE scheme, and then through extensive experiments show that the weak-keys can be a potential threat to IND-CCA security of the BIKE scheme and thus need attention from the research community prior to standardization. We also propose a key-check algorithm that can potentially supplement the BIKE mechanism and prevent users from generating and adopting weak keys to address this issue.
Bio: Syed W. Shah received his Ph.D. degree in Computer Science and Engineering from the University of New South Wales (UNSW Sydney), Australia, and an M.S. degree in Electrical and Electronics Engineering from the University of Bradford, U.K. He is currently a Research Fellow at Deakin University, Australia. His research interests include pervasive/ubiquitous computing, user authentication/identification, Internet of Things, signal processing, data analytics, privacy, and security.
- Thursday, March 24th 3-4 PM AEDT, Professor Yongdae Kim, https://syssec.kaist.ac.kr/~yongdaek/
Speaker: Professor Yongdae Kim from KAIST, South Korea
Recording: https://webcast.csiro.au/#/videos/521d1743-771b-41ef-a547-faef3221cd15
Slides:Cellular Testing CSIRO
Title: (Almost) Automatic Testing of Cellular Security
Abstract: The number of mobile devices communicating through cellular networks is expected to reach 17.72 billion by 2024. Despite this, 3GPP standards only provide positive testing specifications (through conformance test suites) that mostly check if valid messages are correctly handled. This talk summarizes our dynamic and static approach to test the security of both cellular modems and networks automatically. I first introduce LTEFuzz (S&P’19), the first systematic framework to dynamically test if cellular modems and networks can correctly handle packets that should be dropped according to the standard. Dynamic analysis is then extended with DoLTEst (Usenix Sec’22), which is a downlink fuzzer for cellular baseband. I then introduce BaseSpec (NDSS’21), which performs a comparative static analysis of baseband binary and cellular specification. I will conclude my talk with future directions for automatic testing.
Bio: Yongdae Kim is a Professor in the Department of Electrical Engineering, and the Graduate School of Information Security at KAIST. He received a PhD degree from the computer science department at the University of Southern California under the guidance of Gene Tsudik in 2002. Before joining KAIST in 2012, he was a professor in the Department of Computer Science and Engineering at the University of Minnesota – Twin Cities for 10 years. He served as a KAIST Chair Professor between 2013 and 2016, and a director of Cyber Security Research Center between 2018 and 2020. He is a program committee chair for ACM WISEC 2022, was a general chair for ACM CCS 2021, and served as an associate editor for ACM TOPS, and a steering committee member of NDSS. His main research interests include novel attacks for emerging technologies, such as drone/self-driving cars, cellular networks and Blockchain.
- Time: Thursday March 10th 3-4pm Sydney time AEDT
Speaker: Dr. Mir Ali Rezazadeh Baee mirali.rezazadeh@qut.edu.au
Slides: CSCRC_DATA61_2022_Theme1.1
Recording:https://webcast.csiro.au/#/videos/a324f4dd-5676-437a-a4d4-f56db69334b7
Title: Anomaly Detection in Key-Management Activities Using Metadata: Case Study and Framework
Abstract: Over the last ten years, the use of cryptography to protect enterprise data has grown, with an associated increase in Enterprise Key-Management System (EKMS) deployment. Such systems are described in the existing literature, including standards (See NIST SP800-57, OASIS KMIP). Metadata analysis techniques have been widely applied in network security to build profiles of normal and anomalous (possibly malicious) behaviour to assist in intrusion detection. However, this approach had not previously been applied to EKMS metadata. Additionally, enterprise encryption tools have been used by attackers to evade detection when performing data exfiltration. This CSCRC research project investigated the use of EKMS metadata as a basis for detection of anomalous behaviour in enterprise networks. We produced datasets containing EKMS metadata, identified relevant metadata elements and developed a framework for anomaly detection based on EKMS metadata analysis. We explored the effectiveness of this approach using a simulated enterprise environment with EKMS deployed. Results show that our framework can accurately detect all anomalous enterprise network activities.
Bio: Dr. Mir Ali Rezazadeh Baee is a Postdoctoral Researcher in the Cyber Security CRC. Ali has a Ph.D. from Queensland University of Technology (QUT), Brisbane, QLD, Australia. He has a strong focus on applied cryptography and information security, with his doctoral thesis examining authentication and key-management protocols for securing safety critical vehicular communications in a privacy-preserving manner. Ali is a member of the International Association for Cryptologic Research (IACR) and Senior Member of the Institute of Electrical and Electronics Engineers (IEEE), associated with societies including: Computer, Vehicular Technology, Intelligent Transportation Systems and Signal Processing. He has actively served as a reviewer for flagship journals such as IEEE Transactions on Vehicular Technology, IEEE Transactions on Dependable and Secure Computing, and conferences including the IACR’s EUROCRYPT and ASIACRYPT.
- Time: Thursday March 10th 3-4pm Sydney time AEDT
Date/time: February 10th 3-4pm Sydney time AEDT
Speaker: Dr Yinhao Jiang
Title: Privacy Concerns Raised by Pervasive User Data Collection From Cyberspace and Their Countermeasures
Recording https://webcast.csiro.au/#/videos/28d64065-f1e5-46a7-b4ce-56a91ca29bec
Slides
Abstract: The virtual dimension called `Cyberspace’ built on internet technologies has served people’s daily lives for decades. Now it offers advanced services and connected experiences with the developing pervasive computing technologies that digitise, collect, and analyse users’ activity data. This changes how user information gets collected and impacts user privacy at traditional cyberspace gateways, including the devices carried by users for daily use. This work investigates the impacts and surveys privacy concerns caused by this data collection, namely identity tracking from browsing activities, user input data disclosure, data accessibility in mobile devices, security of delicate data transmission, privacy in participating sensing, and identity privacy in opportunistic networks. Each of the surveyed privacy concerns is discussed in a well-defined scope according to the impacts mentioned above. Existing countermeasures are also surveyed and discussed, which identifies corresponding research gaps. To complete the perspectives, three complex open problems, namely trajectory privacy, privacy in smart metering, and involuntary privacy leakage with ambient intelligence, are briefly discussed for future research directions before a succinct conclusion to our survey at the end.
Bio: Yinhao Jiang is a Postdoctoral Research Fellow in Cyber Security CRC at the Charles Sturt University. He received the PhD degree on the functional encryption from the University of Wollongong, in 2018. He is currently focusing on functional encryption for privacy-enhancing technologies. His research interests also include IoT anonymity and privacy quantification. Please contact him at yjiang@csu.edu.au.
To register to our mailing list please send an email to sao@csiro.au
For more information contact Co-leaders Shuo Wang (External Speakers) and Sharif Abuadbba (Internal CSCRC Research)
Past Events
![]() ![]() ![]() |