About Us

The Privacy Technology (PT) Group is part of CSIRO’s Data61 and operates within the Software and Computational Systems (SCS) program. We are a key contributor to the Digital Trust research theme, developing technologies that help ensure digital systems are not only secure and private, but also fair and trustworthy.

Our mission is to advance privacy-enhancing technologies and build AI and data systems that uphold the principles of privacy, confidentiality, equity, and trust. As digital technologies become more embedded in everyday life, these values are essential to ensure safe and responsible innovation that benefits all Australians.

At the PT Group, we:

  • Assess risks of information leakage and unintended bias in data and AI systems.
  • Design technical solutions that safeguard sensitive information while maintaining data/AI utility.
  • Address fairness and equity concerns in how data and AI technologies are developed and deployed.
  • Develop privacy-focused, human-aligned AI systems and deliver practical tools and guidance for government and industry.

Through active collaboration with experts across disciplines, sectors, and jurisdictions, we aim to shape a digital future that is privacy-preserving, inclusive, and worthy of public trust.

Our Vision

We envision a digital future where data and AI systems are seamlessly integrated in ways that are privacy-preserving, trustworthy, and fair—by design.

In our view, privacy risks and equity challenges arise not only from data or AI in isolation, but from their interaction—especially through activities like training, fine-tuning, customisation, and real-time interaction with AI models. Our group addresses these challenges holistically by working across both sides of this dynamic interface:

  • Private Data Foundations: We design safeguards that enable responsible data use.
  • Private and Confidential AI/ML Systems: We build models and algorithms that maintain integrity and trust across the AI lifecycle.

By bridging the gap between sensitive data and adaptive AI systems, our research ensures that the end-to-end pipeline—from data collection to AI deployment—is privacy-preserving, inclusive, and trustworthy.

Our Research and Business Areas

The PT Group brings together two complementary research teams, working across the full spectrum of privacy technology:

  • Data Privacy Team (led by Dr Paul Tyler): Specialises in privacy risk assessment, de-identification, privacy-preserving data analytics, and privacy governance frameworks to support safe and productive data use. Core areas include trusted data provenance, data anonymisation and synthesis, privacy-aware data sharing and linkage, differential privacy (illustrated in the sketch below), and the practical implementation of regulatory compliance, helping organisations unlock data-driven insights while boosting productivity and maintaining public trust.
  • Private & Confidential AI Team (led by Dr David Smith): Focuses on developing AI systems that safeguard privacy and uphold confidentiality throughout their lifecycle, enabling the trustworthy and effective use of AI technologies. Core research areas include privacy-preserving machine learning, federated learning, sensitive-information-agnostic AI (fairness-aware AI), AI-explainability-based machine unlearning, and privacy-focused AI model testing and risk assessment—advancing responsible, scalable AI solutions that drive efficiency and unlock value while protecting sensitive information in AI systems.

Together, these teams drive innovation in privacy-first digital design and data stewardship.
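
For readers unfamiliar with differential privacy, one of the core techniques named above, the following minimal Python sketch shows the classic Laplace mechanism: a numeric query answer is released with noise calibrated to the query's sensitivity and a privacy parameter epsilon, so that any single individual's presence in the data has a mathematically bounded effect on the output. This is an orientation example only; the dataset, query, and parameter values are hypothetical and do not reflect the group's internal tooling.

```python
import numpy as np

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Release a query answer under epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon): the smaller
    epsilon is, the more noise is added and the stronger the privacy
    guarantee becomes.
    """
    scale = sensitivity / epsilon
    return true_answer + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: privately release a counting query.
# A count changes by at most 1 when one person is added or removed,
# so its sensitivity is 1.
ages = [34, 51, 27, 45, 62, 38]                       # illustrative data only
true_count = sum(1 for a in ages if a > 40)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {noisy_count:.2f}")
```

Production systems layer considerable machinery on top of this primitive (privacy budget accounting, composition, post-processing), but calibrating noise to sensitivity and epsilon is the core idea.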

Leadership

  • Group Leader, Principal Research Scientist
  • Group-level Supervisor, Principal Research Scientist
  • Dr Paul Tyler, Team Leader of the Data Privacy Team, Principal Research Projects Officer
  • Dr David Smith, Team Leader of the Private & Confidential AI Team, Principal Research Scientist

Research Highlights

News & Events

  • Sep 2025: Our ON Prime team, which includes Youyang Qu, Ming Ding, and researchers from UTS and Griffith University, met online with Lee Hickin, Executive Director of the National AI Centre (NAIC), to discuss approaches and tools for enhancing AI safety in Australian industry.
  • Sep 2025: Our ON Prime team, which includes Youyang Qu, Ming Ding, and researchers from UTS and Griffith University, met online with Angela Shi, the CEO of Empathetic-AI, to discuss approaches to responsible and trustworthy AI testing and development.
  • Sep 2025: Ming Ding attended the fourth bimonthly AI Working Group meeting at the Australian Research Council (ARC) and gave a presentation on our GovAI use cases along with several demos.
  • Sep 2025: Mengmeng Yang delivered an online guest lecture on “Privacy vs. Fairness in Machine Learning: Friends or Foes?” for the ICT30016 Innovative Project course at Swinburne University of Technology.
  • Sep 2025: Ming Ding delivered a presentation on Federated Learning for Large AI Models at the Australasian College of Physical Scientists & Engineers in Medicine (ACPSEM) AI Seminar Workshop hosted by the Medical School at UNSW.
  • Aug 2025: Ming Ding delivered an online guest lecture for the Swinburne University of Technology course ICT30016 Innovative Project, focusing on privacy and confidentiality preservation in AI.
  • Aug 2025: Ming Ding attended the AsiaCCS’25 conference in Hanoi, Vietnam, where he chaired a workshop on Privacy in LLMs and presented three research papers.
  • Aug 2025: Thierry Rakotoarivelo co-authored input to CSIRO’s responses to Government consultations on the Productivity Commission report and the Cybersecurity Strategy Horizon 2 report.
  • Aug 2025: Ming Ding gave a talk titled “Protecting Privacy in AI: Risks, Regulations, and Responsible Innovation” at the School of Systems and Computing, UNSW Canberra, at the Australian Defence Force Academy.
  • July 2025: Ming Ding gave a talk titled “AI and Privacy: What SMEs Need to Know” as part of CSIRO’s SME Connections Workshop series—Innovate to Grow: Digitech and AI.
  • July 2025: We attended the AI Government Showcase Event in Canberra to share our work on privacy-preserving and safe AI, and to connect with government agencies including DSS, ATO, AFF, GovAI, and the DTA.
  • June 2025: We attended DISR’s debriefing session on the updated guidance document for watermarking and labelling. We also met with Lee Hickin, the new Executive Director at DISR NAIC, to discuss the next steps for the Australian Voluntary AI Safety Standard.
  • June 2025: Youyang Qu and Ming Ding submitted two use cases of AI applications to the GovAI Team at the Department of Finance to be included in their closed beta trial.
  • May 2025: Our joint research proposal with CISPA Germany on “Exploring the Interplay Between Fairness and Privacy Using Quantitative Information Flow” was awarded funding by the German Research Foundation (DFG), with the Data61 team (Ming Ding and colleagues) contributing to workshops and publications.
  • May 2025: Two papers from our group were accepted to the Privacy Enhancing Technologies Symposium 2025 (PETS’25), a leading venue in privacy research and technology.
    • “Do It to Know It: Reshaping the Privacy Mindset of Computer Science Undergraduates”
    • “SoK: Private Knowledge Sharing in Distributed Learning”
  • May 2025: Thierry Rakotoarivelo gave a presentation on sensitive data controls and related research on privacy-enhanced geo-referenced data to the Atlas of Living Australia (ALA) monthly team meeting.
  • April 2025: Two papers from our group were accepted to the ACM ASIA Conference on Computer and Communications Security 2025 (ASIACCS’25), a leading venue in cybersecurity.
    • “SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation”
    • “POSTER: When Models Speak Too Much: Privacy Leakage on Large Language Models”