About Us

The Privacy Technology (PT) Group is part of CSIRO’s Data61 and operates within the Software and Computational Systems (SCS) program. We are a key contributor to the Digital Trust research theme, developing technologies that help ensure digital systems are not only secure and private, but also fair and trustworthy.

Our mission is to advance privacy-enhancing technologies and build AI and data systems that uphold the principles of privacy, confidentiality, equity, and trust. As digital technologies become more embedded in everyday life, these values are essential to ensure safe and responsible innovation that benefits all Australians.

At the PT Group, we:

  • Assess risks of information leakage and unintended bias in data and AI systems.
  • Design technical solutions that safeguard sensitive information while maintaining data/AI utility.
  • Address fairness and equity concerns in how data and AI technologies are developed and deployed.
  • Develop privacy-focused, human-aligned AI systems and deliver practical tools and guidance for government and industry.

Through active collaboration with experts across disciplines, sectors, and jurisdictions, we aim to shape a digital future that is privacy-preserving, inclusive, and worthy of public trust.

Our Vision

We envision a digital future where data and AI systems are seamlessly integrated in ways that are privacy-preserving, trustworthy, and fair—by design.

In our view, privacy risks and equity challenges arise not only from data or AI in isolation, but from their interaction—especially through activities like training, fine-tuning, customisation, and real-time interaction with AI models. Our group addresses these challenges holistically by working across both sides of this dynamic interface:

  • Private Data Foundations: We design safeguards that enable responsible data use.
  • Private and Confidential AI/ML Systems: We build models and algorithms that maintain integrity and trust across the AI lifecycle.

By bridging the gap between sensitive data and adaptive AI systems, our research ensures that the end-to-end pipeline—from data collection to AI deployment—is privacy-preserving, inclusive, and trustworthy.

Our Research and Business Areas

The PT Group brings together two complementary research teams, working across the full spectrum of privacy technology:

  • Data Privacy Team (led by Dr Paul Tyler):
    Specialises in privacy risk assessment, de-identification, privacy-preserving data analytics, and privacy governance frameworks to support safe and productive data use. Core areas include trusted data provenance, data anonymisation and synthesis, privacy-aware data sharing and linkage, differential privacy, and the practical implementation of regulatory compliance, helping organisations unlock data-driven insights while boosting productivity and maintaining public trust.
  • Private & Confidential AI Team (led by Dr David Smith):
    Focuses on developing AI systems that safeguard privacy and uphold confidentiality throughout their lifecycle, enabling the trustworthy and effective use of AI technologies. Core research areas include privacy-preserving machine learning, federated learning, sensitive-information-agnostic AI (fairness-aware AI), AI-explainability-based machine unlearning, and privacy-focused AI model testing and risk assessment, advancing responsible, scalable AI solutions that drive efficiency and unlock value while protecting sensitive information in AI systems.
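To make one of the techniques named above concrete, here is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy (one of the Data Privacy Team's core areas). This is an illustrative toy with hypothetical names and parameters of our own choosing, not CSIRO code or a production-ready mechanism:

```python
import math
import random

def dp_count(records, predicate, epsilon, rng=None):
    """Return a differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1: adding or removing one record
    changes the true count by at most 1. Adding Laplace noise with
    scale 1/epsilon therefore yields epsilon-differential privacy.
    """
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) by inverse-CDF from a uniform draw.
    u = rng.random() - 0.5          # uniform in (-0.5, 0.5)
    scale = 1.0 / epsilon
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Example: privately count how many ages are 30 or over.
ages = [25, 34, 41, 29, 52]
noisy = dp_count(ages, lambda a: a >= 30, epsilon=1.0)
```

A smaller `epsilon` means stronger privacy but noisier answers; in practice this trade-off, plus careful accounting of sensitivity across many queries, is where most of the engineering effort lies.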

Together, these teams drive innovation in privacy-first digital design and data stewardship.

Leadership

Group Leader, Principal Research Scientist

Team Leader of the Data Privacy Team, Principal Research Projects Officer

Team Leader of the Private & Confidential AI Team, Principal Research Scientist

Research Highlights

News & Events

  • May 2025: Our joint research proposal with CISPA Germany on “Exploring the Interplay Between Fairness and Privacy Using Quantitative Information Flow” was awarded funding by the German Research Foundation (DFG), with the Data61 team (Ming Ding and colleagues) contributing to workshops and publications.
  • May 2025: Two papers from our group were accepted to the Privacy Enhancing Technologies Symposium 2025 (PETS’25), a leading venue in privacy research and technology.
    • “Do It to Know It: Reshaping the Privacy Mindset of Computer Science Undergraduates”
    • “SoK: Private Knowledge Sharing in Distributed Learning”
  • May 2025: Thierry Rakotoarivelo presented on sensitive data controls and our research on privacy-enhanced geo-referenced data at the Atlas of Living Australia (ALA) monthly team meeting.
  • April 2025: Two papers from our group were accepted to the ACM ASIA Conference on Computer and Communications Security 2025 (ASIACCS’25), a leading venue in cybersecurity.
    • “SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation”
    • “POSTER: When Models Speak Too Much: Privacy Leakage on Large Language Models”
  • March 2025: Ming Ding presented at the 36th Australian Sensitive Data Interest Group (AUSDIG) Meeting, discussing the importance of balancing data privacy with other safety objectives in AI applications.
  • March 2025: Ming Ding gave an invited talk on differential privacy in personalised federated learning at the IEEE Signal Processing Society (SPS) Webinar.
  • March 2025: One paper from our group was accepted to the ACM Web Conference 2025 (WWW’25), a leading venue in the intersection of AI, data, and systems.
    • “Beyond Single Tabs: A Transformative Few-Shot Approach to Multi-Tab Website Fingerprinting Attacks”
  • March 2025: One paper from our group was accepted to the ACM Conference on Computer and Communications Security 2025 (CCS’25), a leading venue in cybersecurity.
    • “Split Unlearning”
  • March 2025: One paper from our group was accepted to the IEEE International Conference on Data Engineering 2025 (ICDE’25), a leading venue in data engineering.
    • “Truss Decomposition under Edge Local Differential Privacy”
  • February 2025: The collaboration between UTS (A/Prof Bo Liu) and the Privacy Technology Group (led by Ming Ding) has secured the ARC Discovery Project Grant DP250100463 for the project titled “The Paradox of Generative Data: Ensuring Security and Privacy.”
  • February 2025: One paper from our group was accepted to the Late-Breaking Work track at the ACM Conference on Human Factors in Computing Systems (CHI 2025).
    • “Privacy Meets Explainability: Managing Confidential Data and Transparency Policies in LLM-Empowered Science”