In an increasingly digitised world, data has become a critical commodity across many domains. Machine Learning (ML) models have emerged as the dominant tools for transforming that data into actionable insights and new knowledge, underpinning the rise of Artificial Intelligence (AI) systems.
While this combination has produced numerous environmental, social, and economic benefits, there have also been increasing instances where data, and the ML systems built on it, were stored and used in controversial or irresponsible ways. In turn, this has led to negative outcomes such as privacy breaches affecting thousands of individuals, discrimination against groups of people in sectors such as health or justice, and leaks of sensitive information.
Our group focuses on Privacy and Confidentiality in the context of the responsible and safe development and adoption of ML/AI models and their underlying use of data. Our vision is to promote the adoption of privacy-by-design, especially in ML/AI systems, for Australia’s data-driven future. Our research is two-fold:
- Understand. First, we seek to identify, qualify, and quantify privacy risks in data and ML/AI systems.
- Act. Second, we develop novel, efficient, and practical Privacy-Preserving ML systems, in which ML models and algorithms are either enhanced to lower their privacy-related vulnerabilities, or used to mitigate existing data-related privacy issues.
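To give a flavour of what privacy-preserving data analysis can look like, the sketch below shows one widely used building block: the Laplace mechanism from differential privacy, applied to a counting query. This is purely an illustrative example of the general technique, not a description of our group's specific methods or tooling; the function names and parameters are ours.

```python
import math
import random

def dp_count(values, predicate, epsilon: float, rng: random.Random) -> float:
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the noise scale is 1 / epsilon.
    Smaller epsilon means stronger privacy and noisier answers.
    """
    true_count = sum(1 for v in values if predicate(v))
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) via the inverse-CDF transform.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical usage: count records with value >= 50, with epsilon = 1.0.
ages = list(range(100))
noisy = dp_count(ages, lambda a: a >= 50, epsilon=1.0, rng=random.Random(0))
```

The released value is close to the true count of 50, but the added noise means no single individual's presence or absence in the dataset can be confidently inferred from the output.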
To achieve this research vision, our vibrant team collaborates with world-class peers across multiple domains to conduct research on real-world challenges, build and deploy practical solutions, and provide trusted advice to industry and government agencies.