In an ever-growing digitised world, data has become one of the most valuable commodities across many domains. In turn, Machine Learning (ML) models have emerged as the dominant tools for transforming that data into actionable insights and new knowledge, giving rise to Artificial Intelligence (AI) systems.
While this combination has produced numerous environmental, social, and economic benefits, there have also been increasing instances where data, and the ML systems built on it, have been kept and used in controversial or irresponsible ways. This has led to negative outcomes such as privacy breaches affecting thousands of individuals, discrimination against groups of people in sectors such as health or justice, and leaks of sensitive information.
Our group focuses on the topic of Privacy and Confidentiality in the context of the Responsible and Trusted use of Data and the building and use of ML/AI models. Our vision is to promote the adoption of privacy-by-design, especially in ML/AI systems, for Australia’s data-driven future. Our research is twofold:
- Understand. First, we seek to identify, qualify, and quantify privacy risks in data and ML/AI systems.
- Act. Second, we develop novel, efficient, and practical Privacy-Preserving ML systems, where ML models and algorithms are either used to mitigate existing data privacy issues or enhanced in novel ways to lower their own privacy-related vulnerabilities.
To achieve this research vision, our vibrant team of researchers collaborates with world-class experts across multiple domains to tackle real-world challenges, build and deploy practical solutions, and provide trusted advice to industry and government agencies.