PRADA: Privacy Risk Assessment and Defense Apparatus

The Problem

Organisations and businesses increasingly collect customers' personal information to improve their services and target their marketing. Yet a recent survey shows that only 40% of Australian businesses were aware of their legal obligations and had some security measures in place to protect personal data. More recently, the Privacy Amendment (Re-identification Offence) Bill 2016 demands that privacy risks be actively assessed and mitigated. Despite this growing demand for risk assessment, there is no widely accepted tool or methodology that can inform data custodians of the level of privacy risk they incur when their collected data is released or shared.

What is PRADA?

PRADA stands for Privacy Risk Assessment and Defense Apparatus.

PRADA is a research and development project at the Networks Group of Data61. Its main goal is to build a production-quality privacy management dashboard for overseeing data disclosure. Users will be able to analyse and modify their datasets using built-in privacy risk assessment tools and privacy-preserving algorithms. By examining the resulting privacy risk metrics, users can find a reasonable balance between the privacy and utility of their dataset, and understand the potential privacy implications of releasing or sharing the data, as the sketch below illustrates.
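PRADA's actual risk metrics are not specified here, so the following is only a minimal sketch of the kind of metric such a dashboard could report: a k-anonymity-style re-identification measure computed over a set of quasi-identifier columns. The function name, column names, and example data are all hypothetical.

```python
# Illustrative sketch only: shows one common style of privacy risk metric,
# based on the size of quasi-identifier groups (k-anonymity). It is not
# PRADA's actual metric; all names below are hypothetical.
import pandas as pd

def reidentification_risk(df: pd.DataFrame, quasi_identifiers: list) -> dict:
    """Return simple k-anonymity statistics for the given quasi-identifiers."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return {
        "k": int(group_sizes.min()),                      # smallest group size
        "unique_records": int((group_sizes == 1).sum()),  # groups of size 1
        "avg_risk": float((1 / group_sizes).mean()),      # mean 1/k per group
    }

records = pd.DataFrame({
    "postcode": ["2000", "2000", "2010", "2010", "2010"],
    "age":      [34, 34, 51, 51, 28],
    "income":   [72000, 68000, 91000, 88000, 55000],
})
print(reidentification_risk(records, ["postcode", "age"]))
# {'k': 1, 'unique_records': 1, 'avg_risk': 0.666...}
```

A metric like this makes the privacy/utility trade-off concrete: generalising the quasi-identifiers (e.g. coarsening age into bands) raises k and lowers the risk scores, at the cost of some analytical precision.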

PRADA Architecture

PRADA consists of a web-based risk assessment dashboard and a cloud-based analytics and modelling backend. From the dashboard, the user selects the risk assessment and/or privacy-preserving technique(s) to apply to their dataset; the request is then submitted via a REST API for backend processing. Because datasets can be large, the analysis and transformation of the data can be time-consuming, so it is performed in the cloud to take advantage of the scalability and cost efficiency of cloud computing. The output is a risk-reduced (possibly synthetic) dataset, together with a set of quantitative risk scores evaluated by the privacy risk metrics.
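To make the dashboard-to-backend flow concrete, here is a hedged sketch of how such a request might be submitted and polled. The endpoint URL, payload fields, and technique names are assumptions for illustration; PRADA's real REST API is not documented here.

```python
# Hypothetical client-side sketch of the dashboard -> REST API -> cloud
# backend flow described above. Endpoint and payload shape are assumed.
import requests

job = {
    "dataset_id": "customer-survey-2017",
    "risk_metrics": ["reidentification"],        # assessments to run
    "privacy_techniques": [                      # transformations to apply
        {"name": "generalisation", "columns": ["postcode", "age"]},
    ],
}

resp = requests.post("https://prada.example.org/api/v1/jobs", json=job, timeout=30)
resp.raise_for_status()
job_id = resp.json()["job_id"]  # processing is asynchronous: poll for results

result = requests.get(f"https://prada.example.org/api/v1/jobs/{job_id}",
                      timeout=30).json()
print(result["risk_scores"], result["output_dataset_uri"])
```

An asynchronous job model like this fits the architecture described above: long-running analyses of large datasets run in the cloud while the dashboard simply polls for the risk scores and the URI of the risk-reduced dataset.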

Related Publications

  1. M. Ikram, S. Farooqi, E. De Cristofaro, M. A. Kaafar, G. Jourjon, Z. Shafiq, "Measuring, Characterising and Detecting Facebook Like Farms", accepted in ACM Transactions on Privacy and Security (TOPS), 2017.
  2. A. Friedman, S. Berkovsky, M. A. Kaafar, "A Differential Privacy Framework for Matrix Factorization Recommender Systems", in User Modeling and User-Adapted Interaction: The Journal of Personalization Research (UMUAI), 2016.
  3. A. Chaabane, E. De Cristofaro, M. A. Kaafar, E. Uzun, "Privacy in Content-Oriented Networking: Threats and Countermeasures", in ACM SIGCOMM Computer Communication Review (CCR) 43(3): 25-33, July 2013.
  4. I. Anggono, H. Haddadi, M. A. Kaafar, "Preserving Privacy in Geo-Targeted Advertising", in ACM WSDM TargetAd Workshop, San Francisco, 2016.
  5. T. Chen, R. Boreli, M. A. Kaafar, A. Friedman, "On the Effectiveness of Obfuscation Techniques in Online Social Networks", in Privacy Enhancing Technologies Symposium (PETS), July 2014.
  6. T. Chen, A. Chaabane, P.-U. Tournoux, M. A. Kaafar, R. Boreli, "How Much Is Too Much? Leveraging Ads Audience Estimation to Evaluate Public Profile Uniqueness", in PETS 2013.