Privacy Preserving Machine Learning

CCS 2019 Workshop
London, November 15

Scope

This one-day workshop focuses on privacy-preserving techniques for training, inference, and disclosure in large-scale data analysis, in both distributed and centralized settings. We have observed increasing interest from the ML community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure. Simultaneously, the systems security and cryptography communities have proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring a range of approaches, including:

  • secure multi-party computation techniques for ML
  • homomorphic encryption techniques for ML
  • hardware-based approaches to privacy-preserving ML
  • centralized and decentralized protocols for learning on encrypted data
  • differential privacy: theory, applications, and implementations
  • statistical notions of privacy including relaxations of differential privacy
  • empirical and theoretical comparisons between different notions of privacy
  • trade-offs between privacy and utility

We believe a forum that unifies these different perspectives and starts a discussion about the relative merits of each approach will be very valuable. The workshop will also serve as a venue for connecting people from different communities interested in this problem, and we hope it will foster fruitful long-term collaborations.

Call For Papers & Important Dates

Download Full CFP


Submission deadline: July 1, 2019 (11:59pm AoE) [Extended]
Notification of acceptance: August 7, 2019
CCS early registration deadline: October 1, 2019 (11:59pm BST)
Workshop: November 15, 2019

Submission Instructions

Submissions in the form of extended abstracts must be at most 4 pages long (excluding references), using the CCS format. Submissions of work recently published or currently under review are welcome. Submissions should be anonymized. The workshop will not have formal proceedings, but authors of accepted abstracts can choose to have a link to arXiv or a PDF published on the workshop webpage.

Submit Your Abstract

Travel Grants

Thanks to our generous sponsors, we are able to provide a limited number of travel grants of up to $800 to partially cover the expenses of PPML attendees who have not received other travel support from CCS this year. To apply, please send an email to ppml19@easychair.org with the subject “PPML19 Travel Grant Application”, including a half-page statement of purpose and a summary of anticipated travel expenses. If you are an undergraduate or graduate student, please also have a half-page recommendation letter supporting your application sent to the same email address by the deadline. The deadline for applications is September 23, 2019 (11:59pm AoE); notifications will be sent by September 30. Please feel free to email us with any questions.

Invited Speakers

Accepted Papers

Marshall Ball, Brent Carmer, Tal Malkin, Mike Rosulek and Nichole Schimanski.
Garbled Neural Networks are Practical   
Frank Blom, Niek J. Bouman, Berry Schoenmakers and Niels de Vreede.
Efficient Secure Ridge Regression from Randomized Gaussian Elimination   
Nitin Agrawal, Ali Shahin Shamsabadi, Matthew Kusner and Adrià Gascón.
QUOTIENT: Secure Two-Party Neural Network Training and Prediction via Oblivious Transfer   
Yuantian Miao, Ben Zi Hao Zhao, Minhui Xue, Chen Chao, Lei Pan, Jun Zhang, Dali Kaafar and Yang Xiang.
The Audio Auditor: Participant-Level Membership Inference in Internet of Things Voice Services   
Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Raghav Bhaskar and Mohamed Ali Kaafar.
On Inferring Training Data Attributes in Machine Learning Models   
Harsh Chaudhari, Ashish Choudhury, Arpita Patra and Ajith Suresh.
ASTRA: High Throughput 3PC over Rings with Application to Secure Prediction   
Brendan Avent, Javier Gonzalez, Tom Diethe, Andrei Paleyes and Borja Balle.
Automatic Discovery of Privacy-Utility Pareto Fronts   
Marco Romanelli, Konstantinos Chatzikokolakis and Catuscia Palamidessi.
Optimal Obfuscation Mechanisms via Machine Learning and Applications to Location Privacy   
Sahar Mazloom, Le Phi Hung, Samuel Ranellucci and S. Dov Gordon.
Secure parallel computation on national scale volumes of data   
James Bell, Aurélien Bellet, Adria Gascon and Tejas Kulkarni.
Private Protocols for $U$-statistics in the Local Model and Beyond   
Mark Abspoel, Niek J. Bouman, Berry Schoenmakers and Niels de Vreede.
Fast Secure Comparison for Medium-Sized Integers and Its Application in Binarized Neural Networks   
Devin Reich, Ariel Todoki, Rafael Dowsley, Martine De Cock and Anderson Nascimento.
Secret Sharing based Private Text Classification   
Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaafar and Haojin Zhu.
Differentially Private Data Sharing: Sharing Models versus Sharing Data   
Antti Koskela, Joonas Jälkö and Antti Honkela.
Computing Exact Guarantees for Differential Privacy   
Prasad Naldurg and Karthikeyan Bhargavan.
A verification framework for secure machine learning   
Thijs Veugen, Bart Kamphorst, Marie Beth van Egmond and Natasja van de L'Isle.
Privacy-Preserving Coupling of Vertically-Partitioned Databases and Subsequent Training with Gradient Descent   
Sebastian P. Bayerl, Ferdinand Brasser, Christoph Busch, Tommaso Frassetto, Patrick Jauernig, Jascha Kolberg, Andreas Nautsch, Korbinian Riedhammer, Ahmad-Reza Sadeghi, Thomas Schneider, Emmanuel Stapf, Amos Treiber and Christian Weinert.
Privacy-Preserving Speech Processing via STPC and TEEs   
Ranya Aloufi, Hamed Haddadi and David Boyle.
Emotionless: Privacy-Preserving Speech Analysis for Voice Assistants   
Lukas Burkhalter, Alexander Viand, Anwar Hithnawi and Hossein Shafagh.
Robust Secure Aggregation for Privacy-Preserving Federated Learning with Adversaries   
Anders Dalskov, Daniel Escudero and Marcel Keller.
Secure Evaluation of Quantized Neural Networks   
Mohammad Yaghini, Bogdan Kulynych and Carmela Troncoso.
Disparate Vulnerability: on the Unfairness of Privacy Attacks Against Machine Learning   

Organization


Workshop organizers

  • Borja Balle
  • Adrià Gascón (Alan Turing Institute & Warwick)
  • Olya Ohrimenko (Microsoft Research)
  • Mariana Raykova (Google)
  • Phillipp Schoppmann (HU Berlin)
  • Carmela Troncoso (EPFL)

Program Committee

  • Pauline Anthonysamy (Google)
  • Brendan Avent (USC)
  • Carsten Baum (BIU)
  • Aurélien Bellet (Inria)
  • Elette Boyle (IDC Herzliya)
  • Kamalika Chaudhuri (UCSD)
  • Giovanni Cherubin (EPFL)
  • Graham Cormode (University of Warwick)
  • Morten Dahl (Dropout Labs)
  • Christos Dimitrakakis (Chalmers University of Technology)
  • Jack Doerner (Northeastern)
  • Jamie Hayes (UCL)
  • Dali Kaafar (Macquarie University and Data61-CSIRO)
  • Peter Kairouz (Google)
  • Shiva Kasiviswanathan (Amazon)
  • Marcel Keller (Data61)
  • Niki Kilbertus (Cambridge University)
  • Ágnes Kiss (TU Darmstadt)
  • Nadin Kokciyan (King's College London)
  • Boris Köpf (Microsoft Research)
  • Aleksandra Korolova (USC)
  • Eleftheria Makri (KU Leuven)
  • Sebastian Meiser (Visa)
  • Luca Melis (Amazon)
  • Kartik Nayak (Duke University)
  • Catuscia Palamidessi (École Polytechnique & INRIA)
  • Peter Rindal (Visa Research)
  • Benjamin Rubinstein (University of Melbourne)
  • Anand Sarwate (Rutgers University)
  • Thomas Schneider (TU Darmstadt)
  • Peter Scholl (Aarhus University)
  • Or Sheffet (University of Alberta)
  • Nigel Smart (KU Leuven)
  • Adam Smith (Boston University)
  • Florian Tramer (Stanford)
  • Muthuramakrishnan Venkitasubramaniam (Rochester)
  • Xiao Wang (Northwestern University)
  • Kevin Yeo (Google)
  • Pinar Yolum (Utrecht University)
  • Yang Zhang (CISPA Helmholtz Center)

Sponsors


Previous Editions