Call for papers: Privacy Preserving Machine Learning - PriML and PPML Joint Edition -- NeurIPS 2020 Workshop

Virtual workshop: December 11 or 12, 2020
Website: https://ppml-workshop.github.io/

# Description

This one-day workshop focuses on privacy-preserving techniques for machine learning and disclosure in large-scale data analysis, in both the distributed and centralized settings, and on scenarios that highlight the importance of and need for these techniques (e.g., via privacy attacks). There is growing interest from the Machine Learning (ML) community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure. At the same time, the systems security and cryptography community has proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring the range of approaches listed below. Additionally, given the tension between the adoption of machine learning technologies and ethical, technical, and regulatory concerns about privacy, as highlighted during the COVID-19 pandemic, we invite submissions to a special track on this topic.
* Special track: privacy of ML and data analytics in a pandemic (e.g., secure contact tracing)
* Differential privacy and other statistical notions of privacy: theory, applications, and implementations
* Secure multi-party computation techniques for ML
* Learning on encrypted data
* Hardware-based approaches to privacy-preserving ML
* Trade-offs between privacy and utility
* Privacy attacks
* Federated and decentralized privacy-preserving algorithms
* Programming languages for privacy-preserving data analysis
* Empirical and theoretical comparisons between different notions of privacy
* Policy-making aspects of data privacy
* Privacy in autonomous systems
* Privacy in online social networks
* Interplay between privacy and adversarial robustness in machine learning
* Relations between privacy, fairness, and transparency
* Applications of privacy-preserving ML

# Submission instructions

Submissions in the form of extended abstracts must be at most 4 pages long (not including references; additional supplementary material may be submitted but may be ignored by reviewers), non-anonymized, and adhere to the NeurIPS format. We encourage submission of work that is new to the privacy-preserving machine learning community. Submissions based solely on work that has been previously published in conferences on machine learning and related fields are not suitable for the workshop. On the other hand, we welcome submissions of work currently under review as well as relevant work recently published in privacy and security venues. The workshop will not have formal proceedings, but authors of accepted abstracts can choose to have a link to arXiv or a PDF added to the workshop webpage.
- Submission URL: https://easychair.org/my/conference?conf=ppml2020
- Submission deadline: Oct 02, 2020
- Notification of acceptance: Oct 23, 2020

# Organizers

Borja Balle (DeepMind)
James Bell (The Alan Turing Institute)
Aurélien Bellet (Inria)
Kamalika Chaudhuri (University of California, San Diego)
Adria Gascon (Google)
Antti Honkela (University of Helsinki)
Antti Koskela (University of Helsinki)
Casey Meehan (University of California, San Diego)
Olya Ohrimenko (University of Melbourne)
Mijung Park (MPI Tuebingen)
Mariana Raykova (Google)
Mary Anne Smart (University of California, San Diego)
Yu-Xiang Wang (University of California, Santa Barbara)
Adrian Weller (Alan Turing Institute & Cambridge)