Program

NeurIPS-20 Workshop on SpicyFL

Schedule: Saturday, December 12, 2020

Slack Page: https://join.slack.com/t/neurips-spicyfl2020/shared_invite/zt-jyrmsl4k-TV2svVv7KyXAyTlJmIxG5Q

Recorded Video: https://nips.cc/virtual/2020/protected/workshop_16123.html

 

Program Schedule

All events are in Pacific Standard Time (PST), UTC-8

 

Time | Session | Session Chair
08:20AM – 08:30AM Opening Remarks General/Program Chairs
08:30AM – 09:00AM Keynote Talk 1: Dawn Song: Towards a Principled and Practical Approach for Data Valuation in Federated Learning [Bio] [Abstract] [Video]

Dawn Song is a Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. Her research interests lie in AI and deep learning, security, and privacy. She is the recipient of various awards, including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, and Best Paper Awards from top conferences in computer security and deep learning. She is an ACM Fellow and an IEEE Fellow. She is ranked the most cited scholar in computer security (AMiner Award). She obtained her Ph.D. from UC Berkeley. Before joining UC Berkeley, she was on the faculty at Carnegie Mellon University from 2002 to 2007. She is also a serial entrepreneur and has been named to Inc.'s Female Founders 100 list and the Wired25 list of innovators.

Data is valuable and a key driver of the modern economy. But how should data be valued? In this talk, we will present some recent work on data valuation. We will start by introducing a principled notion of data value and then present a suite of algorithms that we developed to compute it efficiently. We will then discuss how to adapt this notion to value data in federated learning settings. Finally, we will discuss the implications of the proposed data value notion for enhancing system robustness, security, and efficiency.
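
The abstract does not name the valuation notion; in this line of work, the Shapley value is the canonical principled choice, usually estimated by Monte Carlo sampling because exact computation is exponential in the number of data points. A hedged sketch only (the `utility` callback and all names below are hypothetical illustrations, not from the talk):

```python
import random

def monte_carlo_shapley(points, utility, rounds=200):
    """Estimate each data point's Shapley value by averaging its marginal
    contribution over random permutations of the dataset (illustrative only)."""
    values = {p: 0.0 for p in points}
    for _ in range(rounds):
        perm = random.sample(points, len(points))  # a random permutation
        prefix, prev_u = [], utility([])
        for p in perm:
            prefix.append(p)
            u = utility(list(prefix))   # e.g., validation score of a model
            values[p] += u - prev_u     # trained on this prefix of points
            prev_u = u
    return {p: v / rounds for p, v in values.items()}
```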

Andy Li
09:00AM – 10:00AM
Contributed Talk Session 1
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning, Samuel Horváth and Peter Richtárik [Paper] [Poster] [Video]
Backdoor Attacks on Federated Meta-Learning, Chien-Lun Chen, Leana Golubchik and Marco Paolieri [Paper] [Poster] [Video]
FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning, Hong-You Chen and Wei-Lun Chao [Paper] [Poster] [Video]
Preventing Backdoors in Federated Learning by Adjusting Server-side Learning Rate, Mustafa Ozdayi, Murat Kantarcioglu and Yulia Gel [Paper] [Poster] [Video]
Lingfei Wu
10:00AM – 10:30AM Keynote Talk 2: H. Brendan McMahan: Advances and Challenges in Private Cross-Device Federated Learning [Bio] [Abstract] [Video]

Brendan McMahan is a research scientist at Google, where he leads efforts on decentralized and privacy-preserving machine learning. His team pioneered the concept of federated learning and continues to push the boundaries of what is possible when working with decentralized data using privacy-preserving techniques. Previously, he worked in the fields of online learning, large-scale convex optimization, and reinforcement learning. Brendan received his Ph.D. in computer science from Carnegie Mellon University.

Privacy for users is a central goal of cross-device federated learning. This talk will begin with a broad view of privacy, highlighting key principles and threat models. We will then deep-dive into some recent advances in providing stronger privacy guarantees for cross-device federated learning, as well as highlight some perhaps under-appreciated challenges that arise when applying advanced techniques from differential privacy in the cross-device setting.
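
A common building block behind such guarantees is DP-FedAvg-style aggregation: clip each client's model update in L2 norm, average, and add Gaussian noise calibrated to the clipping bound. A minimal NumPy sketch assuming flattened update vectors; the parameter names are hypothetical and this is not the speaker's code:

```python
import numpy as np

def dp_fedavg_round(global_w, client_updates, clip_norm=1.0, noise_mult=1.1):
    # Clip each client's update to bound any single client's influence.
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in client_updates]
    mean = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the per-client sensitivity of the average.
    sigma = noise_mult * clip_norm / len(client_updates)
    return global_w + mean + np.random.normal(0.0, sigma, size=mean.shape)
```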

Dejing Dou
10:30AM – 10:50AM
Lightning Talk Session 1
10 papers, 2 minutes each
Jianzong Wang
10:50AM – 11:20AM Keynote Talk 3: Ruslan Salakhutdinov: Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation [Video]

Ruslan Salakhutdinov received his PhD in machine learning (computer science) from the University of Toronto in 2009. After spending two post-doctoral years at the Massachusetts Institute of Technology Artificial Intelligence Lab, he joined the University of Toronto as an Assistant Professor in the Department of Computer Science and Department of Statistics. In February 2016, he joined the Machine Learning Department at Carnegie Mellon University. Ruslan's primary interests lie in deep learning, machine learning, and large-scale optimization. His main research goal is to understand the computational and statistical principles required for discovering structure in large amounts of data. He is an action editor of the Journal of Machine Learning Research and has served on the senior program committees of several learning conferences, including NIPS and ICML. He is an Alfred P. Sloan Research Fellow, Microsoft Research Faculty Fellow, and Canada Research Chair in Statistical Machine Learning, a recipient of the Early Researcher Award, Connaught New Researcher Award, Google Faculty Award, and Nvidia's Pioneers of AI Award, and a Senior Fellow of the Canadian Institute for Advanced Research.

Many real-world applications such as robotics provide hard constraints on power and compute that limit the viable model complexity of Reinforcement Learning (RL) agents. Similarly, in many distributed RL settings, acting is done on unaccelerated hardware such as CPUs, which likewise restricts model size to prevent intractable experiment run times. These “actor-latency” constrained settings present a major obstruction to the scaling up of model complexity that has recently been extremely successful in supervised learning. To be able to utilize large model capacity while still operating within the limits imposed by the system during acting, we develop an “Actor-Learner Distillation” (ALD) procedure that leverages a continual form of distillation that transfers learning progress from a large capacity learner model to a small capacity actor model. As a case study, we develop this procedure in the context of partially-observable environments, where transformer models have had large improvements over LSTMs recently, at the cost of significantly higher computational complexity. With transformer models as the learner and LSTMs as the actor, we demonstrate in several challenging memory environments that using Actor-Learner Distillation recovers the clear sample-efficiency gains of the transformer learner model while maintaining the fast inference and reduced total training time of the LSTM actor model.
(joint work with Emilio Parisotto)
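
As a rough illustration of the distillation step described above (a sketch under assumed tensor shapes, not the authors' implementation), the small actor can be trained to match the large learner's policy with a temperature-scaled KL loss, with gradients flowing only into the actor:

```python
import torch.nn.functional as F

def distill_policy_loss(actor_logits, learner_logits, temperature=1.0):
    # KL(learner || actor) over the action distribution; the learner is
    # detached so only the small actor model receives gradients.
    t = temperature
    learner_probs = F.softmax(learner_logits.detach() / t, dim=-1)
    actor_logp = F.log_softmax(actor_logits / t, dim=-1)
    return F.kl_div(actor_logp, learner_probs, reduction="batchmean") * (t * t)
```
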
Ameet Talwalkar
11:20AM – 11:50AM
Contributed Talk Session 2
FedML: A Research Library and Benchmark for Federated Machine Learning, Chaoyang He, Songze Li, Jinhyun So, Mi Zhang, Xiao Zeng, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, Xinghua Zhu, Jianzong Wang, Li Shen, Peilin Zhao, Yan Kang, Yang Liu, Ramesh Raskar, Qiang Yang, Murali Annavaram and Salman Avestimehr [Paper] [Poster] [Video]
Learning to Attack Distributionally Robust Federated Learning, Wen Shen, Henger Li and Zizhan Zheng [Paper] [Poster] [Video]
Yanzhi Wang
11:50AM – 12:20PM Keynote Talk 4: Virginia Smith: On Heterogeneity in Federated Settings [Bio] [Abstract] [Video]

Virginia Smith is an assistant professor in the Machine Learning Department at Carnegie Mellon University, and a courtesy faculty member in the Electrical and Computer Engineering Department. Her research interests lie at the intersection of machine learning, optimization, and computer systems. A unifying theme of her research is to develop machine learning methods and theory that effectively leverage prior knowledge and account for practical constraints (e.g., hardware capabilities, network capacity, statistical structure). Specific topics include: large-scale machine learning, distributed optimization, resource-constrained learning, multi-task learning, transfer learning, and data augmentation.

A defining characteristic of federated learning is the presence of heterogeneity, i.e., that data and compute may differ significantly across the network. In this talk I show that the challenge of heterogeneity pervades the machine learning process in federated settings, affecting issues such as optimization, modeling, and fairness. In terms of optimization, I discuss FedProx, a distributed optimization method that offers robustness to systems and statistical heterogeneity. I then explore the role that heterogeneity plays in delivering models that are accurate and fair to all users/devices in the network. Our work here extends classical ideas in multi-task learning and alpha-fairness to large-scale heterogeneous networks, enabling flexible, accurate, and fair federated learning.
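
For reference, the FedProx local objective augments each device's loss with a proximal term, h_k(w) = F_k(w) + (mu/2) * ||w - w_t||^2, anchoring local iterates to the current global model w_t. A minimal PyTorch-style sketch of one local step, with hypothetical names and based on the published FedProx objective rather than the speaker's code:

```python
import torch

def fedprox_local_step(model, global_params, local_loss, mu=0.01, lr=0.1):
    # Proximal penalty keeping local weights close to the global model.
    prox = sum((mu / 2.0) * torch.sum((w - wg.detach()) ** 2)
               for w, wg in zip(model.parameters(), global_params))
    loss = local_loss + prox
    loss.backward()
    with torch.no_grad():
        for w in model.parameters():
            if w.grad is not None:
                w -= lr * w.grad   # one SGD step on the penalized loss
                w.grad = None
    return float(loss)
```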

Yanzhi Wang
12:20PM – 12:36PM
Lightning Talk Session 2
8 papers, 2 minutes each
Lingfei Wu
12:36PM – 01:30PM Poster Session 1 (Papers presented in the morning) Yanlin Zhou
01:30PM – 02:00PM Keynote Talk 5: John C. Duchi: Some Thoughts on Optimality in Optimization [Bio] [Abstract] [Video]

John Duchi is an assistant professor of Statistics and Electrical Engineering and (by courtesy) Computer Science at Stanford University. His work spans statistical learning, optimization, information theory, and computation, with a few driving goals. (1) To discover statistical learning procedures that optimally trade between real-world resources---computation, communication, privacy provided to study participants---while maintaining statistical efficiency. (2) To build efficient large-scale optimization methods that address the spectrum of optimization, machine learning, and data analysis problems we face, allowing us to move beyond bespoke solutions to methods that robustly work. (3) To develop tools to assess and guarantee the validity of---and confidence we should have in---machine-learned systems.
He has won several awards and fellowships. His paper awards include the SIAM SIGEST award for "an outstanding paper of general interest" and best papers at the Neural Information Processing Systems conference, the International Conference on Machine Learning, and an INFORMS Applied Probability Society Best Student Paper Award (as advisor). He has also received the Society for Industrial and Applied Mathematics (SIAM) Early Career Prize in Optimization, an Office of Naval Research (ONR) Young Investigator Award, an NSF CAREER award, a Sloan Fellowship in Mathematics, the Okawa Foundation Award, the Association for Computing Machinery (ACM) Doctoral Dissertation Award (honorable mention), and U.C. Berkeley's C.V. Ramamoorthy Distinguished Research Award.

I will survey and give perspective on a few recent (and some not so recent) results on optimality in optimization problems. These have some bearing on optimization problems arising in federated learning, where I will try to make a few polemical connections and suggestions for future work.
Dimitris Papailiopoulos
02:00PM – 03:00PM
Contributed Talk Session 3
On Biased Compression for Distributed Learning, Aleksandr Beznosikov, Samuel Horváth, Mher Safaryan and Peter Richtarik [Paper] [Poster] [Video]
PAC Identifiability in Federated Personalization, Ben London [Paper] [Poster] [Video]
Model Pruning Enables Efficient Federated Learning on Edge Devices, Yuang Jiang, Shiqiang Wang, Victor Valls, Bong Jun Ko, Wei-Han Lee, Kin Leung and Leandros Tassiulas [Paper] [Poster] [Video]
Hybrid FL: Algorithms and Implementation, Xinwei Zhang, Tianyi Chen, Mingyi Hong and Wotao Yin [Paper] [Poster] [Video]
Dapeng Wu
03:00PM – 03:30PM Break
03:30PM – 04:00PM Keynote Talk 6: Tao Yang: Privacy-aware Ranking for Document Search on the Cloud [Video]

Tao Yang is a full Professor of Computer Science at the University of California at Santa Barbara. His most recent research is in the areas of information retrieval and privacy; his past work has been in web search and parallel/distributed computing. He was Chief Scientist for Ask.com from 2001 to early 2010, and served as its Senior Vice President and Vice President of Engineering, heading its search division. He visited Microsoft Bing for search technology R&D from 2010 to 2011.

As sensitive information is increasingly centralized into the cloud, for the protection of data privacy, such data is often encrypted, which makes effective data indexing and search a very challenging task. Hiding feature information through encryption prevents the server from performing effective scoring and result comparison. On the other hand, unencrypted feature values can lend themselves to privacy attacks. This talk will present our recent efforts in privacy-aware searching of large datasets with linear, tree-based, and neural ranking methods.
Xiaoyong Yuan
04:00PM – 04:20PM
Lightning Talk Session 3
10 papers, 2 minutes each
Yurong Chen
04:20PM – 04:50PM Keynote Talk 7: Tong Zhang [Bio] [Abstract] [Video]

Tong Zhang is a professor of Computer Science and Mathematics at The Hong Kong University of Science and Technology. Previously, he was a professor at Rutgers University and worked at IBM, Yahoo, Baidu, and Tencent.
Tong Zhang's research interests include machine learning algorithms and theory, statistical methods for big data, and their applications. He is a fellow of the ASA, IEEE, and IMS, and he has served on the editorial boards of leading machine learning journals and the program committees of top machine learning conferences.
Tong Zhang received a B.A. in mathematics and computer science from Cornell University and a Ph.D. in Computer Science from Stanford University.

Dan Meng
04:50PM – 05:00PM
Lightning Talk Session 4
5 papers, 2 minutes each
Dan Meng
05:00PM – 06:00PM Panel Discussion

Panelists:

Liefeng Bo 

Dr. Bo is chief scientist and head of the AI Lab at JD Digits. He and his team develop advanced AI technologies that improve products serving hundreds of millions of JD Digits’ customers. He was previously a principal scientist at Amazon, where he led a research team at Amazon Go building a new grab-and-go shopping experience using computer vision, deep learning, and sensor fusion technologies.
Dr. Bo received his PhD from Xidian University in 2007. He was a post-doctoral researcher at the Toyota Technological Institute at Chicago and then at the University of Washington. His research interests are in machine learning, deep learning, computer vision, robotics, and natural language processing. He has published more than 60 papers in top conferences and journals, with 9000+ Google Scholar citations. He won the National Excellent Doctoral Dissertation Award of China in 2010 and the Best Vision Paper Award at ICRA 2011.

Xiaolin Li 

Dr. Xiaolin Andy Li is a Partner at Tongdun Technology, heading the AI Institute and Cognization Lab. He and his team proposed and developed the Knowledge Federation (KF) framework, the iBond KF platform, the Federated Learning Exchange (FLEX) protocol, and the InceptionAI open AI OS for trustworthy AI 3.0. He was a Professor and University Term Professor in Computer Engineering at the University of Florida, where, as founding director, he established the National Science Foundation Center for Big Learning, the first national center on deep learning in the USA, along with dozens of colleagues at UF, CMU, UO, and UMKC, sponsored by NSF and over 30 industry members. He received a PhD in Computer Engineering from Rutgers University. His research interests include deep learning, cloud computing, security & privacy, IoT, FinTech, precision medicine, and logistics. He has published over 150 peer-reviewed papers in journals and conference proceedings, 5 books, and dozens of patents. He is a recipient of the NSF CAREER Award, the NSF I-Corps Top Team Award, the Internet2 Innovative Application Award, and several best paper awards. He has served as general/program chair for many conferences and workshops and as an associate editor for IEEE TPDS and JPDC.

Heiko Ludwig 

Heiko Ludwig is a Principal Research Staff Member and Senior Manager of the AI Platforms department at IBM’s Almaden Research Center in San Jose, CA. Heiko leads research on computational platforms for AI, focusing on the security, privacy, performance, and reliability of machine learning and inference, and he leads IBM Research’s work on federated learning. Heiko has worked on problems of distributed systems and artificial intelligence throughout his career, publishing more than 100 refereed articles and conference papers. He is an ACM Distinguished Engineer and a managing editor of the International Journal of Cooperative Information Systems. The results of his work contribute to various IBM lines of business and open source projects. Prior to the Almaden Research Center, Heiko held different positions at IBM Research in Switzerland, the US, Argentina, and Brazil. He holds a Master’s (Diplom) degree and a PhD in information systems from Otto-Friedrich University Bamberg, Germany.

Jason Martin 

Jason Martin is a Principal Engineer in the Security Solutions Lab and manager of the Secure Intelligence Team at Intel Labs. He leads a team of diverse researchers investigating machine learning security and privacy in a way that incorporates the latest research findings and Intel products. Jason’s interests include machine learning, authentication and identity, trusted execution technology, wearable computing, mobile security, and privacy. Prior to Intel Labs, he spent several years as a security researcher performing security evaluations and penetration tests on Intel’s products.

Panel Chairs
06:00PM – 07:00PM Poster Session 2 (Papers presented in the afternoon) Yanlin Zhou
07:00PM – 07:10PM Closing Remarks General/Program Chairs

 

 

• Paper List of Lightning Talks

10:30AM – 10:50AM Lightning Talk Session 1
  • A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization, Zhize Li and Peter Richtarik. [Paper] [Poster] [Video]
  • ESMFL: Efficient and Secure Models for Federated Learning, Sheng Lin, Chenghong Wang, Hongjia Li, Jieren Deng, Yanzhi Wang and Caiwen Ding. [Paper] [Poster] [Video]
  • F2ED-LEARNING: Good fences make good neighbors, Lun Wang, Qi Pang, Shuai Wang and Dawn Song. [Paper] [Poster] [Video]
  • FAT: Federated Adversarial Training, Giulio Zizzo, Ambrish Rawat, Mathieu Sinn and Beat Buesser. [Paper] [Poster] [Video]
  • Tailoring the adversarial objective to effectively backdoor federated learning, Xianzhuo Wang, Xing Hu, Deyuan He, Ning Lin, Jing Ye and Yunji Chen. [Paper] [Poster] [Video]
  • Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients, Jun Li, Yumeng Shao, Ming Ding, Chuan Ma, Kang Wei, Zhu Han and H. Vincent Poor. [Paper] [Poster] [Video]
  • Secure Byzantine-Robust Machine Learning, Lie He, Praneeth Karimireddy and Martin Jaggi. [Paper] [Poster] [Video]
  • Byzantine-Robust Learning on Heterogeneous Datasets via Resampling, Lie He, Praneeth Karimireddy and Martin Jaggi. [Paper] [Poster] [Video]
12:20PM – 12:36PM Lightning Talk Session 2
  • Selective Federated Transfer Learning using Representation Similarity, Tushar Semwal, Haofan Wang and Chinnakotla Krishna Teja Reddy. [Paper] [Poster] [Video]
  • GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators, Dingfan Chen, Tribhuvanesh Orekondy and Mario Fritz. [Paper] [Poster] [Video]
  • Federated Learning via Posterior Averaging: A New Perspective and Practical Algorithms, Maruan Al-Shedivat, Jennifer Gillenwater, Eric Xing and Afshin Rostamizadeh. [Paper] [Poster] [Video]
  • Communication-Efficient Federated Learning via Dataset Distillation, Yanlin Zhou, George Pu, Xiyao Ma, Xiaolin Li and Dapeng Wu. [Paper] [Poster] [Video]
  • Federated Bandit: A Gossiping Approach, Zhaowei Zhu, Jingxuan Zhu, Ji Liu and Yang Liu. [Paper] [Poster] [Video]
  • Central Server Free Federated Learning over Single-sided Trust Social Networks, Chaoyang He, Conghui Tan, Hanlin Tang, Shuang Qiu and Ji Liu. [Paper] [Poster] [Video]
  • Optimal Gradient Compression for Distributed and Federated Learning, Alyazeed Albasyoni, Mher Safaryan, Laurent Condat and Peter Richtárik. [Paper] [Poster] [Video]
  • Federated Learning of a Mixture of Global and Local Models, Filip Hanzely and Peter Richtarik. [Paper] [Poster] [Video]
04:00PM – 04:20PM Lightning Talk Session 3
  • ByGARS: Byzantine SGD with Arbitrary Number of Attackers, Jayanth Regatti, Hao Chen and Abhishek Gupta. [Paper] [Poster] [Video]
  • Heterogeneity for the Win: Communication Efficient Federated Clustering, Don Dennis and Virginia Smith. [Paper] [Poster] [Video]
  • Differentially-Private Federated Linear Bandits, Abhimanyu Dubey and Alex Pentland. [Paper] [Poster] [Video]
  • Secure Federated Feature Selection for Cross-Feature Federated Learning, Fucheng Pan, Dan Meng, Yu Zhang, Hongyu Li and Xiaolin Li. [Paper] [Poster] [Video]
  • Collusion-free Cross-feature Logistic Regression, Xiaojuan Wang, Zhihui Fu, Dan Meng, Xiaochuan Peng, Hong Wang, Hongyu Li and Xiaolin Li. [Paper] [Poster] [Video]
  • Federated Multi-Task Learning for Competing Constraints, Tian Li, Shengyuan Hu, Ahmad Beirami and Virginia Smith. [Paper] [Poster] [Video]
  • A Secure and Efficient Sample Filtering Protocol for Massive Data, Linghui Chen, Xiqiang Dai, Hong Wang, Yuanyuan Cen, Xiaochuan Peng, Hongyu Li and Xiaolin Li. [Paper] [Poster] [Video]
  • Towards Optimized Model Poisoning Attacks Against Federated Learning, Virat Shejwalkar and Amir Houmansadr. [Paper] [Poster] [Video]
04:50PM – 05:00PM Lightning Talk Session 4
  • Understanding Gradient Clipping in Private SGD: A Geometric Perspective, Xiangyi Chen, Zhiwei Steven Wu and Mingyi Hong. [Paper] [Poster] [Video]
  • Label Leakage and Protection in Two-party Split Learning, Oscar Li, Jiankai Sun, Weihao Gao, Hongyi Zhang, Xin Yang, Junyuan Xie and Chong Wang. [Paper] [Poster] [Video]
  • Trade-offs of Local SGD at Scale: An Empirical Study, Jose Javier Gonzalez Ortiz, Jonathan Frankle, Michael Rabbat, Ari Morcos and Nicolas Ballas. [Paper] [Poster] [Video]
  • Explainable Link Prediction for Privacy-Preserving Contact Tracing, Balaji Ganesan, Hima Patel and Sameep Mehta. [Paper] [Poster] [Video]
  • Learning Privately over Distributed Features: An ADMM Sharing Approach, Yaochen Hu, Peng Liu, Keshi Ge, Linglong Kong, Bei Jiang and Di Niu. [Paper] [Poster] [Video]

 

• Poster Sessions

  • Poster Session 1 includes all papers presented before 12:36 PM
  • Poster Session 2 includes all papers presented in the afternoon (before 06:00 PM)

 

 

 
