Session 07 - Department of Statistics - Purdue University

Recent Advances in Federated Learning

Organizer: Qifan Song, Associate Professor of Statistics, Purdue University

Speakers

  • Wei-Lun Chao, Assistant Professor, Computer Science and Engineering, Ohio State University
  • Hamed Hassani, Assistant Professor, Department of Electrical and Systems Engineering, University of Pennsylvania
  • Tianyi Zhou, Assistant Professor, Department of Computer Science, University of Maryland

Speaker: Wei-Lun Chao
Title: Interesting Research Problems in Federated Learning

Abstract: Modern machine learning models are data-hungry, often requiring a large-scale centralized dataset for training. Collecting such a dataset, however, raises significant privacy, security, and ownership concerns. Federated learning is a framework that bypasses centralized data: individual users keep their data and communicate with a server to train a model collaboratively. While promising, there is still a noticeable performance gap between federated learning and centralized learning, demanding further research to bridge it. In this talk, I will share several interesting, rarely explored, but highly effective ways to improve federated learning: how to aggregate individual users' local models, how to incorporate normalization layers, and how to take advantage of pre-training. Federated learning introduces not only challenges but also opportunities; specifically, the differing data distributions among users enable us to learn how to personalize a model. In the second part of the talk, I will share our recent research progress toward personalized federated learning.
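The aggregation step this abstract refers to is most commonly instantiated as FedAvg's example-count-weighted parameter average. A minimal sketch of that baseline in NumPy (the function name and toy numbers are illustrative, not from the talk):

    import numpy as np

    def fedavg_aggregate(client_params, client_sizes):
        """FedAvg: average client parameter vectors, weighted by local data size."""
        sizes = np.asarray(client_sizes, dtype=float)
        coeffs = sizes / sizes.sum()                    # weight n_k / n per client
        stacked = np.stack(client_params)               # shape: (num_clients, dim)
        return (coeffs[:, None] * stacked).sum(axis=0)  # new global parameters

    # One toy round with three clients holding different amounts of data.
    params = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
    sizes = [10, 30, 60]
    print(fedavg_aggregate(params, sizes))  # -> [1.0, 0.8]

The talk examines when this plain weighted average falls short and how choices such as normalization layers and pre-trained initializations narrow the gap to centralized training.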

Speaker: Hamed Hassani
Title: Collaborative Decision-Making under Adversarial and Information Constraints

Abstract: Modern distributed computing paradigms, such as federated learning, aim to leverage data available on multiple devices to design accurate models for prediction and decision-making. However, to reap the benefits of collaboration, one needs to overcome significant challenges pertaining to adversarial robustness and communication bottlenecks. While these challenges have been extensively studied in supervised learning, they remain much less understood in the context of sequential decision-making problems. To bridge this gap, we study the interplay between sample complexity, communication efficiency, and adversarial robustness in the linear stochastic bandit setting. The talk will cover two recent results. The first concerns the linear bandit problem with multiple agents that collaborate via a central server to minimize regret, where a fraction of the agents act adversarially. We provide fundamental limits for this problem along with collaborative algorithms that achieve those limits. The second formulates a variant of the linear bandit problem involving communication over a bit-constrained channel, and provides algorithms that achieve optimal regret with minimal communication complexity.
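To make the robustness theme of the first result concrete: a standard defense in this kind of setting replaces the mean of agents' reported estimates with a robust statistic such as the coordinate-wise median, which a minority of adversarial agents cannot pull arbitrarily far. A toy illustration with local least-squares estimates of the bandit's reward parameter (all names and numbers are our own sketch, not the paper's algorithm):

    import numpy as np

    rng = np.random.default_rng(0)
    d, num_agents, n_local = 3, 7, 200
    theta = rng.normal(size=d)  # unknown reward parameter of the linear bandit

    def local_estimate(adversarial):
        """Each agent least-squares-fits rewards observed for its own actions."""
        X = rng.normal(size=(n_local, d))
        y = X @ theta + 0.1 * rng.normal(size=n_local)
        est = np.linalg.lstsq(X, y, rcond=None)[0]
        return -10.0 * est if adversarial else est  # adversaries report garbage

    # Two of the seven agents are corrupted.
    reports = np.stack([local_estimate(i < 2) for i in range(num_agents)])

    mean_est = reports.mean(axis=0)          # dragged far off by the corruption
    median_est = np.median(reports, axis=0)  # stays close to theta
    print(np.linalg.norm(mean_est - theta), np.linalg.norm(median_est - theta))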
Speaker: Tianyi Zhou
Title: Structured Decentralized Learning: Client Selection, Clustering, Cooperation, and Personalization

Abstract: Federated learning (FL) and decentralized learning (DL) keep data and the main computation on local clients, which is critical for data privacy protection and AI democratization. However, statistical heterogeneity among clients usually hinders the convergence of FL/DL and leads to poor personalization performance. How can knowledge be shared effectively across clients while avoiding conflicts in gradient/model aggregation? We propose to address this challenge by modeling and leveraging the underlying structures across clients. To this end, we study structured decentralized learning, which jointly learns the models and a parameterized structure of knowledge sharing in FL/DL settings. In particular, this talk will introduce our recent work on diverse client selection, client clustering, client collaboration and cooperation with graphs, federated recommendation systems, and novel personalization approaches developed for these problems. In addition, I will discuss new challenges that arise when DL/FL is combined with recent large language models and prompting frameworks.
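One way to picture the client-clustering ingredient: group clients by the direction of their model updates, so that only clients with compatible data distributions share knowledge. A hedged sketch using spherical k-means on update vectors (function name and synthetic data are illustrative; the talk's actual methods learn richer sharing structures):

    import numpy as np

    def cluster_clients(updates, num_clusters, iters=20, seed=0):
        """Group clients by the direction of their updates (spherical k-means)."""
        rng = np.random.default_rng(seed)
        U = np.stack(updates)
        U = U / np.linalg.norm(U, axis=1, keepdims=True)  # compare directions only
        centers = U[rng.choice(len(U), num_clusters, replace=False)]
        for _ in range(iters):
            labels = np.argmax(U @ centers.T, axis=1)     # nearest center by cosine
            for k in range(num_clusters):
                if np.any(labels == k):
                    c = U[labels == k].mean(axis=0)
                    centers[k] = c / np.linalg.norm(c)
        return labels

    # Two latent data distributions produce two groups of similar updates.
    rng = np.random.default_rng(1)
    g1 = [rng.normal(loc=[1, 0, 0], scale=0.1, size=3) for _ in range(5)]
    g2 = [rng.normal(loc=[0, 1, 0], scale=0.1, size=3) for _ in range(5)]
    print(cluster_clients(g1 + g2, num_clusters=2))  # first five vs. last five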
