Papers
This page contains selected papers and publications. For a full, up-to-date list, see my Google Scholar profile.
2025
Communication-Efficient Language Model Training Scales Reliably and Robustly: Scaling Laws for DiLoCo
Preprint.
Zachary Charles, Gabriel Teston, Lucio Dery, Keith Rush, Nova Fallen, Zachary Garrett, Arthur Szlam, Arthur Douillard
Scaling Laws for Differentially Private Language Models
Preprint.
Ryan McKenna, Yangsibo Huang, Amer Sinha, Borja Balle, Zachary Charles, Christopher A. Choquette-Choo, Badih Ghazi, George Kaissis, Ravi Kumar, Ruibo Liu, Da Yu, Chiyuan Zhang
Streaming DiLoCo with Overlapping Communication: Towards a Distributed Free Lunch
Preprint.
Arthur Douillard, Yanislav Donchev, Keith Rush, Satyen Kale, Zachary Charles, Zachary Garrett, Gabriel Teston, Dave Lacey, Ross McIlroy, Jiajun Shen, Alexandre Ramé, Arthur Szlam, Marc’Aurelio Ranzato, Paul Barham
Fine-Tuning Large Language Models with User-Level Differential Privacy
SatML, 2025.
Zachary Charles, Arun Ganesh, Ryan McKenna, H. Brendan McMahan, Nicole Mitchell, Krishna Pillutla, Keith Rush
2024
DrJAX: Scalable and Differentiable MapReduce Primitives in JAX
WANT Workshop, ICML, 2024.
Keith Rush, Zachary Charles, Zachary Garrett, Sean Augenstein, Nicole Mitchell
Federated Automatic Differentiation
JMLR, 2024.
Keith Rush, Zachary Charles, Zachary Garrett
Leveraging Function Space Aggregation for Federated Learning at Scale
TMLR, 2024.
Nikita Dhawan, Nicole Mitchell, Zachary Charles, Zachary Garrett, Gintare Karolina Dziugaite
2023
Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning
NeurIPS, 2023.
Zachary Charles, Nicole Mitchell, Krishna Pillutla, Michael Reneer, Zachary Garrett
Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy
NeurIPS, 2023.
Anastasiia Koloskova, Ryan McKenna, Zachary Charles, Keith Rush, Brendan McMahan
A Rate-Distortion View on Model Updates
ICLR, 2023.
Nicole Mitchell, Jona Ballé, Zachary Charles, Jakub Konečný
2022
Federated Select: A Primitive for Communication- and Memory-Efficient Federated Learning
Preprint.
Zachary Charles, Kallista Bonawitz, Stanislav Chiknavaryan, Brendan McMahan, Blaise Agüera y Arcas
Motley: Benchmarking Heterogeneity and Personalization in Federated Learning
Workshop on Federated Learning, NeurIPS, 2022.
Shanshan Wu, Tian Li, Zachary Charles, Yu Xiao, Ken Liu, Zheng Xu, Virginia Smith
Iterated Vector Fields and Conservatism, with Applications to Federated Learning
ALT, 2022.
Zachary Charles, Keith Rush
Optimizing the Communication-Accuracy Trade-off in Federated Learning with Rate-Distortion Theory
Preprint.
Nicole Mitchell, Jona Ballé, Zachary Charles, Jakub Konečný
Does Federated Dropout Actually Work?
CVPR Workshop, 2022.
Gary Cheng, Zachary Charles, Zachary Garrett, Keith Rush
2021
On Large-Cohort Training for Federated Learning
NeurIPS, 2021.
Zachary Charles, Zachary Garrett, Zhouyuan Huo, Sergei Shmulyian, Virginia Smith
Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning
AISTATS, 2021.
Zachary Charles, Jakub Konečný
Adaptive Federated Optimization
ICLR, 2021.
Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, Brendan McMahan
A Field Guide to Federated Optimization
Preprint.
Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, et al.
Local Adaptivity in Federated Learning: Convergence and Consistency
Preprint.
Jianyu Wang, Zheng Xu, Zachary Garrett, Zachary Charles, Luyang Liu, Gauri Joshi
Advances and Open Problems in Federated Learning
Foundations and Trends in Machine Learning, 2021.
Peter Kairouz, Brendan McMahan, et al. (including Zachary Charles)
2020 and earlier
Please see my Google Scholar profile.