QSGD: Communication-efficient SGD via gradient quantization and encoding
Conference Paper


Author(s): Alistarh, Dan; Grubic, Demjan; Li, Jerry Z; Tomioka, Ryota; Vojnović, Milan
Title: QSGD: Communication-efficient SGD via gradient quantization and encoding
Title Series: Advances in Neural Information Processing Systems
Affiliation: IST Austria
Abstract: Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to the excellent scalability properties of this algorithm. A fundamental barrier when parallelizing SGD is the high bandwidth cost of communicating gradient updates between nodes; consequently, several lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always converge. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes with convergence guarantees and good practical performance. QSGD allows the user to smoothly trade off communication bandwidth and convergence time: nodes can adjust the number of bits sent per iteration, at the cost of possibly higher variance. We show that this trade-off is inherent, in the sense that improving it past some threshold would violate information-theoretic lower bounds. QSGD guarantees convergence for convex and non-convex objectives, under asynchrony, and can be extended to stochastic variance-reduced techniques. When applied to training deep neural networks for image classification and automated speech recognition, QSGD leads to significant reductions in end-to-end training time. For instance, on 16 GPUs, we can train the ResNet-152 network to full accuracy on ImageNet 1.8× faster than the full-precision variant.
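The stochastic quantization that the abstract describes can be illustrated with a minimal NumPy sketch. This is a simplified assumption-laden illustration, not the authors' reference implementation: the function name and signature are ours, and the paper's Elias integer coding of the quantized values is omitted.

import numpy as np

def qsgd_quantize(v, s, rng=None):
    """Unbiased stochastic quantization in the spirit of QSGD.

    Each coordinate of v is mapped to one of s+1 levels of
    |v_i| / ||v||_2, rounding up or down at random so that
    E[Q_s(v)] = v. Fewer levels s means fewer bits per coordinate
    but higher variance, which is the trade-off the paper studies.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    # Position of each |v_i| / ||v||_2 on the grid of s levels.
    scaled = np.abs(v) / norm * s
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part,
    # which makes the quantizer unbiased.
    prob_up = scaled - lower
    levels = lower + (rng.random(v.shape) < prob_up)
    return norm * np.sign(v) * levels / s

# Sanity check: the mean of many quantizations approaches v.
v = np.array([0.3, -1.2, 0.05, 0.8])
est = np.mean([qsgd_quantize(v, s=4) for _ in range(20000)], axis=0)

Because each coordinate is rounded up with probability equal to its fractional position on the level grid, the quantized gradient is an unbiased estimator of the true gradient, which is the property the paper's convergence guarantees rest on.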
Conference Title: NIPS: Neural Information Processing Systems
Volume: 30
Conference Dates: December 4-9, 2017
Conference Location: Long Beach, CA, USA
Publisher: Neural Information Processing Systems Foundation, Inc.
Date Published: 2017-01-01
URL:
Notes: The authors would like to thank Martin Jaggi, Ce Zhang, Frank Seide and the CNTK team for their support during the development of this project, as well as the anonymous NIPS reviewers for their careful consideration and excellent suggestions. Dan Alistarh was supported by a Swiss National Fund Ambizione Fellowship. Jerry Li was supported by the NSF CAREER Award CCF-1453261, CCF-1565235, a Google Faculty Research Award, and an NSF Graduate Research Fellowship. This work was developed in part while Dan Alistarh, Jerry Li and Milan Vojnović were with Microsoft Research Cambridge, UK. An open access version of this conference paper is available via: https://arxiv.org/abs/1610.02132
Open access: yes (repository)