ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning
Conference Paper

Author(s): Zhang, Hantian; Li, Jerry; Kara, Kaan; Alistarh, Dan; Liu, Ji; Zhang, Ce
Title: ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning
Title Series: Proceedings of Machine Learning Research (PMLR)
Affiliation: IST Austria
Abstract: Recently there has been significant interest in training machine-learning models at low precision: by reducing precision, one can reduce computation and communication by one order of magnitude. We examine training at reduced precision, both from a theoretical and practical perspective, and ask: is it possible to train models at end-to-end low precision with provable guarantees? Can this lead to consistent order-of-magnitude speedups? We mainly focus on linear models, for which the answer is yes. We develop a simple framework called ZipML based on one simple but novel strategy called double sampling. Our ZipML framework is able to execute training at low precision with no bias, guaranteeing convergence, whereas naive quantization would introduce significant bias. We validate our framework across a range of applications, and show that it enables an FPGA prototype that is up to 6.5× faster than an implementation using full 32-bit precision. We further develop a variance-optimal stochastic quantization strategy and show that it can make a significant difference in a variety of settings. When applied to linear models together with double sampling, we save up to another 1.7× in data movement compared with uniform quantization. When training deep networks with quantized models, we achieve higher accuracy than the state-of-the-art XNOR-Net.
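The double-sampling idea described in the abstract can be made concrete for least-squares SGD: quantize each training sample twice, independently, and use one copy inside the residual and the other as the gradient direction, so the two quantization noises never multiply into a bias term. The following is a minimal NumPy sketch under the assumption of uniform quantization levels on [-1, 1]; the function names and interface are hypothetical illustrations, not the authors' released code.

```python
import numpy as np

def stochastic_quantize(a, num_levels=16, lo=-1.0, hi=1.0):
    """Unbiased stochastic rounding of the array a onto a uniform grid,
    so E[Q(a)] = a for entries within [lo, hi].
    (Hypothetical helper, for illustration only.)"""
    delta = (hi - lo) / (num_levels - 1)        # grid spacing
    scaled = (np.clip(a, lo, hi) - lo) / delta  # position on the grid
    floor = np.floor(scaled)
    prob_up = scaled - floor                    # P(round up) = fractional part
    rounded = floor + (np.random.rand(*a.shape) < prob_up)
    return lo + rounded * delta

def double_sampling_gradient(a, b, x, num_levels=16):
    """SGD gradient of the least-squares loss (a^T x - b)^2 computed from
    two independent quantized copies of the sample a (double sampling)."""
    q1 = stochastic_quantize(a, num_levels)  # copy used in the residual
    q2 = stochastic_quantize(a, num_levels)  # copy used as the direction
    return 2.0 * (q1 @ x - b) * q2           # unbiased: E[...] = 2 (a^T x - b) a
```

Because the two copies are independent and each unbiased, the expectation factorizes: E[(q1^T x - b) q2] = (a^T x - b) a. Reusing a single quantized copy in both places would instead leave a bias term proportional to the quantization variance, which is the failure mode of naive quantization the abstract refers to.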
Conference Title: ICML: International Conference on Machine Learning
Volume: 70
Conference Dates: August 6-11, 2017
Conference Location: Sydney, Australia
Publisher: PMLR  
Date Published: 2017-01-01
Start Page: 4035
End Page: 4043
URL:
Notes: CZ gratefully acknowledges the support from the Swiss National Science Foundation NRP 75 407540 167266, NVIDIA Corporation for its GPU donation, and Microsoft Azure for Research award program.
Open access: no