FPGA-accelerated dense linear machine learning: A precision-convergence trade-off
Conference Paper


Author(s): Kara, Kaan; Alistarh, Dan; Alonso, Gustavo; Mutlu, Onur; Zhang, Ce
Title: FPGA-accelerated dense linear machine learning: A precision-convergence trade-off
Abstract: Stochastic gradient descent (SGD) is a commonly used algorithm for training linear machine learning models. Based on vector algebra, it benefits from the inherent parallelism available in an FPGA. In this paper, we first present a single-precision floating-point SGD implementation on an FPGA that provides performance similar to that of a 10-core CPU. We then adapt the design so that it can process low-precision data. The low-precision data is produced by a novel compression scheme, called stochastic quantization, designed specifically for machine learning applications. We test both the full-precision and low-precision designs on various regression and classification data sets. Using low-precision data, we achieve up to an order-of-magnitude training speedup over both full-precision SGD on the same FPGA and a state-of-the-art multi-core solution, while maintaining the quality of training. We open-source the designs presented in this paper.
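As a rough illustration of the stochastic quantization scheme named in the abstract (a sketch only; the function and parameter names below are hypothetical, and the paper's actual FPGA compression pipeline may differ), an unbiased stochastic-rounding quantizer maps each value to one of a small number of levels, rounding up or down at random with probability proportional to the distance to the neighboring levels, so the quantized value equals the original in expectation:

    import numpy as np

    def stochastic_quantize(x, num_bits=4, rng=None):
        """Hypothetical sketch: quantize a float vector to 2**num_bits levels
        with unbiased stochastic rounding (E[dequantize(q)] == x)."""
        rng = rng or np.random.default_rng()
        levels = 2 ** num_bits - 1
        lo, hi = float(x.min()), float(x.max())
        scale = (hi - lo) / levels if hi > lo else 1.0
        pos = (x - lo) / scale                  # position in units of quantization steps
        floor = np.floor(pos)
        frac = pos - floor                      # probability of rounding up
        q = floor + (rng.random(x.shape) < frac)
        return q.astype(np.uint8), lo, scale    # compact low-precision representation

    def dequantize(q, lo, scale):
        """Map quantized codes back to approximate float values."""
        return lo + q.astype(np.float64) * scale

In the setting described by the abstract, such low-precision samples would be streamed to the FPGA, which then performs the SGD updates on the reduced-precision data rather than on full-precision floats.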
Keywords: Machine learning; Dense linear models; FPGA; Low-precision; Stochastic gradient descent; Training
Conference Title: FCCM: Field-Programmable Custom Computing Machines
Conference Dates: April 30 - May 2, 2017
Conference Location: Napa, CA, USA
ISBN: 978-1-5386-4036-4
Publisher: IEEE
Date Published: 2017-06-30
Start Page: 160
End Page: 167
DOI: 10.1109/FCCM.2017.39
Open access: no