Asynchronous kernel ridgeless regression

Abstract

Kernel machines are a class of predictors commonly used in machine learning. We propose AsyncEigenPro, a parallel, completely lock-free asynchronous algorithm for kernel regression. The algorithm resembles Hogwild! but exploits the special structure of the kernel regression problem. Its main application is enabling efficient multi-GPU training for kernel methods. We show theoretically that the effect of delayed gradients and inconsistent reads on the rate of convergence can be minimal, and we run large-scale experiments demonstrating near-linear speedup in training time with respect to the number of GPUs.
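To make the Hogwild!-style idea concrete, here is a minimal sketch of lock-free asynchronous SGD for kernel ridgeless regression. Several threads share one coefficient vector: each repeatedly picks a random training point, computes its residual from a possibly stale read of the coefficients, and writes the update in place without any lock. This is an illustrative toy, not the paper's AsyncEigenPro; the kernel choice, step size, and update rule here are assumptions for demonstration.

```python
import numpy as np
from threading import Thread

def gaussian_kernel(X, Z, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix between rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def async_kernel_sgd(X, y, n_workers=4, steps=2000, lr=0.5, seed=0):
    """Hogwild!-style lock-free SGD sketch for kernel ridgeless regression.

    The predictor is f(x) = sum_j alpha_j K(x_j, x). Each worker picks a
    random sample i, computes the residual f(x_i) - y_i from an
    unsynchronized (possibly inconsistent) read of the shared alpha, and
    updates alpha[i] in place with no locking.
    """
    n = len(y)
    K = gaussian_kernel(X, X)      # precomputed kernel matrix
    alpha = np.zeros(n)            # shared coefficients, updated lock-free

    def worker(wid):
        rng = np.random.default_rng(seed + wid)
        for _ in range(steps):
            i = rng.integers(n)
            resid = K[i] @ alpha - y[i]   # stale/inconsistent read allowed
            alpha[i] -= lr * resid        # lock-free in-place write

    threads = [Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return alpha, K

# Fit a small synthetic problem; the training residual should shrink.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
alpha, K = async_kernel_sgd(X, y)
print("max training residual:", np.max(np.abs(K @ alpha - y)))
```

In a real multi-GPU setting the shared state would live in device memory and the per-coordinate updates would be batched, but the same principle applies: stale reads and unsynchronized writes are tolerated rather than serialized.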


Vijay Giri
Master’s student