Presentation on efficient, scalable hyperparameter optimization.

I’ll be presenting at the REWORK Deep Learning Summit Singapore on 27-28 April 2017!

Abstract
With every deep learning algorithm comes a set of hyperparameters. Optimizing them is crucial for achieving faster convergence and lower error rates. For many years, most of the deep learning community has relied on common heuristics to tune hyperparameters such as learning rates, decay rates, and L2 regularization strengths. Recent work has attempted to cast hyperparameter optimization itself as a learning problem, but these approaches are limited by their lack of scalability. I show how scalable hyperparameter optimization that accelerates convergence is now possible: an optimizer can be trained on one problem and, via transfer learning, applied to others. This has impact at the industrial scale, where deep learning algorithms can be driven to convergence without manual hand-tuning, even for large models.
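To make the transfer idea concrete, here is a toy NumPy sketch of my own (not the method presented in the talk): a single hyperparameter, the learning rate, is tuned on one quadratic problem and then reused on a second, similar problem instead of being hand-tuned again. All names (`train`, `A_src`, `best_lr`, etc.) are invented for this illustration.

```python
import numpy as np

def train(lr, A, b, steps=50):
    """Gradient descent on the quadratic f(x) = 0.5 x^T A x - b^T x;
    returns the final loss. lr is the hyperparameter being tuned."""
    x = np.zeros(len(b))
    for _ in range(steps):
        x -= lr * (A @ x - b)  # gradient of the quadratic is A x - b
    return 0.5 * x @ A @ x - b @ x

rng = np.random.default_rng(0)

# "Source" problem: pick the learning rate that minimizes the final loss.
A_src = np.diag(rng.uniform(0.5, 2.0, size=10))
b_src = rng.normal(size=10)
best_lr = min(np.logspace(-3, 0, 20), key=lambda lr: train(lr, A_src, b_src))

# "Target" problem with similar curvature: reuse the tuned learning rate
# instead of hand-tuning again -- this is the transfer step.
A_tgt = np.diag(rng.uniform(0.5, 2.0, size=10))
b_tgt = rng.normal(size=10)
print(f"transferred lr = {best_lr:.3g}, "
      f"target loss = {train(best_lr, A_tgt, b_tgt):.4f}")
```

The same pattern scales up conceptually: replace the grid search over learning rates with a learned optimizer, and the toy quadratics with real training runs.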