Efficient Learning Rate Adaptation for Convolutional Neural Network Training
Date
2019
Language
en
Subject
Abstract
Convolutional Neural Networks (CNNs) have been established as powerful supervised learning methods for classification problems in many research fields. However, a large number of parameters have to be tuned to achieve high performance and good classification results. One of the most crucial parameters for the performance of a CNN is the learning rate (step size) of the training algorithm. Although heuristic search to tune the learning rate is common practice, it is extremely time-consuming, since CNNs require a significant amount of time for each training run due to their complex architectures and large number of weights. Approaches that integrate the adaptation of the initial learning rate into the optimization algorithm manage to converge to high-quality solutions and have been embraced by the research community. In this work, we propose an improvement of the recently proposed Adaptive Learning Rate algorithm (AdLR). The proposed learning rate adaptation algorithm (e-AdLR) exhibits excellent convergence properties and classification accuracy, while remaining fast and robust. © 2019 IEEE.
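The abstract does not describe the AdLR or e-AdLR update rules, so no faithful implementation can be given here. As a generic illustration of the idea the abstract refers to, integrating learning-rate adaptation directly into the optimization loop rather than hand-tuning the step size, the sketch below uses a simple "bold driver" scheme (a hypothetical stand-in, not the paper's algorithm): the step size grows when the loss improves and shrinks when a step makes it worse.

```python
import numpy as np

def train_adaptive_lr(grad_fn, loss_fn, w0, lr=0.1, inc=1.05, dec=0.5, steps=100):
    """Gradient descent with 'bold driver' learning-rate adaptation.

    Illustrative only -- NOT the AdLR/e-AdLR algorithm from the paper:
    grow lr after a step that lowers the loss, reject the step and
    shrink lr otherwise.
    """
    w = np.asarray(w0, dtype=float)
    prev_loss = loss_fn(w)
    for _ in range(steps):
        w_new = w - lr * grad_fn(w)
        new_loss = loss_fn(w_new)
        if new_loss < prev_loss:
            # Improvement: accept the step and increase the learning rate.
            w, prev_loss = w_new, new_loss
            lr *= inc
        else:
            # Loss got worse: discard the step and decrease the learning rate.
            lr *= dec
    return w, prev_loss, lr

# Toy objective: f(w) = ||w||^2, with minimum at the origin.
w, loss, lr = train_adaptive_lr(
    grad_fn=lambda w: 2 * w,
    loss_fn=lambda w: float(np.dot(w, w)),
    w0=[3.0, -2.0],
)
```

On this convex toy problem the adapted step size converges without any manual tuning; the appeal of schemes like AdLR is obtaining similar behavior on the far more expensive, non-convex CNN training problem.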