Journal Neurocomputers, No. 2, 2021
Article in issue:
Methods of automated hyperparameters tuning of a convolutional neural network
Type of article: scientific article
DOI: https://doi.org/10.18127/j19998554-202102-03
UDC: 004.93'1
Authors:

O.N. Cheremisinova, V.S. Rostovtsev

Vyatka State University (Kirov, Russia)

Abstract:

Every convolutional neural network (CNN) has hyperparameters: parameters that are not adjusted during training but are fixed when the CNN model is built. Their choice affects the quality of the neural network, and to date there are no uniform rules for setting them. Hyperparameters can be tuned fairly accurately by hand, but there are also automatic optimization methods; using them reduces the labor of tuning the network and does not require expertise in hyperparameter optimization. The purpose of this article is to analyze automatic methods for selecting hyperparameters in order to reduce the effort of tuning a CNN. A typical hyperparameter search space is sketched below.
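
As an illustration only, a CNN hyperparameter search space of the kind discussed here might be written as a simple dictionary; the names and ranges below are hypothetical and are not taken from the article.

```python
# Hypothetical CNN hyperparameter search space (names and ranges are illustrative only).
search_space = {
    "num_conv_layers": [2, 3, 4],             # depth of the convolutional part
    "filters_per_layer": [16, 32, 64, 128],   # filters in each convolutional layer
    "kernel_size": [3, 5],                    # spatial size of the convolution kernels
    "dropout_rate": [0.0, 0.25, 0.5],         # dropout probability
    "learning_rate": [1e-4, 1e-3, 1e-2],      # optimizer step size
    "batch_size": [32, 64, 128],              # mini-batch size
}
```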

Optimization methods. Several automatic methods for selecting hyperparameters are considered: grid search, random search, and model-based optimization (Bayesian and evolutionary). The model-based methods are the most promising. They are applied when no closed-form expression for the objective function is available, but its values (possibly noisy) can be observed at selected points. A sketch of the two simpler baselines, grid and random search, is given below.
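
A minimal sketch of grid search and random search over a discrete search space like the one above. It assumes a user-supplied train_and_evaluate function that trains the CNN with the given hyperparameters and returns its validation error; all names are illustrative, not the authors' code.

```python
import itertools
import random

def grid_search(space, train_and_evaluate):
    """Exhaustively try every combination of the discrete hyperparameter values."""
    keys = list(space)
    best = None
    for values in itertools.product(*(space[k] for k in keys)):
        params = dict(zip(keys, values))
        error = train_and_evaluate(params)          # validation error of the trained CNN
        if best is None or error < best[1]:
            best = (params, error)
    return best

def random_search(space, train_and_evaluate, n_trials=50):
    """Sample hyperparameter combinations at random instead of enumerating them all."""
    keys = list(space)
    best = None
    for _ in range(n_trials):
        params = {k: random.choice(space[k]) for k in keys}
        error = train_and_evaluate(params)
        if best is None or error < best[1]:
            best = (params, error)
    return best
```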

Bayesian optimization seeks a trade-off between exploration (proposing hyperparameters with high uncertainty that may yield a noticeable improvement) and exploitation (proposing hyperparameters that are likely to perform as well as those the model has already seen, usually values very close to previously observed ones).
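
One common way to realize this trade-off is a Gaussian-process surrogate with the expected-improvement acquisition function; the sketch below illustrates the idea for a single already-encoded hyperparameter and a user-supplied objective returning the validation error. It is an assumption-laden illustration, not the implementation used in the article.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(candidates, gp, best_error):
    """EI balances exploitation (low predicted error) and exploration (high uncertainty)."""
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_error - mu) / sigma            # improvement relative to the best error so far
    return (best_error - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayesian_optimize(objective, bounds, n_init=5, n_iter=30):
    """Sequentially evaluate the hyperparameter value with the highest expected improvement."""
    x = np.random.uniform(*bounds, size=(n_init, 1))        # random initial observations
    y = np.array([objective(v[0]) for v in x])
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iter):
        gp.fit(x, y)                                         # refit the surrogate model
        candidates = np.linspace(*bounds, 200).reshape(-1, 1)
        next_x = candidates[np.argmax(expected_improvement(candidates, gp, y.min()))]
        x = np.vstack([x, [next_x]])
        y = np.append(y, objective(next_x[0]))               # observe the (possibly noisy) error
    return x[np.argmin(y)], y.min()
```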

Evolutionary optimization is based on the principle of genetic algorithms. A combination of hyperparameter values serves as an individual of the population, and the recognition accuracy on a test sample serves as the fitness function. Crossover, mutation and selection then pick out the optimal values of the neural network hyperparameters (a sketch of such a loop is given below). The authors propose a hybrid method whose algorithm combines Bayesian and evolutionary optimization: the network is first tuned by the Bayesian method, and the N best parameter sets it finds form the first generation of the evolutionary method, which then continues the tuning.
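
A minimal sketch of the evolutionary loop and of the hybrid seeding idea. The fitness function is assumed to train the CNN for a given hyperparameter set and return its accuracy; in practice its results would be cached, since each call is expensive. All helper names, including n_best_from_bayesian, are hypothetical.

```python
import random

def evolve(population, fitness, space, n_generations=14, mutation_rate=0.1):
    """Genetic-algorithm loop: selection, crossover and mutation over hyperparameter sets."""
    for _ in range(n_generations):
        scored = sorted(population, key=fitness, reverse=True)    # higher accuracy is better
        parents = scored[: len(scored) // 2]                      # truncation selection
        children = []
        while len(children) < len(population) - len(parents):
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in a}   # uniform crossover
            if random.random() < mutation_rate:
                k = random.choice(list(space))
                child[k] = random.choice(space[k])                # mutate one gene
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Hybrid method (sketch): the N best configurations found by Bayesian optimization
# form the first generation of the evolutionary search.
# initial_population = n_best_from_bayesian(bayesian_history, n=8)   # hypothetical helper
# best = evolve(initial_population, fitness, space=search_space)
```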

An experimental study of hyperparameter optimization of a convolutional neural network by the Bayesian, evolutionary and hybrid methods was carried out. During Bayesian optimization, 112 different CNN architectures were considered, with root-mean-square errors (RMSE) on the validation set ranging from 1629 to 11503. The CNN with the smallest error was selected; its RMSE on the test data was 55. At the start of evolutionary optimization, 8 different CNN architectures were randomly generated, with RMSE on the validation data from 2587 to 3684. Over 14 generations, this method produced CNNs with new sets of hyperparameters whose validation error decreased to values from 1424 to 1812. The CNN with the smallest error was selected; its RMSE on the test data was 48. The hybrid method combines the advantages of both methods and finds an architecture no worse than the Bayesian and evolutionary ones. With this method, the optimal CNN architecture (the one with the smallest root-mean-square error on the validation data) was obtained; its RMSE on the test data was 49. The results show that the quality of optimization is approximately the same for all three methods.

The Bayesian approach considers the entire hyperparameter space; to obtain greater accuracy with it, the optimization time must be increased. The evolutionary algorithm selects the best hyperparameter combinations starting from the initial population, so the initially generated population plays a large role, and owing to the nature of the algorithm this method is prone to getting stuck in a local extremum. However, the algorithm parallelizes well, so optimization with this method can be accelerated (see the sketch below). The hybrid method combines the advantages of both methods and finds an architecture no worse than either the Bayesian or the evolutionary method.
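
Because the individuals of one generation are independent, their fitness can be evaluated in parallel. A minimal sketch using only the standard library, assuming a picklable train_and_evaluate function (illustrative, not the authors' implementation):

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_generation(population, train_and_evaluate, max_workers=4):
    """Train and score every individual of the current generation in parallel processes."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        errors = list(pool.map(train_and_evaluate, population))
    return list(zip(population, errors))
```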

The experiments show that, on problems similar to the one considered and for a relatively small CNN, the examined optimization methods achieve approximately the same quality of neural network tuning. The presented results allow one of the considered hyperparameter optimization methods to be chosen when developing a CNN, based on the specifics of the problem being solved and the available resources.

Pages: 26-34
For citation

Cheremisinova O.N., Rostovtsev V.S. Methods of automated hyperparameters tuning of a convolutional neural network. Neurocomputers. 2021. V. 23. № 2. P. 26-34. DOI: https://doi.org/10.18127/j19998554-202102-03 (in Russian).

References
  1. Rostovtsev V.S., Cheremisinova O.N. Povysheniye kachestva raspoznavaniya izobrazheniy podborom parametrov svertochnoy neyronnoy seti. 52-ya Mezhdunar. nauch. konf. «Evraziyskoye Nauchnoye Obyedineniye». 2019. № 6(52). Ch. 2. S. 114-118 (in Russian).
  2. Gudfellou Ya., Bendzhio I., Kurvill A. Glubokoye obucheniye. Izd. 2-e. ispr. M.: DMK Press. 2018. 652 s.
  3. Bergstra J., Bengio Y. Random Search for Hyper-Parameter Optimization. Journal of Machine Learning Research. 2012. V. 13. P. 281-305.
  4. Gorshenin A.K., Kuzmin V.Yu. Optimizatsiya giperparametrov neyronnykh setey s ispolzovaniyem vysokoproizvoditelnykh vychisleniy dlya predskazaniya osadkov. Informatika i eye primeneniya. 2019. T. 13. Vyp. 1. S. 75–81 (in Russian).
  5. Brochu E., Cora V.M., de Freitas N. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning. arXiv preprint arXiv:1012.2599. 2010.
  6. Frazier P.I. A Tutorial on Bayesian Optimization. arXiv preprint arXiv:1807.02811. 2018.
  7. Kim J.-Y., Cho S.-B. Evolutionary Optimization of Hyperparameters in Deep Learning Models. 2019 IEEE Congress on Evolutionary Computation (CEC). 2019. P. 831-837.
  8. Sun Y., Xue B., Zhang M. Automatically evolving CNN architectures based on blocks. arXiv preprint arXiv:1810.11875. 2018.
  9. Dorigo M., Maniezzo M.V. Parallel genetic algorithms: Introduction and overview of current research. Parallel Genetic Algorithms / ed. J. Stender. IOS Press. 1993. P. 5-42.
  10. Svidetelstvo o registratsii programmy № 2021610729 ot 27.01.2021 «Programma avtomaticheskogo podbora giperparametrov svertochnoy neyronnoy seti»: programma dlya EVM. Cheremisinova O.N., Rostovtsev V.S. (in Russian).
Date of receipt: 05.02.2021
Approved after review: 25.02.2021
Accepted for publication: 13.03.2021