Journal Neurocomputers, No. 12, 2010
Article in issue:
Dynamic formation of radial-basis neural network structure
Authors:
V. N. Vichugov, M. S. Sukhodoev, G. P. Tsapko
Abstract:
There are difficulties in using multilayer perceptrons to build a neural network model of the plant in automatic control problems: additional training of a multilayer perceptron in one part of the workspace leads to the loss of the trained state over the entire workspace of the neural network. This drawback does not exist in the radial-basis neural network (RBNN), because every element of this network affects the output value mainly within a limited region of the workspace, characterized by the position of the element center and by the parameter σ called the width of the radial function. A gradient algorithm based on the minimization of the network error function is used for training the RBNN. According to this algorithm, the changes of the weight parameter Δwi, of the element width Δσi and of the element center coordinates Δcij are calculated for each element. The experiments revealed several drawbacks of the classical gradient learning algorithm: the algorithm does not include rules for choosing the initial state of the network elements; the need to change the parameters of all network elements leads to significant computational costs; the RBNN cannot reach a steady state during training if there are elements with similar values of the parameters cij and σi. We introduced the coefficient of mutual intersection of elements in order to eliminate the last drawback. To calculate this coefficient for some element, one needs to find the second element whose center is closest to the center of the selected element. The coefficient of mutual intersection is defined as the sum of the output value of the current element at the center of the second element and the output value of the second element at the center of the current element.
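The classical gradient update described above can be sketched as follows. The Gaussian form of the radial function, the squared-error loss and the learning rate eta are assumptions for illustration, since the abstract does not specify them:

```python
import numpy as np

def gaussian(x, c, sigma):
    # Radial element activation with Gaussian form (an assumption;
    # the abstract only names the width parameter sigma)
    return np.exp(-np.sum((x - c) ** 2) / sigma ** 2)

def gradient_step(x, target, w, centers, sigmas, eta=0.05):
    """One classical gradient step: for every element compute the weight
    change dw_i, the width change dsigma_i and the center change dc_ij
    that reduce the squared network error at training point x."""
    phi = np.array([gaussian(x, c, s) for c, s in zip(centers, sigmas)])
    y = w @ phi                       # network output: weighted sum of elements
    err = y - target
    # derivatives of E = 0.5 * err**2 with respect to w_i, sigma_i, c_ij
    dw = -eta * err * phi
    dsigma = -eta * err * w * phi * 2 * np.sum((x - centers) ** 2, axis=1) / sigmas ** 3
    dc = -eta * err * (w * phi / sigmas ** 2)[:, None] * 2 * (x - centers)
    return w + dw, centers + dc, sigmas + dsigma
```

Note that every element's parameters are touched on every step, which is the computational-cost drawback named in the abstract.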
In experiments on the approximation of various functions using the RBNN it was determined that the maximum value of the mutual intersection should be limited to 1.9 in order to maximize the training quality. A modified gradient learning algorithm was developed to eliminate the drawbacks of the classical one. The differences from the classical algorithm are the following: the rules for building the RBNN structure during the training process are defined; the computational cost of each training cycle is reduced; the possibility of building a structure with identical element parameters is excluded. An example of two-dimensional function approximation demonstrates that the modified gradient learning algorithm is capable of generating the RBNN structure dynamically.
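A minimal sketch of the structural rule implied by the abstract: before a new element is added during training, its mutual intersection coefficient with the nearest existing element is computed, and the insertion is rejected if the 1.9 limit would be exceeded. The Gaussian activation, the initial width `sigma0` and the function names are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

MAX_INTERSECTION = 1.9  # experimental limit reported in the abstract

def gaussian(x, c, sigma):
    # Gaussian radial activation (form assumed)
    return np.exp(-np.sum((x - c) ** 2) / sigma ** 2)

def try_add_element(centers, sigmas, weights, x, w0=0.0, sigma0=0.5):
    """Try to add a candidate element centred at training point x.
    The candidate is accepted only if its mutual intersection with the
    nearest existing element stays below MAX_INTERSECTION; this keeps
    the structure free of elements with nearly identical parameters."""
    if len(centers) == 0:
        return [x.copy()], [sigma0], [w0], True
    dists = [np.linalg.norm(c - x) for c in centers]
    j = int(np.argmin(dists))          # nearest existing element
    # sum of the candidate's output at c_j and element j's output at x
    k = gaussian(centers[j], x, sigma0) + gaussian(x, centers[j], sigmas[j])
    if k >= MAX_INTERSECTION:
        return centers, sigmas, weights, False   # too close: keep structure
    return centers + [x.copy()], sigmas + [sigma0], weights + [w0], True
```

Two coincident elements would give a coefficient of 2, so the 1.9 threshold rejects near-duplicates while still allowing overlapping neighbours.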
Pages: 7-11
References:
  1. Haykin S. Neural Networks: A Comprehensive Foundation. 2nd ed. Ed. by N. N. Kussul. Moscow: Williams Publishing House. 2006 (in Russian).
  2. Osowski S. Neural Networks for Information Processing. Translated from Polish. Moscow: Finansy i Statistika. 2002 (in Russian).
  3. Jianyu L., Luo Siwei, Qi Yingjian, Huang Yaping. Numerical solution of elliptic partial differential equation using radial basis function neural networks // Neural Networks. 2003. № 5/6. P. 729-734.