Journal Neurocomputers №10, 2009
Article in issue:
One-layer self-organized spiking neural network recognizing synchrony structure in input signal
Authors:
M. V. Kiselev
Abstract:
Research efforts aimed at the development of spiking neural networks have two aspects: modeling real neuron ensembles in the brain of animals or humans, and applying this kind of neural network to practically important problems such as dynamic pattern recognition or automated control in complex systems (e.g. in robotics). In both classes of applications it is assumed that the network has a large number of informational inputs (receptors) and that the signals arriving from the receptors have some non-trivial spatiotemporal structure. While processing these signals the network itself evolves, and the main goal of the evolution laws is the growth of populations of neurons inside it which can detect various structural properties of the signal. One of the most important structural features of the signal is the presence of time intervals during which some groups of receptors demonstrate synchronous activity. Depending on the time scale characterizing the synchrony, it may be called phase synchrony (when single spikes from a group of receptors arrive within a narrow time interval) or frequency synchrony (when the spike frequency of some receptors becomes higher than the average level during the same periods of time). Recognition of other kinds of spatiotemporal structure can often be based on recognition of signal synchrony, for example, using axonal delays.
Speaking of synchrony detection, we mean the following. Initially the network has a certain starting configuration containing no a priori knowledge of the synchrony properties of the signal. During the learning period it receives a signal with episodically occurring receptor synchrony. After this period the network should demonstrate the following properties: 1) for each synchronous receptor group there should exist a group of neurons (recognizing neurons) whose mean spike frequency during the corresponding synchrony periods is many times greater than the mean frequency outside these periods; 2) one group of recognizing neurons should react this way to episodes of synchronous firing of only one synchronous receptor group; 3) the correspondence between recognizing neurons and synchronous receptor groups should not change over time.
The complexity of the problem in our case is rooted in its generality. We would like to use the same network architecture for recognition of both phase and frequency synchrony. The network should learn to detect synchrony when its characteristic time, the frequency of synchrony periods, the number and size of synchronous receptor groups, etc. are unknown and may vary over wide ranges (orders of magnitude). The need to fine-tune network parameters to a given input signal (and possibly to adjust them if the signal properties change over time) may be one explanation of why applications of spiking networks to practical problems are still relatively few.
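As a minimal illustration of the two kinds of synchrony defined above, the following Python sketch generates a receptor signal containing both phase-synchrony and frequency-synchrony episodes; the receptor count, background rate, synchronous group, episode frequency and episode length are arbitrary assumptions chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

N_RECEPTORS = 100            # total number of receptors (illustrative)
SYNC_GROUP = list(range(10)) # a hypothetical synchronous receptor group
DT = 1.0                     # time step, ms
T_MS = 10_000                # length of the generated signal, ms
BASE_RATE = 5.0              # background Poisson rate, Hz

def poisson_spikes(rate_hz, steps, n):
    """Independent Poisson spike trains: True where a spike occurs."""
    return rng.random((steps, n)) < rate_hz * DT / 1000.0

steps = int(T_MS / DT)
spikes = poisson_spikes(BASE_RATE, steps, N_RECEPTORS)

# Phase synchrony: at a few random moments every receptor of the group
# emits a spike within the same narrow (here one-step) time window.
for t0 in rng.choice(steps, size=20, replace=False):
    spikes[t0, SYNC_GROUP] = True

# Frequency synchrony: during longer episodes the firing rate of the
# group is raised well above the background level.
for t0 in range(0, steps, 1000):
    episode = slice(t0, t0 + 200)                       # a 200 ms episode
    burst = poisson_spikes(40.0, 200, len(SYNC_GROUP))  # ~40 Hz burst
    spikes[episode, SYNC_GROUP] |= burst
```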
It is shown in this work that this problem can be solved efficiently by a one-layer network consisting of neurons similar to the widely used leaky integrate-and-fire (LIF) neurons. Neurons in this network form groups such that the neurons belonging to a group are connected with all neurons in all other groups by inhibitory links. As in the majority of other models, learning of the network is implemented using a synaptic plasticity mechanism similar to STDP (Spike-Timing-Dependent Plasticity). The principal distinctive features of my model are the following:
- Besides the two states of a standard LIF neuron, firing and not firing, a neuron in my model can be in the state of "suppressed firing". It corresponds to the situation when the total signal on the excitatory synapses exceeds the firing threshold but a strong signal on the inhibitory synapses prevents postsynaptic spike generation. In this case my version of the STDP rules decreases the weights of the excitatory synapses whose contribution to the membrane potential was large. The purpose of this mechanism is the maintenance of neuron diversity in the network; without it all neurons would learn to recognize only the groups of receptors characterized by the strongest synchrony.
- The neurons have an additional state component called stability. It determines their synaptic plasticity (so that neurons with high stability are not plastic at all) and reflects how far their learning of synchrony recognition has progressed. Its purpose is to guarantee that the acquired ability to recognize synchrony will not be destroyed later by STDP.
- Several homeostatic mechanisms are implemented whose goal is to keep the network in the working regime despite strong variations of the receptor signal and to make synchrony recognition possible under a great variety of conditions. These mechanisms regulate the total mean firing frequency of the neurons and the relative strength of the lateral inhibitory connections.
The performed computational experiments demonstrated the ability of the proposed network model to reliably recognize the synchrony structure of the receptor signal under a great variety of conditions.
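The following Python sketch is a loose, simplified reconstruction of a neuron with the listed mechanisms, not the model used in the article: the membrane dynamics, the constants, the stability update and the exact plasticity formulas are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

class SketchNeuron:
    """A loose LIF-like neuron with a "suppressed firing" state and
    stability-gated, STDP-like plasticity (constants are illustrative)."""

    def __init__(self, n_exc):
        self.w = rng.uniform(0.1, 0.3, n_exc)  # excitatory synaptic weights
        self.v = 0.0           # membrane potential
        self.tau = 20.0        # membrane time constant, ms
        self.threshold = 1.0   # firing threshold
        self.stability = 0.0   # in [0, 1]; at 1 the neuron is no longer plastic
        self.lr = 0.01         # base learning rate

    def step(self, exc_spikes, inhibition, dt=1.0):
        """exc_spikes: 0/1 vector of presynaptic excitatory spikes in this step;
        inhibition: summed lateral inhibitory input (non-negative)."""
        exc_input = float(self.w @ exc_spikes)
        self.v += dt / self.tau * (-self.v) + exc_input - inhibition
        plasticity = self.lr * (1.0 - self.stability)  # stability gates learning

        fired = False
        if self.v >= self.threshold:
            fired = True
            self.v = 0.0
            # Potentiate synapses active in this step (a crude stand-in
            # for an STDP pre-before-post trace).
            self.w += plasticity * exc_spikes * (1.0 - self.w)
            # Assumed update rule: stability slowly grows with each spike.
            self.stability = min(1.0, self.stability + 0.001)
        elif exc_input >= self.threshold:
            # "Suppressed firing": excitation alone exceeded the threshold,
            # but lateral inhibition prevented the spike. Depress the
            # synapses that contributed most, to keep neurons diverse.
            self.w -= plasticity * exc_spikes * self.w

        self.w = np.clip(self.w, 0.0, 1.0)
        return fired
```

In a full network, many such neurons would be grouped, with the inhibition argument supplied by spikes of neurons in the other groups; the homeostatic regulation of the mean firing rate and of the relative inhibition strength described in the abstract is omitted from this sketch.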
Pages: 3-12
References
  1. Gerstner W., Kistler W. Spiking Neuron Models. Single Neurons, Populations, Plasticity. Cambridge University Press. 2002.
  2. Maass W., Markram H. On the computational power of circuits of spiking neurons // Journal of Computer and System Sciences. 2004. 69. P. 593-616.
  3. Izhikevich E. Polychronization: Computation with Spikes // Neural Computation. 2006. 18. P. 245-282.
  4. Kiselev M. Statistical Approach to Unsupervised Recognition of Spatio-temporal Patterns by Spiking Neurons. Proceedings of IJCNN. 2003. P. 2843-2847.
  5. Kiselev M. SSNUMDL - a network of stabilizing spiking neurons recognizing spatiotemporal patterns // Neurocomputer. 2005. №12. P. 16-24.
  6. Legenstein R., Pecevski D., Maass W. Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity. Proceedings of NIPS-07. 2007. P. 881-888.