Journal Information-Measuring and Control Systems, No. 5, 2022
Article in issue:
Functional transformations of images in neural networks
Type of article: scientific article
DOI: https://doi.org/10.18127/j20700814-202205-04
UDC: 004.032.26
Authors:

V.V. Kvashennikov1

1 Kaluga branch of Bauman Moscow State Technical University (Kaluga, Russia)

Abstract:

The article describes functional transformations of images in neural networks. The input stimuli of the network are represented by codes: allowed code sequences correspond to images, while forbidden sequences separate the images from one another. During training, image codes are written into the network's memory, which is determined by the network's synaptic connections and their weights. An image is a sequence of codes stored in this memory. The poorly formalized notion of the meaning of an image is replaced by the presence or absence of the image code in memory: if the code is in the network memory, the sequence of symbols is meaningful. The allowed sequences of an image form the image's sphere, and the large redundancy of this sphere makes it possible to identify images reliably even under errors, erasures, and invariant transformations of the input sequences. Functional image transformations are tunable universal transformations. In many applications, neural networks are regarded as an effective means of function approximation. The nonlinear characteristic of a neuron can be arbitrary: a sigmoid, an arbitrary wave packet or wavelet, a sine, or a polynomial. The complexity of the network depends on the choice of nonlinear function, but with any nonlinearity the network approximates any functional dependence fairly accurately; neural networks are trainable functional converters. For images to be perceived, they must either be known in advance or, if new, be significant enough for the network to memorize them, after which they become known to the network. Any sequence of symbols can arrive at the input, but only the sequences on which the neural network has been trained are perceived. Functional transformations of images in neural networks are invariant to certain classes of transformations.
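The idea that the "meaning" of a sequence is the presence of its code in network memory, and that code redundancy permits recognition despite symbol errors, can be sketched in a few lines. This is an illustrative model only, not the article's implementation; the class and method names (`ImageMemory`, `train`, `recognize`) are hypothetical, and Hamming distance stands in for the network's similarity measure.

```python
def hamming(a: str, b: str) -> int:
    """Number of positions where two equal-length codes differ."""
    return sum(x != y for x, y in zip(a, b))

class ImageMemory:
    """Toy model: memory is the set of allowed (trained) code sequences."""

    def __init__(self, max_errors: int = 1):
        self.codes: set[str] = set()   # allowed sequences learned in training
        self.max_errors = max_errors   # tolerated distortion of the input

    def train(self, code: str) -> None:
        self.codes.add(code)           # training writes the image code to memory

    def recognize(self, code: str):
        """Return the stored code nearest to the input, if close enough."""
        best = min(self.codes, key=lambda c: hamming(c, code), default=None)
        if best is not None and hamming(best, code) <= self.max_errors:
            return best                # the sequence "makes sense"
        return None                    # a forbidden sequence: no meaning

memory = ImageMemory(max_errors=1)
memory.train("101100")
memory.train("010011")
print(memory.recognize("101100"))  # exact match -> 101100
print(memory.recognize("101101"))  # one symbol error, still recognized -> 101100
print(memory.recognize("000111"))  # too far from any stored code -> None
```

The redundancy of the stored codes is what makes the one-error case recoverable: as long as distinct image codes are farther apart than twice the tolerated error count, a distorted input still has a unique nearest stored code.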
For example, recognition of visual images is invariant to affine transformations (scaling, translation, rotation) and to chromaticity, brightness, and contrast. The invariance of images to errors and to symbol erasures allows the input image codes to be treated as noise-resistant codes and, to some approximation, as stochastic (random) codes. A neural network is a universal, trainable, and invariant functional converter of an input redundant sequence into an effective processing code. The effective image code is also obtained by decoding the input image code, i.e., by recognizing the image. Effective image codes are combined and used as input for the subsequent functional transformations of a multilayer neural network. These transformations yield effective image codes whose functional transformations are simpler than those of the input images. A combination of effective image codes corresponds to a redundant sequence of symbols whose functional transformation gives an effective code of a complex image. A combination of images is itself an image, with a corresponding effective code at the output. A multilevel hierarchical construction of complex images from simple or other complex images makes it possible to represent complex images as generalized cascade codes. Decoding of generalized cascade codes with low-power component codes can be performed by functional transformations of images in neural networks. Fragmenting an image simplifies obtaining its effective code and increases the image compression ratio.
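The notion of an invariant functional converter can be illustrated with a deliberately simple transformation class. In this sketch, cyclic shifts of a code stand in for affine transformations of a visual image: every input is reduced to a canonical form before lookup, so all shifted variants map to the same short effective code. The names (`InvariantConverter`, `canonical`) and the choice of transformation are hypothetical illustrations, not the article's construction.

```python
def canonical(code: str) -> str:
    """Lexicographically smallest cyclic rotation: a shift-invariant normal form."""
    return min(code[i:] + code[:i] for i in range(len(code)))

class InvariantConverter:
    """Maps redundant input sequences to compact effective codes,
    invariantly with respect to cyclic shifts."""

    def __init__(self):
        self.effective: dict = {}  # canonical form -> effective code (an index)

    def train(self, code: str) -> int:
        key = canonical(code)
        return self.effective.setdefault(key, len(self.effective))

    def recognize(self, code: str):
        return self.effective.get(canonical(code))

net = InvariantConverter()
code_a = net.train("0010111")
# "1110010" is "0010111" cyclically shifted by four positions,
# so it is recognized as the same image:
assert net.recognize("1110010") == code_a
```

The effective code here is just an index into learned memory; the point is that the conversion factors into an invariant normalization followed by a lookup, which is the role the article assigns to the network's functional transformation.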

Pages: 15-24
For citation

Kvashennikov V.V. Functional transformations of images in neural networks. Information-Measuring and Control Systems. 2022. V. 20. № 5. P. 15−24. DOI: https://doi.org/10.18127/j20700814-202205-04 (in Russian)

References
  1. Galushkin A.I. Nejronnye seti: osnovy teorii. Pod red. A.I. Galushkina. M.: IPRRZhR. 2009. 480 s. (in Russian).
  2. Kvashennikov V.V. Sravnitel'nyj analiz sistem biologicheskoj i komp'juternoj pamjati. Izvestija Instituta inzhenernoj fiziki. 2021. № 1(59). S. 37-41 (in Russian).
  3. Gorban' A.N. Obobshhennaja approksimacionnaja teorema i vychislitel'nye vozmozhnosti nejronnyh setej. Sibirskij zhurnal vychislitel'noj matematiki. 1998. T. 1. № 1. S. 12-24 (in Russian).
  4. Kallan R. Osnovnye koncepcii nejronnyh setej. Per. s angl. M.: «Vil'jams». 2001. 288 s. (in Russian).
  5. Kruglov V.V., Borisov V.V. Iskusstvennye nejronnye seti. Teorija i praktika. M.: Gorjachaja linija – Telekom. 2001. 382 s. (in Russian).
  6. Savel'ev A.V. Na puti k obshhej teorii nejrosetej. K voprosu slozhnosti. Nejrokomp'jutery: razrabotka, primenenie. 2006. № 4-5. S. 4-14 (in Russian).
  7. Hachumov V.M. O rasshirenii funkcional'nyh vozmozhnostej iskusstvennyh nejronnyh setej. Aviakosmicheskoe priborostroenie. 2008. № 5. S. 53–59 (in Russian).
  8. Nikolenko S., Kadurin A., Arhangel'skaja E. Glubokoe obuchenie. SPb: Piter. 2018. 480 s. (in Russian).
  9. Abramov N.S., Fralenko V.P., Hachumov M.V. Obzor metodov raspoznavanija obrazov na osnove invariantov k jarkostnym i geometricheskim preobrazovanijam. Sovremennye naukoemkie tehnologii. 2020. № 6-1. S. 110-117 (in Russian).
  10. Gibson U. Raspoznavanie obrazov. OOO «Izdatel'skaja Gruppa «Azbuka-Attikus». 2015. 384 s. (in Russian).
  11. Hajkin S. Nejronnye seti. Izd. 2-e. M.: Vil'jams. 2006. 1104 s. (in Russian).
  12. Osmolovskij S.A. Stohasticheskie metody zashhity informacii. M.: Radio i svjaz'. 2003. 320 s. (in Russian).
  13. Shamsimuhametov D., Andreev K., Frolov A. Issledovanie metodov dekodirovanija na osnove glubinnyh nejronnyh setej. Sb. trudov 42-j Mezhdisciplinarnoj shkoly-konf. IPPI RAN «Informacionnye tehnologii i sistemy». 2018. S. 208-218 (in Russian).
  14. Zuj T.N. Invarianty v zadachah raspoznavanija graficheskih obrazov. Vestnik RUDN. Ser. Matematika. Informatika. Fizika. 2016. № 1. S. 76–85 (in Russian).
  15. Abramov N.S., Hachumov V.M. Raspoznavanie na osnove invariantnyh momentov. Vestnik Rossijskogo universiteta druzhby narodov. Ser. Matematika, informatika, fizika. 2014. № 2. S. 142–149 (in Russian).
  16. Potapov A. Sistemy komp'juternogo zrenija: sovremennye zadachi i metody. Control Engineering. Rossija. 2014. № 1. S. 20–26 (in Russian).
  17. Forsajt D., Pons Zh. Komp'juternoe zrenie. Sovremennyj podhod. Per. s angl. M.: «Vil'jams». 2004. 926 s. (in Russian).
  18. Helmer S., Meger D., Viswanathan P., McCann S., Dockrey M., Fazli P., Southey T., Muja M., Joya M., Little J., Lowe D., Mackworth A. Semantic Robot Vision Challenge: Current State and Future Directions. IJCAI-09 Workshop on Competitions in Artificial Intelligence and Robotics. Pasadena, California, USA. July 11–13 2009. 7 p.
  19. Batiouaa I., Benouinia R., Zenkouara K., Zahia A. Image classification using separable invariants moments based on Racah polynomials. Procedia Computer Science Volume. 2018. V. 127. P. 320–327.
  20. Potapov A.S. Raspoznavanie obrazov i mashinnoe vosprijatie: obshhij podhod na osnove principa minimal'noj dliny opisanija. SPb: Politehnika. 2007. 548 s. (in Russian).
  21. Beaty R.E., Benedek M., Wilkins R.W., Jauk E., Fink A., Silvia P.J., Hodges D.A., Koschutnig K., Neubauer A.C. Creativity and the default network: A functional connectivity analysis of the creative brain at rest. Neuropsychologia. 2014. V. 64. P. 92-98. (https://www.ncbi.nlm.nih.gov/pubmed/25245940)
  22. Bloh Je.L., Zjablov V.V. Obobshhennye kaskadnye kody. M.: Svjaz'. 1976. 240 s. (in Russian).
  23. Vlasov A.I., Larionov I.T., Orehov A.N., Tetik L.V. Sistema avtomaticheskogo raspoznavanija jemocional'nogo sostojanija cheloveka. Nejrokomp'jutery: razrabotka, primenenie. 2021. T. 23. № 5. S. 33−50. DOI: https://doi.org/10.18127/j19998554202105-03 (in Russian).
Date of receipt: 12.09.2022
Approved after review: 19.09.2022
Accepted for publication: 10.10.2022