Journal Neurocomputers №3, 2017
Article in issue:
Logic gate neural network model and its learning algorithm
Authors:
T.E. Mikhailyuk - Post-graduate Student, Department of Electronics and Biomedical Technology, Ufa State Aviation Technical University (UGATU)
E-mail: realotoim@mail.ru
S.V. Zhernakov - Dr.Sc. (Eng.), Professor, Head of Department of Electronics and Biomedical Technology, Ufa State Aviation Technical University
E-mail: zhsviit@mail.ru
Abstract:
Artificial intelligence algorithms require the combined use of hardware and software. The approach based on modeling artificial neural networks is versatile and flexible, but it has limitations related to its field of application.
Existing mathematical models of a neuron operate with continuous quantities and are realized on the basis of analog elements, which leads to their poor compatibility with digital equipment.
In this paper, a method for constructing a neural-like architecture based on discrete trainable structures is proposed to improve the compatibility of artificial neural network models with the digital basis of programmable logic devices and general-purpose processors.
The trainable gate network is a representative of Boolean networks in which the mapping of the input signal vector to the output vector can be specified by means of a learning algorithm. A mathematical model of a Boolean (gate) trainable network in principal disjunctive normal form (PDNF) is obtained. Such a model contains none of the operators inherent in neural networks, since it is bit-oriented: the weights are Boolean variables rather than real numbers.
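As an illustration of such a form (the notation below is assumed for this summary and is not taken verbatim from the paper), a PDNF model of an output y over inputs x_1, ..., x_n can be written with Boolean weight coefficients w_j ∈ {0, 1} that select the active minterms:

y = \bigvee_{j=1}^{2^n} \left( w_j \wedge x_1^{\sigma_{j1}} \wedge \dots \wedge x_n^{\sigma_{jn}} \right), \qquad x^{1} = x, \quad x^{0} = \bar{x},

where the vector \sigma_j enumerates the input combinations. In this illustrative form, learning amounts to setting the bits w_j, so the whole structure maps directly onto logic gates.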
With the help of De Morgan's laws, the principal conjunctive normal form (PCNF) of the model is obtained.
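In the same assumed notation, this dual form can be reproduced by writing the complement of the output in PDNF with inverted coefficients and then negating it with De Morgan's laws:

y = \bigwedge_{j=1}^{2^n} \left( w_j \vee x_1^{\bar{\sigma}_{j1}} \vee \dots \vee x_n^{\bar{\sigma}_{jn}} \right), \qquad \bar{\sigma}_{ji} = 1 - \sigma_{ji},

i.e. each conjunction of literals gated by w_j becomes a disjunction of the complementary literals gated by the same Boolean coefficient.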
By applying the Widrow-Hoff rule to the obtained network model in PDNF, a learning rule for the Boolean network is derived.
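For reference, the classical Widrow-Hoff (delta, or least-mean-squares) rule from which the Boolean learning rule is derived updates a real-valued weight vector as

w(k+1) = w(k) + \eta \, \bigl( d(k) - y(k) \bigr) \, x(k),

where d(k) is the desired output, y(k) the actual output and \eta the learning rate; in the gate network the analogous correction has to act on the Boolean coefficients, and its exact form is derived in the paper.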
The developed network can be considered as a basis for constructing feedforward neural networks with a flexible topology that can be adapted to a specific task, down to the level of logic elements.
Work in the field of creating discrete trainable networks is aimed at solving the problem of optimizing hardware and software costs in the construction of neural networks and of digital equipment in general. The trainable gate network is not intended to replace the feedforward neural network, but it can be considered as a basis for constructing any digital network.
Pages: 27-33
References
- Aljautdinov M.A., Galushkin A.I., Kazancev P.A., Ostapenko G.P. Nejjrokompjutery: ot programmnojj k apparatnojj realizacii. M.: Gorjachaja linija - Telekom. 2008. 152 s.
- Mezenceva O.S., Mezencev D.V., Lagunov N.A., Savchenko N.S. Realizacija nestandartnykh modelejj nejjronov na vektornom processore Neuromatrix // Izv. JUFU. Tekhnicheskie nauki. 2012. T. 131. № 6. S. 178-182.
- Adetiba E., Ibikunle F.A., Daramola S.A., Olajide A.T. Implementation of Efficient Multilayer Perceptron ANN Neurons on Field Programmable Gate Array Chip // International Journal of Engineering & Technology. 2014. V. 14. № 1. P. 151-159.
- Manchev O., Donchev B., Pavlitov K. FPGA implementation of artificial neurons // Electronics: An Open Access Journal (Sozopol, Bulgaria, Sept. 22-24 2004) [Online]. Available: https://www.researchgate.net/publication/251757109_FPGA_IMPLEMENTATION_OF_ARTIFICIAL_NEURONS (accessed: 28.01.2017).
- Kohut R., Steinbach B. The Structure of Boolean Neuron for the Optimal Mapping to FPGAs [Online]. Available: http://www.informatik.tu-freiberg.de/prof2/publikationen/CADSM2005_BN_FPGA.pdf (accessed: 1.02.2017).
- Korani R., Hajera H., Imthiazunnisa B., Chandra Sekhar R. FPGA modelling of neuron for future artificial intelligence applications // International Journal of Advanced Research in Computer and Communication Engineering. 2013. V. 2. № 12. P. 4763-4768.
- Omondi A.R., Rajapakse J.C. FPGA Implementations of Neural Networks // Springer. 2006 [Online]. Available: http://lab.fs.uni-lj.si/lasin/wp/IMIT_files/neural/doc/Omondi2006.pdf (accessed: 28.01.2017).
- Gribachev V. EHlementnaja baza apparatnykh realizacijj nejjronnykh setejj [EHlektronnyjj resurs] // Komponenty i tekhnologii: sajjt. Rezhim dostupa: http://kit-e.ru/articles/elcomp/2006_8_100.php (data obrashhenija: 30.06.2016).
- Mikhajjljuk T.E., ZHernakov S.V. Povyshenie ehffektivnosti ispolzovanija resursov mikroskhemy PPVM pri realizacii nejjronnykh setejj // Nejjrokompjutery: razrabotka, primenenie. 2016. № 11. S. 30-39.
- Mikhajjljuk T.E., ZHernakov S.V. Ob odnom podkhode k vyboru optimalnojj arkhitektury PLIS v nejjrosetevom logicheskom bazise // Informacionnye tekhnologii. 2017. T. 23. № 3. S. 233-240.
- Kohut R., Steinbach B. Decomposition of Boolean Function Sets for Boolean Neural Networks [Online]. Available: https://www.researchgate.net/publication/228865096_Decomposition_of_Boolean_Function_Sets_for_Boolean_Neural_Networks (accessed: 1.02.2017).
- Anthony M. Boolean Functions and Artificial Neural Networks [Online]. Available: http://www.cdam.lse.ac.uk/Reports/Files/cdam-2003-01.pdf (accessed: 29.01.2017).
- Kohut R., Steinbach B. Boolean Neural Networks // WSEAS Transactions on Systems. 2004. V. 3. № 2. P. 420-425.
- Steinbach B., Kohut R. Neural Networks - A Model of Boolean Functions [Online]. Available: https://www.researchgate.net/publication/246931125_Neural_Networks_-_A_Model_of_Boolean_Functions (accessed: 1.02.2017).
- Vinay D. Mapping Boolean Functions with Neural Networks having Binary Weights and Zero Thresholds // IEEE Transactions on Neural Networks. 2001. V. 12. № 3. P. 639-642.
- Zhang C., Yang J., Wu W. Binary Higher Order Neural Networks for Realizing Boolean Functions // IEEE Transactions on Neural Networks. 2011. V. 22. № 5. P. 701-713.
- Rademacher H. Einige Sätze über Reihen von allgemeinen Orthogonalfunktionen // Math. Ann. 1922. V. 87. № 1-2. P. 112-138.
- KHajjkin S. Nejjronnye seti: Polnyjj kurs. M.: Viljams. 2008. 1104 s.
- Osovskijj S. Nejjronnye seti dlja obrabotki informacii: Per. s polsk. I.D. Rudinskogo. M.: Finansy i statistika. 2002. 344 s.
- Shin Y., Ghosh J. Efficient Higher-order Neural Networks for Classification and Function Approximation // The University of Texas at Austin. 1995 [Online]. Available: https://www.researchgate.net/publication/2793545_Efficient_Higher-order_Neural_Networks_for_Classification_and_Function_Approximation (accessed: 28.01.2017).
- SHevelev JU. P. Diskretnaja matematika. CH. 1: Teorija mnozhestv. Buleva algebra (Avtomatizirovannaja tekhnologija obuchenija «Simvol»): Uch. posobie. Tomsk: TUSUR. 2003. 118 s.
- Omondi A., Premkumar B. Residue Number Systems: Theory and Implementation // London, Imperial College Press. 2007. 312 p.