Journal Information-Measuring and Control Systems, No. 2, 2020
Article in issue:
Improving the efficiency of education in a technical university
DOI: 10.18127/j20700814-202002-08
UDC: 004.42
Authors:

T.A. Onufrieva – Ph.D.(Eng.), Associate Professor, 

«Information Systems and Networks» Department, Kaluga Branch of the Bauman MSTU

E-mail: onufrievata@mail.ru

A.S. Sukhova – Student, 

«Information Systems and Networks» Department, Kaluga Branch of the Bauman MSTU

E-mail: nastya_s@kaluga.ru

Abstract:

The article addresses the need for rapid training under a high rate of incoming information. To help students of technical universities master a large flow of information more efficiently, it is proposed to develop a self-configuring educational system with a digital study guide. An artificial neural network was chosen as the method of implementing this system. The article covers the stages of the neural-network problem statement: describing the initial data, determining the network's input signal, choosing a specific network structure, and forming the network tuning algorithm for solving the problem.

The development of a neural network begins with describing the source data. At this stage, all important features characterizing the object of the future model are taken into account; these features should be representative and consistent. The input data of the neural network are based on the results of testing on the topic studied and the parameters of the user session. Determining the network's input signal involves preprocessing the source data: converting all values to a numerical type and normalizing them. The next step is choosing the topology of the neural network, which depends on the features of the process the network interprets. In this article, a multilayer Jordan recurrent neural network was selected: it is capable of unsupervised learning and remembers its previous state through a feedback layer. In accordance with the selected topology, the parameters of its elements are set: the value of the forgetting factor, the number of neurons, and the activation function for each layer. The network is trained with genetic algorithms, which, unlike gradient descent and error backpropagation, do not require evaluating complex mathematical expressions. The output of the network is a conclusion about the student's level of mastery of a given topic.
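The preprocessing and topology steps above can be sketched in code. This is an illustrative Python sketch only (the authors implemented their system in C#); the names `min_max_normalize` and `JordanNetwork`, the sigmoid activation, and the way the forgetting factor scales the context signal are assumptions for demonstration, not the article's actual implementation.

```python
import math
import random

def min_max_normalize(values):
    """Scale raw feature values (e.g. test scores, session parameters) into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

class JordanNetwork:
    """Minimal Jordan recurrent network: a context layer feeds the previous
    output back into the input, scaled by a forgetting factor."""

    def __init__(self, n_in, n_hidden, n_out, forgetting=0.5, seed=0):
        rng = random.Random(seed)
        self.forgetting = forgetting
        self.context = [0.0] * n_out  # copy of the previous output
        # small random weights; input layer also sees the context signal
        self.w_in = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + n_out)]
                     for _ in range(n_hidden)]
        self.w_out = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
                      for _ in range(n_out)]

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def step(self, inputs):
        """One forward pass over normalized inputs plus the context signal."""
        full_in = inputs + [self.forgetting * c for c in self.context]
        hidden = [self._sigmoid(sum(w * x for w, x in zip(row, full_in)))
                  for row in self.w_in]
        out = [self._sigmoid(sum(w * h for w, h in zip(row, hidden)))
               for row in self.w_out]
        self.context = out[:]  # remember this output for the next step
        return out
```

Because the context layer carries state between calls, feeding the same normalized test results twice generally yields different outputs on the second pass, which is what lets the network track a student's session history.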

The neural network was developed in the Microsoft Visual Studio Community 2019 integrated development environment using the C# programming language. The developed network was trained and tested on the example of the discipline "Fundamentals of Microcontroller Programming." Its quality relative to conventional testing was evaluated using the probabilities of Type I and Type II errors. Based on this assessment of the network's performance, we can conclude that at this stage its structure and operating algorithm have been chosen correctly. The next stage of development will be joint work with the teacher and testing of the system.
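The Type I / Type II error evaluation mentioned above can be illustrated with a small helper. This is a generic sketch, not the authors' evaluation code; the function name `error_rates` and the convention that "positive" means "the system concludes the topic is mastered" are assumptions for illustration.

```python
def error_rates(actual, predicted):
    """Estimate Type I (false positive) and Type II (false negative) rates.

    'Positive' here means the system concludes the topic is mastered.
    actual and predicted are equal-length lists of booleans.
    """
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    negatives = sum(1 for a in actual if not a)  # topics actually not mastered
    positives = sum(1 for a in actual if a)      # topics actually mastered
    type1 = fp / negatives if negatives else 0.0
    type2 = fn / positives if positives else 0.0
    return type1, type2
```

Comparing these two rates for the neural network's conclusions against those of conventional testing gives a simple way to judge whether the network's structure was chosen correctly.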

Pages: 51-60
Date of receipt: February 7, 2020