Journal Neurocomputers, № 6, 2024
Article in the issue:
Errors in the operation of neural network systems and malicious machine learning in technical vision tasks
Type of article: scientific article
DOI: 10.18127/j19998554-202406-04
UDC: 004.9
Authors:

A.V. Ermolenko1, V.M. Polushkin2, M.V. Smirnov3

1,2 FSBI "46 Central Research Institute" of the Russian Ministry of Defense (Moscow, Russia)

3 Financial University under the Government of the Russian Federation (Moscow, Russia)

3 academy@fa.ru

Abstract:

For the development of socio-economic systems, the level of digitalization must be continuously raised, including through the use of artificial intelligence technologies. One promising direction in this area is the introduction of neural network systems based on adversarial and generative learning into modern intelligent image recognition systems. Adversarial machine learning, in terms of both developing attacks and defending against them, is a topical research area for a wide range of scientists and specialists. The purpose of this work is to examine the practical features of algorithms for malicious attacks on neural network image recognition systems. The results of an analysis of adversarial and malicious machine learning with respect to the robustness of computer vision systems are presented. Errors in the construction of neural network image recognition systems based on frameworks of the same type are considered. Methods of artificial interference with the operation of neural network classifiers are analyzed, and an overview of malicious attack algorithms at various stages of neural network operation is given. It is emphasized that some types of attacks rely on a slight distortion of a frame that is almost imperceptible to a human yet radically changes the result of neural network classification. The examples of natural errors in the operation of neural network vision-system algorithms considered here underscore the need for deeper study of their side effects at the stages of experimental product development, acceptance of finished samples, system integration, and operation of automated safety and security systems.
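The imperceptible-distortion attack mentioned in the abstract corresponds to the fast gradient sign method (FGSM) described in reference [10]. The following minimal sketch illustrates the idea, assuming a trained PyTorch image classifier; the names model, image, label and the value of epsilon are illustrative assumptions and are not taken from the article itself.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        """Add an epsilon-sized perturbation along the sign of the loss gradient."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # The perturbation is barely visible to a human observer,
        # but it can flip the classifier's decision.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

    # Usage (illustrative): compare predictions before and after the perturbation.
    # adv = fgsm_attack(model, images, labels)
    # print(model(images).argmax(dim=1), model(adv).argmax(dim=1))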

Pages: 23-30
For citation

Ermolenko A.V., Polushkin V.M., Smirnov M.V. Errors in the operation of neural network systems and malicious machine learning in technical vision tasks. Neurocomputers. 2024. V. 26. № 6. P. 23-30. DOI: https://doi.org/10.18127/j19998554-202406-04 (In Russian)

References
  1. Kulik S.D. Pattern recognition algorithms and modeling of automated factual information retrieval systems. Neurocomputers. 2007. № 2-3. P. 67–82. (In Russian)
  2. Park H., Ryu G., Choi D. Partial Retraining Substitute Model for Query-Limited Black-Box Attacks. Applied Sciences. 2020. V. 10. № 20. P. 7168. DOI: 10.3390/app10207168.
  3. Tang W., Li H. Robust Airport Surface Object Detection Based on Graph Neural Network. Applied Sciences. 2024. V. 14. № 9. P. 3555. DOI: 10.3390/app14093555.
  4. Chen X., Si Y., Zhang Z., Yang W., Feng J. Improving Adversarial Robustness of ECG Classification Based on Lipschitz Constraints and Channel Activation Suppression. Sensors. 2024. V. 24. № 9. P. 2954. DOI: 10.3390/s24092954.
  5. Zhu Y., Li Y., Duan Z. Adaptive Whitening and Feature Gradient Smoothing-Based Anti-Sample Attack Method for Modulated Signals in Frequency-Hopping Communication. Electronics. 2024. V. 13. № 9. P. 1784. DOI: 10.3390/electronics13091784.
  6. Garaev R., Rasheed B., Khan A.M. Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks. Algorithms. 2024. V. 17. № 4. P. 162. DOI: 10.3390/a17040162.
  7. Malicious machine learning: how dangerous it is and how to protect yourself. [Electronic resource] – Access mode: https://www.tadviser.ru/index.php/Статья:Вредоносное_машинное_обучение_(Adversarial_Machine_Learning,_AML), accessed 04.06.2024. (In Russian)
  8. Kurakin A., Goodfellow I.J., Bengio S. Adversarial examples in the physical world. [Electronic resource] – Access mode: https://arxiv.org/pdf/1607.02533, accessed 04.06.2024.
  9. Evasion attacks on Machine Learning (or "Adversarial Examples"). [Electronic resource] – Access mode: https://towardsdatascience.com/evasion-attacks-on-machine-learning-or-adversarial-examples-12f2283e06a1, accessed 04.06.2024.
  10. Goodfellow I.J., Shlens J., Szegedy C. Explaining and harnessing adversarial examples. [Electronic resource] – Access mode: https://arxiv.org/pdf/1412.6572, accessed 04.06.2024.
Date of receipt: 02.09.2024
Approved after review: 26.09.2024
Accepted for publication: 26.11.2024