Journal: Highly Available Systems, № 4, 2023
Article in the issue:
Analysis of approaches to ensuring the security of automated systems with artificial intelligence technologies for face recognition
Type of article: scientific article
DOI: https://doi.org/10.18127/j20729472-202304-02
UDC: 681.3
Authors:

V.I. Korolev1, P.A. Keyer2

1,2 Institute of Informatics Problems of FRC CSC RAS (Moscow, Russia)
1 Gubkin Russian State University of Oil and Gas (Moscow, Russia)
1 vkorolev@ipiran.ru, 2 pkeyer@ipiran.ru

Abstract:

The widespread use of artificial intelligence (AI) technologies in various fields of activity not only increases the efficiency of solving automation and informatization problems, but also generates new vulnerabilities in the information processing environment and new information threats. AI technologies act as trained agents of human actions and acquire a certain subjectivity in decision-making. These factors affect the security paradigm of automated information systems (AIS) that become carriers of AI technologies.

Face recognition technologies for identifying people are currently the most widespread practical application of AI. Two basic stages of the life cycle of products implementing face recognition technologies can be distinguished: the stage of training the multilayer neural network that serves as the face recognition model, and the stage of product operation within the AIS structure. Each of these stages requires a separate security review.
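
The following minimal Python sketch is illustrative only (the embed_face placeholder, the 128-dimensional embedding, and the 0.6 threshold are assumptions, not the article's method): it separates the output of the training stage, a neural-network embedding model together with an enrolled gallery, from the operation stage, in which the AIS identifies a subject by matching a probe image against that gallery.

    # Illustrative sketch: embed_face stands in for the trained multilayer
    # neural network produced at the training stage of the life cycle.
    import numpy as np
    from typing import Optional

    def embed_face(image: np.ndarray) -> np.ndarray:
        """Placeholder for the trained face recognition model (training-stage artifact)."""
        vec = np.resize(image.astype(np.float64).ravel(), 128)
        return vec / (np.linalg.norm(vec) + 1e-9)

    # Result of stage 1: the model together with an enrolled reference gallery.
    gallery = {
        "subject_001": embed_face(np.random.rand(112, 112)),
        "subject_002": embed_face(np.random.rand(112, 112)),
    }

    def identify(probe_image: np.ndarray, threshold: float = 0.6) -> Optional[str]:
        """Stage 2, operation within the AIS: match a probe face against the gallery."""
        probe = embed_face(probe_image)
        best_id, best_score = None, -1.0
        for subject_id, reference in gallery.items():
            score = float(np.dot(probe, reference))  # cosine similarity of unit vectors
            if score > best_score:
                best_id, best_score = subject_id, score
        return best_id if best_score >= threshold else None  # reject unknown subjects

    print(identify(np.random.rand(112, 112)))

In this sketch the trained model and the enrolled gallery are treated as training-stage artifacts, while the matching and threshold decision belong to the operation stage, which is why the two stages are reviewed separately.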

The purpose of the article is to study the problem of ensuring the security of automated systems (AS) that use face recognition technologies. The object of consideration is the automated system at the life cycle stage at which the AI tools are already trained, are in their normal state, and are included in the technological processes of the automated system's operation.

As a result of the work, an approach was formed to building a typical structural and functional model of an automated system designed to identify subjects by means of face recognition technology. Threats to the security of the automated system (AS) when the face recognition product is used in machine learning mode and in operation mode are considered, and systemic factors affecting security, related to the configuration of the protected object and to the use of artificial intelligence technologies, are determined. A methodological approach to developing solutions when creating an information security system (SOIB) in the conditions of using artificial intelligence technologies is proposed.
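
As a purely illustrative sketch (the component and threat names below are assumed examples, not the article's normative lists), such a structural and functional view can be recorded as a small data model that ties threats to the machine learning mode and the operation mode, the kind of input from which SOIB design decisions would start.

    # Illustrative sketch: assumed component and threat names for a face
    # recognition AS; not a normative list from the article.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Component:
        name: str
        function: str

    @dataclass
    class Threat:
        mode: str          # "machine learning" or "operation"
        description: str

    @dataclass
    class FaceRecognitionAS:
        components: List[Component] = field(default_factory=list)
        threats: List[Threat] = field(default_factory=list)

        def threats_for_mode(self, mode: str) -> List[Threat]:
            return [t for t in self.threats if t.mode == mode]

    model = FaceRecognitionAS(
        components=[
            Component("video source", "captures face images of subjects"),
            Component("recognition model", "trained neural network that identifies subjects"),
            Component("reference database", "enrolled subjects and their templates"),
            Component("decision module", "passes identification results to AS processes"),
        ],
        threats=[
            Threat("machine learning", "poisoning or substitution of the training data set"),
            Threat("machine learning", "tampering with model parameters before deployment"),
            Threat("operation", "adversarial (evasion) examples presented to the model"),
            Threat("operation", "spoofing with photos, video replays or masks"),
        ],
    )

    for threat in model.threats_for_mode("operation"):
        print(threat.description)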

The practical significance of the results is reflected in the methodological recommendations for designing an SOIB for an AS that uses AI.

Pages: 21-36
For citation

Korolev V.I., Keyer P.A. Analysis of approaches to ensuring the security of automated systems with artificial intelligence technologies for face recognition. Highly Available Systems. 2023. V. 19. № 4. P. 21−36. DOI: https://doi.org/10.18127/j20729472-202304-02 (in Russian)

References
  1. Budzko V.I., Belenkov V.G., Korolyov V.I., Mel'nikov D.A. Osobennosti obespecheniya informacionnoj bezopasnosti avtomatizirovannyh sistem, ispol'zuyushchih tekhnologii nejronnyh setej. Sistemy vysokoj dostupnosti. 2023. T. 19. № 3. S. 5−17. DOI: https://doi.org/10.18127/j20729472-202303-01
  2. Kak rabotaet raspoznavanie lic i mozhno li obmanut' etu sistemu. RBK, trend Industriya 4.0. https://trends.rbc.ru/trends/industry/6050ac809a794712e5ef39b7
  3. ISO/IEC/IEEE 29148:2011. Sistemnaya i programmnaya inzheneriya. Processy zhiznennogo cikla.
  4. Sistemy raspoznavaniya lic. Sajt «Vidioglaz». Videonablyudenie i bezopasnost'. https://videoglaz.ru/blog/sistemy-raspoznavaniya-lic-kak-ustroeny-i-gde-primenyayutsya?ysclid=lo47tgbe49261358170
  5. GOST R 59193-2020. Upravlenie konfiguraciej. Osnovnye polozheniya.
  6. GOST R ISO 9001-2015. Nacional'nyj standart Rossijskoj Federacii. Sistemy menedzhmenta kachestva. Trebovaniya.
  7. Korolyov V.I. Processnaya model' monitoringa i reagirovaniya na incidenty informacionnoj bezopasnosti. Informacionnaya bezopasnost': vchera, segodnya, zavtra. Sb. statej po materialam III Mezhdunar. nauchno-prakt. konf. Moskva, 23 aprelya 2020 g. M.: RGGU. 2020. S. 18–25.
  8. Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420 (2018).
  9. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017).
  10. Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016).
  11. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In Security and Privacy (EuroS&P). 2016 IEEE European Symposium on. IEEE, 372–387.
  12. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2574–2582.
  13. Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 39–57.
  14. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations.
  15. Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition. 9185–9193.
  16. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. In International Conference on Learning Representations. 1–10. arXiv:1312.6199 http://arxiv.org/abs/1312.6199.
  17. Nicholas Carlini and David Wagner. 2017. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. ACM, 3–14.
  18. Jamie Hayes and George Danezis. 2018. Learning Universal Adversarial Perturbations with Generative Models. In 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 43–49.
  19. Jonathan Uesato, Brendan O’Donoghue, Aaron van den Oord, and Pushmeet Kohli. 2018. Adversarial risk and the dangers of evaluating against weak attacks. arXiv preprint arXiv:1802.05666 (2018).
  20. Warren He, Bo Li, and Dawn Song. 2018. Decision boundary analysis of adversarial examples.
  21. Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. 2018. Ead: elastic-net attacks to deep neural networks via adversarial examples. In Thirty-second AAAI conference on artificial intelligence.
  22. Anish Athalye and Ilya Sutskever. 2017. Synthesizing Robust Adversarial Examples. arXiv preprint arXiv:1707.07397 (2017). http://arxiv.org/abs/1707.07397.
  23. Wieland Brendel, Jonas Rauber, and Matthias Bethge. 2017. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models. arXiv preprint arXiv:1712.04248 (2017).
  24. Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, and Mani Srivastava. 2018. Genattack: Practical black-box attacks with gradient-free optimization. arXiv preprint arXiv:1805.11090 (2018).
  25. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases. Springer, 387–402.
  26. Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, et al. 2018. Adversarial attacks and defences competition. In The NIPS’17 Competition: Building Intelligent Systems. Springer, 195–231.
  27. Chawin Sitawarin and David Wagner. 2019. On the Robustness of Deep K-Nearest Neighbors. arXiv preprint arXiv:1903.08333 (2019).
  28. Budzko V.I., Mel'nikov D.A., Belenkov V.G. Sposoby parirovaniya atak na avtomatizirovannye sistemy, ispol'zuyushchih specificheskie dlya nejronnyh setej uyazvimosti. Sistemy vysokoj dostupnosti. 2023. T. 19. № 4. S. 5–19. DOI: https://doi.org/10.18127/j20729472-202304-01
  29. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2574–2582.
  30. GOST R 59276-2020. Sistemy iskusstvennogo intellekta. Sposoby obespecheniya doveriya. Obshchie polozheniya. GOST R 59898-2021. Ocenka kachestva sistem iskusstvennogo intellekta. Obshchie polozheniya.
Date of receipt: 27.10.2023
Approved after review: 08.11.2023
Accepted for publication: 20.11.2023