Journal Radioengineering № 6, 2023
Article in issue:
Reconstruction of a video sequence based on a geometric model using a video descriptor
Type of article: scientific article
DOI: https://doi.org/10.18127/j00338486-202306-20
UDC: 004.932
Authors:

V.P. Fedosov1, R.R. Ibadov2, S.R. Ibadov3

1-3 Institute for Radiotechnical Systems and Control, Southern Federal University (Rostov-on-Don, Russia)

Abstract:

In recent years, the reconstruction of visual information has been an active topic in machine vision and in remote sensing with unmanned aerial vehicles (UAVs). The range of UAV applications is quite wide: they can monitor traffic both citywide and in remote areas, track fires in forests or flooding in the regions, deliver goods quickly, and much more. Many intelligent and widely used methods for reconstructing the underlying surface support accident-free UAV flight in an urban environment with visual guidance. The main visualization problems in the practical applications listed above are shadows from buildings, overlapping objects, sun glare, and reflections from the surface, all of which require high-quality image reconstruction. Correct image reconstruction, in turn, depends heavily on correctly classifying the dataset of the underlying surface.

The article proposes an algorithm for reconstructing a video sequence based on a geometric model, using a video descriptor to classify the video into a static background and moving objects. To assess the effectiveness of the new method, a qualitative analysis of the restored underlying surface of an urban environment was carried out. The subject of the study is existing methods and algorithms for constructing descriptors for image classification, as well as methods for reconstructing dynamic images. The object of the study is a set of test video sequences of a terrain map obtained with a UAV. The result of the study is an algorithm for constructing a global video descriptor for object classification and for subsequent reconstruction of the underlying surface based on that descriptor. The novelty of the work is an algorithm that reconstructs a map of the underlying surface from an object-classification descriptor. Root-mean-square error results are reported to evaluate the proposed method on the considered terrain map.
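The root-mean-square error used to evaluate the reconstruction can be sketched as follows. This is a minimal illustration of the standard RMSE metric over image pixels, not the authors' evaluation code; the function name and array shapes are assumptions.

```python
import numpy as np

def rmse(reference: np.ndarray, reconstructed: np.ndarray) -> float:
    """Root-mean-square error between a reference frame and its reconstruction."""
    ref = reference.astype(np.float64)
    rec = reconstructed.astype(np.float64)
    return float(np.sqrt(np.mean((ref - rec) ** 2)))

# Example: a reconstruction that is off by 2 at every pixel has RMSE 2.
a = np.zeros((4, 4))
b = np.full((4, 4), 2.0)
print(rmse(a, b))  # 2.0
```

A lower RMSE indicates that the restored underlying surface is closer, pixel by pixel, to the reference frame.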

A comparative analysis revealed that almost all video-sequence recovery methods have shortcomings, compounded by several factors: a non-stationary background (objects at different distances from the UAV camera may themselves be moving); the difficulty of separating objects from the background when foreground objects move slowly; lighting conditions; and so on. This article therefore proposes a method that overcomes these difficulties and restores the terrain map more accurately.
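The core classification step, separating a static background from moving objects, can be illustrated with a simple temporal-median baseline. This is a generic sketch for intuition only, not the descriptor-based method of the article; the function name, threshold value, and frame layout are assumptions.

```python
import numpy as np

def split_static_and_moving(frames: np.ndarray, thresh: float = 25.0):
    """Classify pixels into a static background and moving objects.

    frames: (T, H, W) grayscale frame stack. The per-pixel temporal median
    serves as the static-background estimate; pixels deviating from it by
    more than `thresh` in a given frame are flagged as moving.
    """
    background = np.median(frames, axis=0)              # (H, W) static estimate
    motion_mask = np.abs(frames - background) > thresh  # (T, H, W) moving pixels
    return background, motion_mask

# Example: one pixel flashes in frame 2; everything else is static.
frames = np.zeros((5, 3, 3))
frames[2, 1, 1] = 100.0
bg, mask = split_static_and_moving(frames)
print(bg[1, 1], mask[2, 1, 1])  # 0.0 True
```

Such a baseline fails precisely in the conditions listed above (slow foreground motion, non-stationary background), which is what motivates the descriptor-based classification proposed in the article.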

Pages: 151-162
For citation

Fedosov V.P., Ibadov R.R., Ibadov S.R. Reconstruction of a video sequence based on a geometric model using a video descriptor. Radiotekhnika. 2023. V. 87. № 6. P. 151−162. DOI: https://doi.org/10.18127/j00338486-202306-20 (in Russian)

References
  1. Zhang H.B., Zhang Y.X., Zhong B., Lei Q., Yang L., Du J.X., Chen D.S. A comprehensive survey of vision-based human action recognition methods. Sensors. 2019. V. 19. № 5. P. 1005.
  2. Mishra O., Kavimandan P.S., Tripathi M.M., Kapoor R., Yadav K. Human Action Recognition Using a New Hybrid Descriptor. Advances in VLSI, Communication, and Signal Processing. Springer. Singapore. 2021. P. 527-536.
  3. Martin P.E., Benois-Pineau J., Péteri R. Fine-grained action detection and classification in table tennis with siamese spatio-temporal convolutional neural network. IEEE International Conference on Image Processing (ICIP). 2019. P. 3027-3028.
  4. Yang Y., Ren H., Li C., Ding C., Yu H. An edge-device based fast fall detection using spatio-temporal optical flow model. 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 2021. P. 5067-5071.
  5. Wang X., Qi C. Saliency-based dense trajectories for action recognition using low-rank matrix decomposition. Journal of Visual Communication and Image Representation. 2016. V. 41. P. 361-374.
  6. Xue F., Ji H., Zhang W., Cao Y. Action Recognition Based on Dense Trajectories and Human Detection. IEEE International Conference on Automation, Electronics and Electrical Engineering (AUTEEE). 2018. P. 340-343.
  7. Yenduri S., Chalavadi V., Mohan C.K. STIP-GCN: Space-time interest points graph convolutional network for action recognition. IEEE International Joint Conference on Neural Networks (IJCNN). 2022. P. 1-8.
  8. Karpagavalli S., Balamurugan V., Kumar S.R. A novel hybrid keypoint detection algorithm for gradual shot boundary detection. IEEE International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE). 2020. P. 1-5.
  9. Liu A.A., Su Y.T., Nie W.Z., Kankanhalli M. Hierarchical clustering multi-task learning for joint human action grouping and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2016. V. 39. № 1. P. 102-114.
  10. Li W., Nie W., Su Y. Human action recognition based on selected spatio-temporal features via bidirectional LSTM. IEEE Access. 2018. V. 6. P. 44211-44220.
  11. Bhorge S.B., Manthalkar R.R. Histogram of directional derivative based spatio-temporal descriptor for human action recognition. IEEE International Conference on Data Management, Analytics and Innovation (ICDMAI). 2017. P. 42-46.
  12. Lin B., Fang B., Yang W., Qian J. Human action recognition based on spatio-temporal three-dimensional scattering transform descriptor and an improved VLAD feature encoding algorithm. Neurocomputing. 2019. V. 348. P. 145-157.
  13. Mironică I., Duţă I.C., Ionescu B., Sebe N. A modified vector of locally aggregated descriptors approach for fast video classification. Multimedia Tools and Applications. 2016. V. 75. № 15. P. 9045-9072.
  14. Idrees H., Zamir A. R., Jiang Y.G., Gorban A., Laptev I., Sukthankar R., Shah M. The THUMOS challenge on action recognition for videos “in the wild”. Computer Vision and Image Understanding. 2017. V. 155. P. 1-23.
  15. Huang R., Xu Y., Hong D., Yao W., Ghamisi P., Stilla U. Deep point embedding for urban classification using ALS point clouds: A new perspective from local to global. ISPRS Journal of Photogrammetry and Remote Sensing. 2020. V. 163. P. 62-81.
  16. Khan S.H., Hayat M., Porikli F. Scene categorization with spectral features. Proceedings of the IEEE international conference on computer vision. 2017. P. 5638-5648.
  17. Burlingham C.S., Heeger D.J. Heading perception depends on time-varying evolution of optic flow. Proceedings of the National Academy of Sciences. 2020. V. 117. № 52. P. 33161-33169.
  18. Ibadov R.R., Fedosov V.P., Ibadov S.R. The method of spatial-temporal reconstruction of dynamic images based on a geometric model with contour and texture analysis. IOP Conference Series: Materials Science and Engineering. IOP Publishing. 2021. V. 1029. № 1. P. 012093.
  19. Ibadov R.R., Fedosov V.P., Ibadov S.R., Kucheryavenko S.V. Recovering lost areas of the underlying image surface using a method based on similar blocks. AIP Conference Proceedings. AIP Publishing LLC. 2019. V. 2188. № 1. P. 050001.
  20. Fedosov V.P., Ibadov R.R., Ibadov S.R., Voronin V.V. Restoration of the Blind Zone of the Image of the Underlying Surface for Radar Systems with Doppler Beam Sharpening. IEEE Radiation and Scattering of Electromagnetic Waves (RSEMW). 2019. P. 424-427.
  21. Malyshev V.A., Mashkov V.G. The propagation speed of an electromagnetic wave in a snow-ice underlying surface. Radiotekhnika. 2020. V. 84. № 3. P. 29-39. DOI: 10.18127/j00338486-202003(05)-05 (in Russian).
  22. Borzov A.B., Likhoedenko K.P., Karakulin Yu.V., Suchkov V.B. Mathematical modeling of input signals of on-board short-range radar systems from underlying surfaces based on their multipoint models. Uspekhi sovremennoi radioelektroniki. 2017. № 4. P. 48-57 (in Russian).
Date of receipt: 21.12.2022
Approved after review: 10.01.2023
Accepted for publication: 28.04.2023