Journal Highly Available Systems №4, 2022
Article in issue:
The method of integrating three-dimensional models into a scene during three-dimensional reconstruction
Type of article: scientific article
DOI: https://doi.org/10.18127/j20729472-202204-02
UDC: 004.92
Authors:

Yu.A. Maniakov1, P.O. Arkhipov2, P.L. Stavtsev3

1–3 Orel Branch of Federal Research Center «Computer Science and Control» of the RAS (Orel, Russia)
 

Abstract:

Three-dimensional reconstruction is one of the most popular technologies in computer vision, and its popularity continues to grow. This is due to the growing need for large numbers of detailed 3D models, for example in movie special effects, computer games, robotics, geodesy and architecture, where creating them manually is impossible or very difficult.

Real-time three-dimensional reconstruction is an actively developing area, most in demand in robotics and SLAM. It requires solving a fairly wide range of tasks, in particular the development of efficient, high-performance algorithms.

Due to the fundamental limitations of three-dimensional reconstruction methods, the resulting models may contain errors of different classes, such as noise and visual incompleteness. The appearance of noise in the reconstruction is inevitable and is associated both with the source of input data (image sampling, digital camera noise) and directly with the reconstruction algorithm (stereo matching errors, localization errors).

The incompleteness of the reconstruction may be caused, for example, by the inability to shoot a certain area due to mutual overlap of objects, insufficient lighting, as well as the presence of uniformly colored surfaces.

Another important problem for real-time reconstruction is the need to optimize algorithms and reduce their computational complexity in order to increase speed.

There is quite a lot of research aimed at solving these problems. In particular, the problems of noise reduction during reconstruction are considered both at the stage of image acquisition and at the stage of processing a point cloud in three-dimensional space.

The problem of increasing the computational efficiency of reconstruction algorithms is also highly relevant. Studies show that it can be addressed at different stages of reconstruction: during stereo matching and during construction of the voxel model.

The aim of this research is to reduce the level of noise and the incompleteness of the geometry in three-dimensional reconstruction results, as well as to increase computational efficiency and reconstruction speed.

As a result of the research, a method is presented that uses a database of reference three-dimensional models, integrating them into the three-dimensional scene obtained as a result of reconstruction.

The method of integrating three-dimensional models into the scene corrects the three-dimensional reconstruction using a database of three-dimensional models. This process is carried out in two stages: localization of objects in the reconstruction, followed by direct integration of each object.
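As an illustration only, the two stages can be sketched as below. All names (`localize_objects`, `integrate`, the bounding-box descriptor) are hypothetical: the abstract does not specify how objects are matched to reference models, so a crude bounding-box-extents descriptor stands in for the real localization criterion.

```python
def extents(points):
    """Axis-aligned bounding-box extents of a point list: a crude shape descriptor."""
    return tuple(max(p[i] for p in points) - min(p[i] for p in points) for i in range(3))

def descriptor_distance(a, b):
    """Squared Euclidean distance between two descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(points):
    """Centroid of a point list, used here as the object's placement coordinates."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def localize_objects(scene_fragments, reference_db):
    """Stage 1: match each reconstructed fragment to the closest reference model.

    reference_db maps a model identifier to its precomputed descriptor.
    """
    matches = []
    for frag in scene_fragments:
        desc = extents(frag)
        best = min(reference_db, key=lambda name: descriptor_distance(desc, reference_db[name]))
        matches.append((best, frag))
    return matches

def integrate(matches):
    """Stage 2: replace each fragment with a (model identifier, position) record."""
    return [(model_id, centroid(frag)) for model_id, frag in matches]
```

A fragment whose extents resemble the "chair" reference is thus replaced by the chair model placed at the fragment's centroid; a real implementation would also estimate orientation and scale.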

This approach is based on the fact that a room interior is formed mainly of typical objects. Although there is significant morphological diversity among objects of the same class, they often share a common "skeleton".

The presence of a common "skeleton" among objects of the same class makes it possible to analyze objects using a relatively small collection of reference models.

In addition to correcting the reconstruction, this approach saves memory during storage: instead of the scene fragment representing an object, it is enough to store the object identifier and its coordinates. The resulting object markup also simplifies the construction of formal models of the room, such as an architectural plan.
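A toy comparison (illustrative only, not from the paper) of the storage cost of a raw scene fragment versus an identifier-plus-coordinates record; the identifier "chair_042" and the sizes are made up for the sketch:

```python
import json

# A raw reconstructed fragment: thousands of 3D points (dummy values here).
fragment = [(i * 0.001, i * 0.001, 0.0) for i in range(10_000)]

# The compact record suggested by the approach: a database identifier
# plus placement coordinates.
record = {"id": "chair_042", "position": (1.25, 0.80, 0.00)}

raw_size = len(json.dumps(fragment).encode())
record_size = len(json.dumps(record).encode())
# The record is orders of magnitude smaller than the raw point list.
```

For a 10,000-point fragment the serialized record is a few dozen bytes against hundreds of kilobytes of raw points, which is the memory saving the abstract refers to.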

Pages: 16-27
For citation

Maniakov Yu.A., Arkhipov P.O., Stavtsev P.L. The method of integrating three-dimensional models into a scene during three-dimensional reconstruction. Highly Available Systems. 2022. V. 18. № 4. P. 16−27. DOI: https://doi.org/10.18127/j20729472-202204-02 (in Russian)

References
  1. Nu-lee Song, Jin-Ho Park, Gye-Young Kim. Robust 3D Reconstruction Through Noise Reduction of Ultra-Fast Images. Advances in Computer Science and Ubiquitous Computing, Jan. 2021. P. 509–514.
  2. Katja Wolff, Changil Kim, Henning Zimmer, Christopher Schroers, Mario Botsch, Olga Sorkine-Hornung, Alexander Sorkine-Hornung. Point Cloud Noise and Outlier Removal for Image-Based 3D Reconstruction. 2016 Fourth International Conference on 3D Vision (3DV). 2016. P. 118–127, DOI: 10.1109/3DV.2016.20.
  3. Yao Duan, Chuanchuan Yang, Hao Chen, Weizhen Yan, Hongbin Li. Low-complexity point cloud denoising for LiDAR by PCA-based dimension reduction. Optics Communications. 2021. V. 82.
  4. Qingxiong Yang, Liang Wang, Ruigang Yang, Shengnan Wang, Miao Liao, David Nistér. Real-time Global Stereo Matching Using Hierarchical Belief Propagation. 17th British Machine Vision Conference (BMVC), 2006. P. 989-998. DOI: 10.5244/C.20.101.
  5. Zhang Yu., Garcia S., Xu W., Shao T., Yang Y. Efficient voxelization using projected optimal scanline. Graphical Models. 2018. V. 100. P. 61–70. ISSN 1524-0703.
  6. MeshLab – URL: https://www.meshlab.net
  7. Batenkov A.A., Man'yakov Yu.A., Gasilov A.V., YAkovlev O.A. Matematicheskaya model' optimal'noj triangulyacii. Informatika i ee primeneniya. 2018. T. 12. № 2. S. 69–74 (in Russian).
  8. Shamos M.I., Hoey D. Geometric intersection problems. 17th Annual Symposium on Foundations of Computer Science proceedings. Houston, TX, USA. 1976. P. 208–215.
  9. Dai A.A., Chang A.X., Savva M., Halber M., Funkhouser T., Nießner M. ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) proceedings. Honolulu, HI, USA. 2017. P. 2432–2443.
  10. Hua B., Pham Q., Nguyen D. T., Tran M., Yu L., Yeung S. SceneNN: A Scene Meshes Dataset with aNNotations. 2016 Fourth International Conference on 3D Vision (3DV) proceedings. Stanford. CA. USA. 2016. P. 92–101.
  11. Armeni I., Sener O., Zamir A.R., Jiang H., Brilakis I., Fischer M., Savarese S. 3D Semantic Parsing of Large-Scale Indoor Spaces. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) proceedings. Las Vegas, NV, USA. 2016. P. 1534–1543.
  12. Yakovlev O.A., Gasilov A.V. Sozdanie realistichnyh naborov dannyh dlya algoritmov trekhmernoj rekonstrukcii s pomoshch'yu virtual'noj s"emki komp'yuternoj modeli. Sistemy i sredstva informatiki. 2016. T. 26. № 2. S. 98–107 (in Russian).
  13. Certificate № 2019663718 (RF). Software of the system for room inspection and three-dimensional reconstruction of rooms using an autonomous mobile robot (RT-Rec): certificate of state registration of a computer program / O.P. Arkhipov, O.A. Yakovlev, A.I. Sorokin, Yu.A. Maniakov, P.Yu. Butyrin; applicant and rights holder FRC CSC RAS. 2019 (in Russian).
  14. Goodfellow I., Bengio Y., Courville A. Deep Learning. MIT Press, 2016, 800 p.
  15. Boykov Y., Veksler O., Zabih R. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI). 2001. V. 23. № 11. P. 1222–1239.
  16. Boykov Y., Kolmogorov V. An experimental comparison of Min-Cut/Max-Flow algorithms for energy minimization in vision. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI). 2004. V. 26. № 9. P. 1124–1137.
  17. Chang A.X., Funkhouser T., Guibas L., Hanrahan P., Qixing Huang, Zimo Li, Savarese S., Savva M., Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. ArXiv, abs/1512.03012 URL: https://arxiv.org/abs/1512.03012v1
  18. Nistratov A.A. Analytical prediction of the integral risk of violation of the acceptable performance of the set of standard processes in a life cycle of highly available systems. Part 1. Mathematical models and methods. Highly Available Systems. 2021. V. 17. № 3. P. 16−31. DOI: https://doi.org/10.18127/j20729472-202103-02 (in Russian).
Date of receipt: 05.10.2022
Approved after review: 19.10.2022
Accepted for publication: 21.11.2022