Journal Highly Available Systems, № 3, 2023
Article in issue:
The polygonal surface reconstruction method during dynamic three-dimensional reconstruction
Type of article: scientific article
DOI: https://doi.org/10.18127/j20729472-202303-05
UDC: 004.92
Authors:

Yu.A. Maniakov1, P. O. Arkhipov2, P. L. Stavtsev3

1–3 Orel Branch of FRC CSC RAS (Orel, Russia)
1 maniakov_yuri@mail.ru; 2 arpaul@mail.ru; 3 pavelstavcev@gmail.com

Abstract:

Three-dimensional reconstruction is becoming an increasingly popular technology in the field of computer vision. This is due to the growing need for detailed three-dimensional models. Such models are used, for example, in special effects for movies, in computer games, in robotics, geodesy, and architecture, where manual model creation is either impossible or extremely labor-intensive.

Real-time three-dimensional reconstruction is actively developing and is in high demand in robotics and SLAM (Simultaneous Localization and Mapping) technology. This field requires the development of efficient and high-performance algorithms to solve a wide range of tasks.

The use of a voxel-based representation for three-dimensional reconstruction is quite widespread, primarily because of its simplicity, its noise resistance, and the ease of merging different parts and models into a single scene. However, from the perspective of visual quality and memory usage, a polygonal representation is more suitable.
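
As an illustration of why the voxel representation is easy to update and merge, the following minimal sketch (a simplification in the spirit of KinectFusion [2], not the article's implementation) fuses depth frames into a truncated signed distance (TSDF) voxel volume by a per-voxel weighted average. The grid resolution, voxel size, truncation distance, and pinhole intrinsics are illustrative assumptions:

# Minimal sketch of voxel-based (TSDF) fusion: each new depth frame simply
# updates a weighted running average per voxel, which is why merging frames
# and model parts is straightforward in this representation.
import numpy as np

RES = 64                 # voxels per axis (assumed)
VOXEL_SIZE = 0.05        # metres per voxel (assumed)
TRUNC = 0.15             # truncation distance of the signed distance field (assumed)

tsdf = np.ones((RES, RES, RES), dtype=np.float32)     # truncated signed distances
weight = np.zeros((RES, RES, RES), dtype=np.float32)  # accumulated integration weights

def integrate(depth, fx, fy, cx, cy):
    """Fuse one depth frame (camera at the origin, looking along +z)."""
    ii, jj, kk = np.indices((RES, RES, RES))
    # Voxel centres in camera coordinates.
    x = (ii + 0.5) * VOXEL_SIZE - RES * VOXEL_SIZE / 2
    y = (jj + 0.5) * VOXEL_SIZE - RES * VOXEL_SIZE / 2
    z = (kk + 0.5) * VOXEL_SIZE
    # Project voxel centres into the depth image with pinhole intrinsics.
    u = np.round(fx * x / np.maximum(z, 1e-6) + cx).astype(int)
    v = np.round(fy * y / np.maximum(z, 1e-6) + cy).astype(int)
    valid = (z > 0) & (u >= 0) & (u < depth.shape[1]) & (v >= 0) & (v < depth.shape[0])
    d = np.where(valid, depth[np.clip(v, 0, depth.shape[0] - 1),
                              np.clip(u, 0, depth.shape[1] - 1)], 0.0)
    sdf = np.clip((d - z) / TRUNC, -1.0, 1.0)          # truncated signed distance
    upd = valid & (d > 0) & (d - z > -TRUNC)
    w_new = weight + upd                               # unit weight per observation
    tsdf[:] = np.where(upd, (tsdf * weight + sdf) / np.maximum(w_new, 1), tsdf)
    weight[:] = w_new

# Example: fuse a synthetic frame in which every pixel lies 1.5 m from the camera.
integrate(np.full((120, 160), 1.5, dtype=np.float32), fx=100.0, fy=100.0, cx=80.0, cy=60.0)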

Methods for building a polygonal surface are divided into direct and indirect. Direct methods construct the polygonal mesh directly within the specified boundary, while indirect methods often use a pre-built tetrahedral mesh with subsequent conversion of tetrahedral cells into hexahedral ones, or transform some other initial discrete representation of the region.

Among direct methods, the most well-known are the marching cubes algorithm and ray casting. The marching cubes algorithm does not accurately represent sharp edges and corners of the model, is less suitable for linear surfaces, and requires a significant amount of memory.
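
For reference, the sketch below shows the per-cell step of the marching cubes algorithm [1]: each voxel cell is classified by the signs of the scalar field at its eight corners, and surface vertices are placed on the crossed edges by linear interpolation. The full 256-entry triangle lookup table is omitted, so this is only an illustration of the principle:

# Per-cell step of marching cubes: corner-sign classification plus edge interpolation.
import numpy as np

# Corner offsets of one voxel cell (unit cube).
CORNERS = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                    [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]])

def cube_index(sdf_values, iso=0.0):
    """8-bit cell index: bit i is set when corner i lies inside the surface."""
    index = 0
    for i, v in enumerate(sdf_values):
        if v < iso:
            index |= 1 << i
    return index

def edge_vertex(p0, p1, v0, v1, iso=0.0):
    """Linearly interpolate the iso-crossing between two corner points."""
    t = (iso - v0) / (v1 - v0) if v1 != v0 else 0.5
    return np.asarray(p0, float) + t * (np.asarray(p1, float) - np.asarray(p0, float))

# Example: one cell with a single corner below the iso-level.
values = [-0.2, 0.3, 0.4, 0.5, 0.6, 0.4, 0.5, 0.3]
print(cube_index(values))                                        # -> 1 (only corner 0 inside)
print(edge_vertex(CORNERS[0], CORNERS[1], values[0], values[1])) # vertex on edge 0-1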

Among indirect methods, the Eppstein algorithm and H-Morph are notable. Applying indirect methods to construct a polygonal model from a set of points extracted from a voxel model is challenging because such a point set is sparse. The resulting polygonal surface contains many uncovered areas and only slightly improves the visual perception of the scene compared to a raw point set. This is because indirect methods, when deciding whether to add a face, operate only on local geometry and cannot assess the necessity of a particular face relative to the voxel volume. Furthermore, indirect methods have high computational complexity, which limits their application in real-time conditions.

To address these issues, a direct method for reconstructing the surface as a polygonal model is proposed. The method is intended for dynamic reconstruction, when data from optical sensors arrive during the reconstruction process. It works alongside an incremental three-dimensional reconstruction algorithm, updating the polygonal model after each new frame is integrated into the voxel volume.

The method takes into account the specifics of indoor reconstruction, where scenes contain large flat regions representing both structural elements (floors, walls, doors) and interior objects (cabinets, tables). It is proposed to analyze such regions separately, which reduces the computational complexity of surface reconstruction, since triangulating a flat fragment is a standard computational geometry task with many efficient solutions. The developed method therefore consists of several stages: identifying large flat fragments in the scene, triangulating the flat fragments, and triangulating the remaining fragments of the model.
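
The following sketch illustrates the first two stages on a synthetic example: a large flat fragment is detected with a RANSAC-style plane fit (cf. [6, 7]) and then triangulated as a two-dimensional problem (cf. [10]). The thresholds and the use of Delaunay triangulation here are assumptions chosen for the illustration and are not taken from the article:

# Plane detection on a point set followed by 2D triangulation of the flat fragment.
import numpy as np
from scipy.spatial import Delaunay

def ransac_plane(points, iters=200, dist_thresh=0.02, rng=np.random.default_rng(0)):
    """Return (normal, d, inlier_mask) of the best plane n·x + d = 0."""
    best = (None, None, np.zeros(len(points), bool))
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

def triangulate_flat_fragment(points, normal):
    """Project a planar fragment to 2D and triangulate it there."""
    # Build an orthonormal basis (u, v) of the plane.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    pts2d = np.c_[points @ u, points @ v]
    return Delaunay(pts2d).simplices          # triangle indices into `points`

# Tiny usage example on a noisy horizontal patch (e.g. a floor fragment).
pts = np.c_[np.random.rand(500, 2), 0.01 * np.random.randn(500)]
n, d, mask = ransac_plane(pts)
tris = triangulate_flat_fragment(pts[mask], n)
print(mask.sum(), "inliers,", len(tris), "triangles")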

Pages: 57-64
For citation

Maniakov Yu.A., Arkhipov P.O., Stavtsev P.L. The polygonal surface reconstruction method during dynamic three-dimensional reconstruction. Highly Available Systems. 2023. V. 19. № 3. P. 57−64. DOI: https://doi.org/10.18127/j20729472-202303-05 (in Russian)

References
  1. Lorensen W., Cline H. Marching Cubes: a high resolution 3D surface construction algorithm. Computer Graphics (SIGGRAPH 87 Proceedings). 1987. P. 163–169.
  2. Newcombe R.A., Izadi S., Hilliges O., Molyneaux D., Kim D., Davison A.J., Kohli P., Shotton J., Hodges S., Fitzgibbon A. KinectFusion: Real-time dense surface mapping and tracking. 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR '11) Proceedings. Washington, DC, USA: IEEE, 2011. P. 127–136.
  3. Berger M., Tagliasacchi A., Seversky L., Alliez P., Levine J., Sharf A., Silva C. State of the Art in Surface Reconstruction from Point Clouds. Eurographics 2014-State of the Art Reports. 2014. P. 161–185.
  4. Eppstein D. Linear complexity hexahedral mesh generation. In Symposium on Computational Geometry. 1996. P. 58–67.
  5. Owen S.J., Saigal S. H-Morph: An Indirect Approach to Advancing Front Hex Meshing. International Journal for Numerical Methods in Engineering. 2000. P. 289–312.
  6. Schnabel R., Wahl R., Klein R. Efficient RANSAC for point-cloud shape detection. Computer Graphics Forum. 2007. V. 26. P. 214–226.
  7. Xu B., Jiang W., Shan J., Zhang J., Li L. Investigation on the Weighted RANSAC Approaches for Building Roof Plane Segmentation from LiDAR Point Clouds. Remote Sensing. 2016. V. 8. № 1. P. 5.
  8. Yakovlev O.A., Gasilov A.V. Sozdanie realistichnyh naborov dannyh dlya algoritmov trekhmernoj rekonstrukcii s pomoshch'yu virtual'noj s"emki komp'yuternoj modeli. Sistemy i sredstva informatiki. 2016. T. 26. № 2. S. 98–107 (in Russian).
  9. Svidetel'stvo № 2019663718 (RF). Programmnoe obespechenie sistemy obsledovaniya pomeshchenij i trekhmernoj rekonstrukcii pomeshchenij s pomoshch'yu avtonomnogo mobil'nogo robota (RT-Rec): svidetel'stvo o gosudarstvennoj registracii programmy dlya EVM. O.P. Arhipov, O.A. Yakovlev, A.I. Sorokin, Yu.A. Man'yakov, P.Yu. Butyrin. 2019 (in Russian).
  10. Berg M. de, Cheong O., Kreveld M. van, Overmars M. Computational Geometry: Algorithms and Applications. Santa Clara, CA, USA: Springer-Verlag TELOS, 2008. 386 p.
  11. Stanford 3D Indoor Scene Dataset (S3DIS). URL: http://buildingparser.stanford.edu/dataset.html
Date of receipt: 17.08.2023
Approved after review: 28.08.2023
Accepted for publication: 30.08.2023