Method of geometry reconstruction from a set of RGB images using differentiable rendering and visual hull
- Authors: Lysykh A.I., Zhdanov D.D., Sorokin M.I.
- Affiliations:
- ITMO University
- Issue: No. 3 (2025)
- Pages: 40–53
- Section: COMPUTER GRAPHICS AND VISUALIZATION
- URL: https://vietnamjournal.ru/0132-3474/article/view/688119
- DOI: https://doi.org/10.31857/S0132347425030044
- EDN: https://elibrary.ru/GREBJA
- ID: 688119



Abstract
Differentiable rendering offers a modern solution to the problem of reconstructing geometry from a set of RGB images without expensive capture equipment. The drawbacks of this class of methods are the geometric distortions that can arise during optimization and the high computational cost. Modern differentiable rendering methods compute and use two types of gradients: silhouette gradients and normal gradients; most of the distortions that appear during geometry optimization are caused by updates to the parameters associated with silhouette gradients. This paper examines how the efficiency of reconstruction methods based on differentiable rendering can be improved by splitting the reconstruction process into two stages: initialization and optimization. The first stage builds a visual hull of the object being reconstructed. It automates the selection of the initial geometry and allows the next stage to start under two conditions: the object's silhouettes are already reproduced from all observation points, and the topology of the reconstructed object is equivalent to that of the true object. The second stage is a geometry optimization loop that relies on these conditions and consists of four steps: image rendering, loss computation, gradient computation, and geometry update. Because the contours of the initial and reference geometry already match, silhouette gradients are no longer required. This substantially reduces the number of errors introduced during optimization and lowers the computational complexity of the method by eliminating the loss evaluation, gradient computation, and parameter updates associated with object silhouettes. Testing and analysis of the results showed higher reconstruction accuracy at reduced mesh resolution, a shorter total running time compared with similar methods, and up to a two-fold speed-up of the optimization steps.
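The two-stage pipeline described above starts from a silhouette-consistent initial shape. As a rough illustration of the first stage only, the sketch below carves a voxel visual hull from binary silhouette masks and per-view projection matrices. It is a minimal, generic sketch assuming simple pinhole cameras; the function name, parameters, and NumPy-based approach are illustrative choices, not the authors' implementation.

```python
import numpy as np

def carve_visual_hull(masks, projections, bounds, resolution=64):
    """Carve a voxel visual hull from binary silhouette masks (illustrative sketch).

    masks:       list of (H, W) boolean arrays, one silhouette per view
    projections: list of (3, 4) world-to-pixel camera projection matrices
    bounds:      ((xmin, ymin, zmin), (xmax, ymax, zmax)) of the voxel grid
    resolution:  number of voxels along each axis
    """
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    axes = [np.linspace(lo[i], hi[i], resolution) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    # Homogeneous coordinates of all voxel centers, shape (N, 4).
    pts = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

    occupied = np.ones(len(pts), dtype=bool)
    for mask, P in zip(masks, projections):
        h, w = mask.shape
        proj = pts @ P.T  # (N, 3) homogeneous pixel coordinates
        z = proj[:, 2]
        with np.errstate(divide="ignore", invalid="ignore"):
            px = proj[:, 0] / z
            py = proj[:, 1] / z
        in_frame = (z > 0) & (px >= 0) & (px < w) & (py >= 0) & (py < h)
        # A voxel survives only if it projects inside the silhouette in every view.
        hit = np.zeros(len(pts), dtype=bool)
        ix = px[in_frame].astype(int)
        iy = py[in_frame].astype(int)
        hit[in_frame] = mask[iy, ix]
        occupied &= hit
    return occupied.reshape(resolution, resolution, resolution)
```

The resulting occupancy grid would then be converted to a surface mesh (for example, by an isosurface extraction step) whose silhouettes already match all views, so the second-stage optimization loop can update the geometry using only the remaining, non-silhouette gradients.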

About the authors
A. Lysykh
ITMO University
Corresponding author
Email: lysykhai@ya.ru
ORCID ID: 0000-0002-2437-5275
Russia, Kronverksky pr., 49, Saint Petersburg, 197101
D. Zhdanov
ITMO University
Email: ddzhdanov@mail.ru
ORCID ID: 0000-0001-7346-8155
Russia, Kronverksky pr., 49, Saint Petersburg, 197101
M. Sorokin
ITMO University
Email: vergotten@gmail.com
ORCID ID: 0000-0001-9093-1690
Russia, Kronverksky pr., 49, Saint Petersburg, 197101
