ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume X-M-2-2025
https://doi.org/10.5194/isprs-annals-X-M-2-2025-293-2025
24 Sep 2025

A Non-Uniform Processing Method (NUPM) for Large Photogrammetry Datasets: Case Study of Shams-ol-Emareh, Golestan Palace (UNESCO World Heritage Site), Iran

Amin Ranjbari, Atiyeh Ghorbani, and Andrea Jalandoni

Keywords: Photogrammetry, Non-uniform 3D Modeling, Big Data, Iran, Cultural Heritage

Abstract. Cultural heritage documentation benefits from high-quality 3D models that are geometrically and aesthetically accurate. Photogrammetry is used worldwide, yet increases in image resolution, image overlap, and the combination of drone and terrestrial imagery have produced large datasets that raise new challenges. Conventional processing of large datasets requires expensive computational systems with high-capacity RAM. It also yields large output files that are difficult to store, share via the cloud, or use in virtual reality, augmented reality, and gaming engines. In this study, we propose a non-uniform processing method (NUPM) to handle the 3D reconstruction of the Shams-ol-Emareh building of the Golestan Palace UNESCO World Heritage Site in Iran. We processed large photogrammetry datasets on a consumer-grade computer to produce compact point clouds and meshes with efficient texture size and resolution without sacrificing quality. The workflow first fragmented the object based on importance and roughness, processed each fragment separately, and then joined the fragments together. Non-uniform processing also meant that points, triangles, and pixels with a low level of importance were deleted from all parts of the object. The result was a point cloud, mesh, and texture in which the spacing between points, as well as the sizes of triangles and pixels, were variable and non-uniform. In some cases, the number of points in the point cloud and triangles in the mesh were reduced by more than 90% and 97%, respectively, leading to usable output sizes without any loss in data quality.
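The core idea of the workflow above — retain dense sampling in important, rough regions and aggressively thin flat, low-importance regions — can be sketched as an importance-weighted point-cloud decimation. This is a minimal illustration, not the authors' implementation: the function name, the per-point importance score in [0, 1], and the retention rates are all assumptions introduced here for clarity.

```python
import numpy as np

def non_uniform_decimate(points, importance, keep_hi=1.0, keep_lo=0.05, seed=0):
    """Keep each point with a probability that scales with its importance.

    points:     (N, 3) array of xyz coordinates
    importance: (N,) scores in [0, 1]; 1 = detailed/rough region, 0 = flat region
    keep_hi / keep_lo: retention rates for the most / least important points
    (hypothetical interface; illustrative of NUPM's idea, not taken from the paper)
    """
    rng = np.random.default_rng(seed)
    # Linearly interpolate the keep probability between keep_lo and keep_hi.
    keep_prob = keep_lo + (keep_hi - keep_lo) * np.clip(importance, 0.0, 1.0)
    mask = rng.random(len(points)) < keep_prob
    return points[mask]

# Toy example: 10,000 points, of which 1,000 belong to a high-detail fragment
# (e.g. ornate facade decoration) and the rest to flat wall surfaces.
pts = np.random.default_rng(1).random((10_000, 3))
imp = np.zeros(10_000)
imp[:1_000] = 1.0
reduced = non_uniform_decimate(pts, imp)
```

Because flat regions are kept at only 5% density while detailed regions are kept in full, the overall point count drops by roughly 85% in this toy case, while the high-importance fragment loses nothing — mirroring the >90% reductions the abstract reports for some fragments.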
