
Building, processing, and sharing 3D photogrammetric data: an archaeological viewpoint

Published on Jul 20, 2021

Abstract. This paper explores how information related to ancient archaeological objects can be visualized and shared using the technique of 3D photogrammetry, a process in which 2D photographs are converted into 3D digital models. It presents a case study of ancient metallurgical objects from Niger, West Africa, to examine the applicability of this technique for creating and documenting 3D models, archiving culturally important data, and sharing research materials digitally. This work is important not only to those who aim to create visualizations of objects relevant to their research, but also to citizen science initiatives that aim to share these data in accessible ways.

Corresponding author: [email protected]

1 Introduction

This paper examines photogrammetric methods of 3D image generation through a case study involving archaeological objects from Marandet, Niger. This ongoing digitization project has spanned 3 years, over 5,000 photographs, and 40 digital models. Digitizing archaeological objects is a useful research practice for three reasons. First, it enables more collaborative research, especially when circumstances make it challenging to travel or work together in person (e.g., during a pandemic), because artifact models can be studied simultaneously in multiple locations. Second, digitizing culturally important objects helps preserve information about them and can pave the way for more non-destructive forms of archaeological research. Archaeological analysis often involves removing artifacts from their original location, or from their regions or countries of origin, for more detailed analysis elsewhere; 3D documentation allows cultural heritage data to be generated and preserved in a digital record that can be shared easily around the world. Lastly, 3D modelling is useful for detailed metric analyses of the object itself. For example, precise volumetric measurements can be produced from these models, providing essential information for archaeological research.

2 Methods

This project was designed to experiment with a variety of photographic and data processing methods. Although the resulting models certainly look similar, they are generated through different techniques that affect the resulting pixel density, scale accuracy, and 3D model hole-filling. The methods also differ in relative ease of use, necessary equipment, and time requirements. A comparison of these methods can help researchers find the solutions that are right for them.

2.1 Photography in a Laboratory Setting

Photographs taken in a laboratory setting were produced using a Canon Mark II digital SLR camera. Adjustable box lights and a white backdrop provided a controlled background and lighting environment for the photography. Objects were placed on a manual turntable that was rotated 10 degrees for each photograph. This allowed a high degree of overlap between photos, which is useful for image matching and model production [1]. Lastly, an adjustable tripod provided camera stability at a fixed depth of field: by measuring the distance to the object and entering the camera settings into a depth-of-field calculator, the same focus can be applied to all parts of the object during photography. These photos are highly detailed (4K or greater pixel resolution) and optimal for processing in both of the photogrammetry software packages evaluated for this project, Meshroom and Agisoft Metashape.

Figure 1. Example of a typical laboratory setting for 3D photogrammetry documentation (Photograph by I. Miller).
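As an illustration of the depth-of-field planning described above, the short Python sketch below computes the near and far limits of acceptable focus from the standard thin-lens approximations. The focal length, aperture, subject distance, and circle of confusion shown are hypothetical values, not the camera settings used in this project.

    # Hedged sketch: depth-of-field limits from the standard thin-lens approximations.
    # All numeric values below are hypothetical, not the camera settings used in this project.

    def depth_of_field(focal_mm, f_number, distance_mm, coc_mm=0.03):
        """Return the (near, far) limits of acceptable focus, in millimetres."""
        # Hyperfocal distance: the focus distance beyond which the far limit reaches infinity.
        hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
        near = distance_mm * (hyperfocal - focal_mm) / (hyperfocal + distance_mm - 2 * focal_mm)
        if distance_mm >= hyperfocal:
            far = float("inf")
        else:
            far = distance_mm * (hyperfocal - focal_mm) / (hyperfocal - distance_mm)
        return near, far

    # Example: 50 mm lens at f/11, subject 600 mm away, full-frame circle of confusion.
    near, far = depth_of_field(focal_mm=50, f_number=11, distance_mm=600)
    print(f"Acceptable focus from {near:.0f} mm to {far:.0f} mm")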

2.2 Photography in a Field/Museum Setting

This project also aimed to evaluate whether photographs adequate for accurate models could be taken outside of the laboratory, in an informal "field" or museum setting, without access to the more expensive and difficult-to-transport equipment. Various phone cameras and a simple headlamp flashlight, with no backdrop, were used for these trials. Instead of a precise turntable and tripod, the photographer walked around the object or incrementally rotated it and took autofocus pictures. These trials were repeated across a variety of Apple, Windows, and Android phones currently available on the US market. Additionally, substantially fewer photos were taken in these field experiments: while the laboratory photography sets comprised around 250-300 photographs each, the field photography sets had around 70-100 photographs.

2.3 Photograph Processing: Open-Source Solutions

Two types of software were used to process these photographs into 3D models. The first was Agisoft Metashape Pro, a paid software package that uses proprietary algorithms. Although Agisoft does record basic metadata about camera parameters and point-cloud characteristics, models made with it cannot be completely replicable because of the manual point-deletion steps the software requires. Agisoft's greatest strength lies in its ability to edit out background noise: parts of each photograph can be masked, which allows the photogrammetry algorithm to focus on the correct pixels to match.

Meshroom is a free, open-source software package available for Windows and Linux. Meshroom can utilize various algorithms for photogrammetric processing and automates each step, so developing standard Meshroom settings for a model creation protocol can make 3D modelling a replicable process. Furthermore, the metadata for each processing step are saved, which allows in-depth exploration of how the models are created. Meshroom does have difficulty processing when there are other objects in the background; however, a portable backdrop was able to alleviate this problem. Multiple other settings were adjusted to help Meshroom further eliminate background "noise" in the data (a scripted sketch applying these settings follows the list and Figure 2 below):

Feature Matching

“Guided Matching” allows for more accurate camera placement at the expense of a longer processing time.

Structure from Motion

Including both the SIFT and AKAZE algorithms has yielded the most efficient balance of accuracy and speed.

Mesh Filtering

“Keep only the largest mesh” helps remove background noise.

Mesh Resampling

This node can be added between mesh filtering and texturing to make the mesh triangles more equivalent in size, reducing processing time.

Texturing

For more detail, texture size should be set to 4096 and the unwrap method should be LSCM.

Figure 2. Optimal Meshroom settings used in this project.
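For readers who prefer to run Meshroom from the command line rather than the GUI, the sketch below shows one way to apply comparable settings through Meshroom's meshroom_batch tool from Python. The flag names, the "NodeType:parameter=value" override syntax, and the parameter names themselves are assumptions based on our reading of Meshroom's command-line help and may differ across versions; the input and output paths are placeholders.

    # Hedged sketch: applying the settings above through Meshroom's meshroom_batch tool.
    # Flag names, the "NodeType:parameter=value" override syntax, and the parameter
    # names themselves are assumptions based on our reading of Meshroom's CLI help;
    # verify with `meshroom_batch --help` for your version. Paths are placeholders.
    import subprocess

    overrides = [
        "FeatureMatching:guidedMatching=True",            # more accurate camera placement
        "FeatureExtraction:describerTypes=sift,akaze",    # extract both SIFT and AKAZE features
        "StructureFromMotion:describerTypes=sift,akaze",  # use both during reconstruction
        "MeshFiltering:keepLargestMeshOnly=True",         # remove background noise
        "Texturing:textureSide=4096",                     # larger texture for more detail
        "Texturing:unwrapMethod=LSCM",                    # unwrap method for more detail
    ]

    subprocess.run(
        [
            "meshroom_batch",
            "--input", "photos/object01",    # placeholder folder of photographs
            "--output", "models/object01",   # placeholder output folder
            "--paramOverrides", *overrides,
        ],
        check=True,
    )

    # Adding a MeshResampling node between filtering and texturing (as noted above)
    # requires a custom pipeline graph saved from the GUI and passed via --pipeline.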

3 Results

3.1 Volumetric Evaluation Using Mesh-to-Mesh Comparison

Using the software CloudCompare, the relative height of the 3D mesh can be calculated, and CloudCompare uses these data to compute the volume enclosed by the object. These data are useful for archival purposes and allow a comparison of 3D volumes generated across multiple processing methods. Across the photography and processing differences tested in this project, the 3D model volumes were 99.7% similar. This demonstrates that informal methods of photography combined with open-source processing can still lead to accurate results.
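CloudCompare performed these computations in this project; purely as an illustration, a comparable volume check can be scripted with the open-source trimesh Python library, which was not part of the original workflow. The file names are placeholders, and volume is only meaningful for watertight meshes.

    # Hedged sketch: comparing the volumes of two reconstructed meshes with trimesh.
    # trimesh was not used in this project; the file names below are placeholders.
    import trimesh

    lab_mesh = trimesh.load("object01_lab.ply")
    field_mesh = trimesh.load("object01_field.ply")

    # Volume is only meaningful for closed (watertight) meshes.
    for name, mesh in [("lab", lab_mesh), ("field", field_mesh)]:
        print(name, "watertight:", mesh.is_watertight, "volume:", mesh.volume)

    # Express similarity as the ratio of the smaller to the larger volume.
    similarity = min(lab_mesh.volume, field_mesh.volume) / max(lab_mesh.volume, field_mesh.volume)
    print(f"Volume similarity: {similarity:.1%}")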

Another method of mesh-to-mesh comparison overlays one model onto the other and calculates their spatial difference at each point. A histogram of these differences is approximately normally distributed, and the mean and standard deviation approaching zero in these comparisons substantiate the prior analysis.
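In this project the distance histogram was produced in CloudCompare. As an illustrative sketch of the same idea, the following trimesh snippet computes vertex-to-surface distances between two models that are assumed to be already aligned in the same coordinate frame; the file names are placeholders.

    # Hedged sketch: vertex-to-surface distances between two aligned meshes, using trimesh.
    # Assumes the meshes are already registered to one another; file names are placeholders.
    import numpy as np
    import trimesh

    reference = trimesh.load("object01_lab.ply")
    compared = trimesh.load("object01_field.ply")

    # Signed distance from each vertex of the compared mesh to the reference surface.
    distances = trimesh.proximity.signed_distance(reference, compared.vertices)

    print("mean difference:", np.mean(distances))
    print("standard deviation:", np.std(distances))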

Figure 3. Volume data using relative height measurement.

Figure 4. Overlap of multiple models.

3.2 Cross-Section Analysis

This project also aimed to test object similarity by comparing cross-sections of different models. CloudCompare was again used to section the models and overlay the sections onto one another. This once again revealed a negligible difference between models generated under laboratory and "field" conditions.

Figure 5. Cross-section overlap of two different models.
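The sections in this project were produced in CloudCompare; as an illustration only, a planar cross-section can also be extracted in code with trimesh. The file name and slicing plane below are placeholders.

    # Hedged sketch: extracting a planar cross-section from a mesh with trimesh.
    # Illustrative only; the file name and slicing plane are placeholders.
    import trimesh

    mesh = trimesh.load("object01_lab.ply")

    # Slice the mesh with a horizontal plane through its centroid.
    section = mesh.section(plane_origin=mesh.centroid, plane_normal=[0, 0, 1])

    if section is not None:
        # Flatten the 3D section to 2D so it can be plotted or overlaid on another section.
        planar, _to_3d = section.to_planar()
        print("total section outline length:", planar.length)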

3.3 Scale Accuracy

As part of testing field methods for processing objects, this project tested informal scale bars in the photographs. The laboratory scale bars were specially ordered to be accurate to 0.001 millimeters; in a field setting, however, one may only have access to a simple ruler or even a pen or pencil. Models created with these informal measurement references were still found to be accurate to 0.1 millimeters, which supports the accuracy of the informal field methods.
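One way such an informal reference can be used is to rescale the reconstructed model after the fact. Below is a minimal sketch with trimesh, assuming a hypothetical pen of known length has been measured between two points on the unscaled model; all values and file names are placeholders.

    # Hedged sketch: rescaling a model using an informal reference of known length.
    # The pen length, measured model distance, and file names are hypothetical placeholders.
    import trimesh

    mesh = trimesh.load("object01_unscaled.ply")

    real_reference_mm = 147.0   # true length of the pen placed next to the object
    measured_in_model = 0.92    # the same length measured between two points, in model units

    mesh.apply_scale(real_reference_mm / measured_in_model)  # model units are now millimetres
    mesh.export("object01_scaled.ply")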

4 Discussion

4.1 Accessible Data Sharing

4.1.1 Online Solutions

The large download size of fully textured 3D models presents challenges for sharing them. Although these files can be compressed and shared via email, this solution poses accessibility challenges for those who do not have software that can view the files. Online hosting can alleviate these issues. For example, the website Sketchfab <https://sketchfab.com> offers model hosting services. Sketchfab also allows digital object identifiers to be associated with the models themselves, which enables the data to be protected online.
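As an example of scripted online sharing, the sketch below posts a model archive to Sketchfab's data API using the Python requests library. The endpoint, header, and field names follow our reading of Sketchfab's API documentation and should be treated as assumptions to verify; the token, file name, and metadata values are placeholders.

    # Hedged sketch: uploading a model archive to Sketchfab's data API with requests.
    # The endpoint, header, and field names are assumptions based on our reading of
    # Sketchfab's API documentation (https://sketchfab.com/developers); verify before use.
    # The token, file name, and metadata values are placeholders.
    import requests

    API_TOKEN = "YOUR_SKETCHFAB_API_TOKEN"

    with open("object01_model.zip", "rb") as model_file:  # zipped mesh plus textures
        response = requests.post(
            "https://api.sketchfab.com/v3/models",
            headers={"Authorization": f"Token {API_TOKEN}"},
            files={"modelFile": model_file},
            data={
                "name": "Example artifact model (placeholder title)",
                "description": "3D photogrammetric model shared for research.",
                "license": "cc-by",  # license slug; check Sketchfab's documented list
            },
        )
    response.raise_for_status()
    print("Uploaded model:", response.json().get("uri"))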

4.1.2 Physical Solutions

For those who have access to a 3D printer, printing offers a valuable way to share data physically. We found that 3D reproductions of artifacts can be created with high degrees of accuracy, allowing tangible reproductions to be shared with other people or used in cultural heritage displays where the original object is not available.

4.2 Comparing Open-Source Software

Overall, we found that Meshroom was a useful way to process our models. Although it has a longer processing time, Meshroom allows truly replicable models to be created: its hands-off approach, which does not require manual point deletion, means that the same setting parameters applied to the same photographs produce an identical model. This ensures that research data can be tested and replicated, providing a strong foundation for accurate model creation and comparison.

4.3 Comparing Photography Methods

Photographs taken in a laboratory setting certainly had advantages in their texture and color. However, as demonstrated by the mesh-to-mesh comparisons, phone cameras can be used to create spatially accurate models useful for research purposes.

5 Conclusion

Archaeological analysis can be done with 3D models, which enables researchers to study an object without physically keeping or owning it. Photography done in informal field or museum settings can be combined with open-source software to create highly accurate digital reproductions of artifacts. The resulting models can be shared online and protected using licensing and digital object identifiers. This accessible and useful workflow supports the use of 3D modelling for preserving archaeological data.

Acknowledgements:

Special thanks to Dr. Susanne Garrett, Kristi Wyatt, and the University of Oklahoma 3D Scanning Laboratory, Bizzell Memorial Library, Norman, Oklahoma.

References

  1. Zhang, Y., Xiong, J. and Hao, L. (2011), Photogrammetric processing of low‐altitude images acquired by unpiloted aerial vehicles. The Photogrammetric Record, 26: 190-211. https://doi.org/10.1111/j.1477-9730.2011.00641.x
