Researchers Create 3D Model of Ancient Stone Sculpture From a Single 134-Year-Old Photo

An ancient stone relief depicting seated and standing figures in traditional attire, with trees and an architectural structure in the background. The team used a photograph of a temple relief taken in the 1800s, like this one, to create a 3D model.

Scientists have created a 3D model of a buried relief sculpture by using a photo taken in the 1800s and novel AI technology.

The researchers from Ritsumeikan University in Japan developed a neural network capable of looking at a standard 2D photograph of a 3D object and producing a digital reconstruction in 3D.

In this case, the team looked at a photo showing figures carved into stone, known as a relief, that is buried in Borobudur Temple in Indonesia — a UNESCO World Heritage Site and the world’s largest Buddhist temple compound.

According to Gizmodo, the black-and-white photo was taken 134 years ago, when the relief was temporarily exposed during reconstruction work. The reliefs were photographed before being buried again, and they have remained hidden for the past century.

A grayscale depth map of the temple relief, which gives the figures an X-ray-like, skeletal appearance.

Other research teams have attempted 3D reconstructions from such photos but were unsuccessful because of the heavy compression of depth values in relief images.

“Previously, we proposed a 3D reconstruction method for old reliefs based on monocular depth estimation from photos. Although we achieved 95% reconstruction accuracy, finer details such as human faces and decorations were still missing,” explains Professor Satoshi Tanaka from Ritsumeikan University.

“This was due to the high compression of depth values in 2D relief images, making it difficult to extract depth variations along edges. Our new method tackles this by enhancing depth estimation, particularly along soft edges, using a novel edge-detection approach.”

An infographic titled "Reconstructing and Preserving Cultural Heritage from a Single Old Photo," summarizing how the team's multi-task neural network compares to traditional 3D reconstruction methods.

The team’s multi-task neural network performs three tasks: semantic segmentation, depth estimation, and soft-edge detection, which work together to enhance the accuracy of the 3D reconstruction.
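To make the multi-task idea concrete, here is a minimal sketch in PyTorch of a network with one shared encoder and three task-specific heads. The architecture, channel counts, and class counts are hypothetical illustrations, not the paper's actual design.

```python
# A minimal multi-task network sketch: one shared encoder, three heads.
# All layer sizes here are assumptions for illustration only.
import torch
import torch.nn as nn

class MultiTaskReliefNet(nn.Module):
    def __init__(self, num_seg_classes: int = 5, num_edge_classes: int = 4):
        super().__init__()
        # Shared feature encoder (placeholder; the paper's backbone may differ).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Three task heads operating on the same shared features.
        self.seg_head = nn.Conv2d(64, num_seg_classes, 1)    # semantic segmentation
        self.depth_head = nn.Conv2d(64, 1, 1)                # per-pixel depth
        self.edge_head = nn.Conv2d(64, num_edge_classes, 1)  # soft-edge classes

    def forward(self, photo: torch.Tensor) -> dict:
        feats = self.encoder(photo)
        return {
            "segmentation": self.seg_head(feats),
            "depth": self.depth_head(feats),
            "soft_edges": self.edge_head(feats),
        }

# Usage: a single grayscale photo in, three spatially aligned predictions out.
model = MultiTaskReliefNet()
out = model(torch.randn(1, 1, 256, 256))
print({k: tuple(v.shape) for k, v in out.items()})
```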

The core strength of the network lies in its depth estimation, achieved through a novel soft-edge detector and an edge-matching module. Unlike conventional binary edge classification, the soft-edge detector treats edge detection in relief data as a multi-class classification task.

Edges in relief images represent not only changes in brightness but also variations in curvature, known as “soft edges”. The soft-edge detector estimates the degree of “softness” of these edges, enhancing depth estimation.
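One way to picture the multi-class soft-edge idea is to quantize local depth variation into several "softness" classes instead of a binary edge/non-edge label. The sketch below does this with NumPy; the thresholds, class count, and gradient-magnitude criterion are assumptions for illustration, not the paper's detector.

```python
# A sketch of multi-class soft-edge labeling: quantize local depth variation
# into softness classes. Thresholds and class count are hypothetical.
import numpy as np

def soft_edge_classes(depth_map: np.ndarray,
                      bins=(0.01, 0.05, 0.15)) -> np.ndarray:
    """Return an integer class map: 0 = flat, higher classes = sharper edges."""
    gy, gx = np.gradient(depth_map.astype(np.float64))
    magnitude = np.hypot(gx, gy)  # local depth variation per pixel
    # np.digitize maps each magnitude into one of len(bins)+1 classes.
    return np.digitize(magnitude, bins)

# Usage: a synthetic depth map with one gentle rise and one sharp step.
demo = np.zeros((8, 8))
demo[:, 2:4] = 0.02   # gentle rise -> low, "soft" edge class
demo[:, 4:] = 0.5     # sharp step -> highest class along the boundary
print(soft_edge_classes(demo))
```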

The edge-matching module comprises two soft-edge detectors, which extract multi-class soft-edge maps from the input relief photo and from the estimated depth map. By matching the two maps and detecting the differences between them, the network focuses on the soft-edge regions, resulting in more detailed depth estimation.
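A rough sketch of that matching step: compare the soft-edge class distributions derived from the photo and from the predicted depth, and build a per-pixel weight map that grows where they disagree. The function name, disagreement measure, and weighting are assumptions, not the paper's module.

```python
# A sketch of edge matching: weight pixels where the photo-derived and
# depth-derived soft-edge maps disagree. Details here are hypothetical.
import torch
import torch.nn.functional as F

def edge_match_weights(photo_edges: torch.Tensor,
                       depth_edges: torch.Tensor) -> torch.Tensor:
    """photo_edges, depth_edges: (B, C, H, W) soft-edge class logits.
    Returns a (B, 1, H, W) weight map, larger where the two maps disagree."""
    p = F.softmax(photo_edges, dim=1)
    d = F.softmax(depth_edges, dim=1)
    # Per-pixel total-variation distance between the two class distributions.
    disagreement = 0.5 * (p - d).abs().sum(dim=1, keepdim=True)
    return 1.0 + disagreement  # never below 1, so no region is ignored

# Usage: random logits for 4 soft-edge classes.
w = edge_match_weights(torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64))
print(w.shape, float(w.min()), float(w.max()))
```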

Finally, the network optimizes a dynamic edge-enhanced loss function that combines the losses from all three tasks, producing clear and detailed 3D images of reliefs.
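The article does not spell out how the dynamic edge-enhanced loss is constructed, so the following is only a plausible sketch: a weighted sum of the three task losses, with the depth term re-weighted per pixel by the edge-matching map so that soft-edge regions count more. The coefficients and weighting scheme are assumptions.

```python
# A sketch of a combined, edge-weighted multi-task loss. The coefficients
# and per-pixel weighting are assumptions, not the paper's formulation.
import torch
import torch.nn.functional as F

def total_loss(pred: dict, target: dict, edge_weights: torch.Tensor,
               coef=(1.0, 1.0, 1.0)) -> torch.Tensor:
    """pred/target: dicts with 'segmentation', 'depth', 'soft_edges'.
    edge_weights: (B, 1, H, W) map emphasizing soft-edge regions."""
    seg = F.cross_entropy(pred["segmentation"], target["segmentation"])
    # Depth error is re-weighted per pixel so soft-edge regions count more.
    depth = (edge_weights * (pred["depth"] - target["depth"]).abs()).mean()
    edge = F.cross_entropy(pred["soft_edges"], target["soft_edges"])
    return coef[0] * seg + coef[1] * depth + coef[2] * edge

# Usage with toy tensors (4 segmentation classes, 4 soft-edge classes):
pred = {"segmentation": torch.randn(1, 4, 32, 32),
        "depth": torch.randn(1, 1, 32, 32),
        "soft_edges": torch.randn(1, 4, 32, 32)}
target = {"segmentation": torch.randint(0, 4, (1, 32, 32)),
          "depth": torch.randn(1, 1, 32, 32),
          "soft_edges": torch.randint(0, 4, (1, 32, 32))}
print(total_loss(pred, target, torch.ones(1, 1, 32, 32)))
```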

You can read the team’s paper here.


Image credits: Pan et al. 2024
