Reconstructing Mount Rushmore from Tourists’ Photographs

One of my first posts here was about creating photorealistic 3D models from tourists’ photographs. The main aim is the possibility of recreating monuments that have existed since the invention of photography but have since been destroyed or damaged. It is also a way of testing the size limits of the technology, which previous work has pushed as far as 10 m by 10 m with reasonable accuracy using aerial photographs. In that post I attempted to create a model of the Sphinx using images found through a simple Google search; I used more than 200 images, with some interesting results.

Although far from complete, the model covered slightly less than half of the Sphinx, which was still enough to draw some conclusions. The main one was that the technology is indeed capable of producing large-scale models from tourists’ photographs; the real issue is understanding what combination of images makes this possible. The images that were stitched together (only 10% of the total) didn’t seem to follow any pattern, and none of the restrictions I tried on the image set produced any improvement.

Hence I decided to move on to another model to work out what combination of photographs is needed. I chose around 50 photographs of the Statue of Liberty covering all angles and attempted to run them, with only 5 of them contributing to a rather unsuccessful mesh. Given this small number it was hard to draw any conclusions, other than the suspicion that more photos improve the quality of the model. One of the main practical issues was actually finding photographs of the whole statue: while many images of the front were available, the sides and back were practically unphotographed, and from some angles the lower part of the body was ignored entirely.

Given this I moved on to another monument, Mount Rushmore. The reason for choosing it is that, as it only spans about 180 degrees, it requires fewer angles to photograph. Also, the aforementioned work with aerial photography seemed to suggest that, given Mount Rushmore’s depth of perspective and the fact that parts of it don’t occlude other areas, all that would be needed is a slight variation from a single position (as in stereophotography). I selected 40 images from a Google search, most of which appear to have been taken from roughly the same viewpoint.

As with the Sphinx, the results are not amazing, but they do narrow down the process. The mesh looks great from around the theoretical viewing point, but the more you rotate it the more warped and distorted it becomes, especially the faces further away. The lack of images from other angles is clearly taking its toll: the program is not managing to create the sides, which come out stretched. Anything visible from the viewing point appears accurate, but the rest is heavily approximated. The idea of a single angle with slight variations therefore has to be discarded.

The good thing, though, is that only 7 images were left unstitched, while 4 were stitched incorrectly. The only explanation I can see for this is the great similarity of the stitched images, which have relatively consistent lighting and colours. This may therefore be the key to the entire process, and would explain why the Sphinx was harder to achieve, given the large differences in colour and light across its photographs. The images that were placed wrongly were taken from a greater distance than the others, which suggests that a relatively consistent distance may be another factor, and the main reason for the failure of the Statue of Liberty.

Finally, the fact that this was done with only 40 images, against the 200 used for the Sphinx, shows that it is not the number of images that determines the model but their quality and consistency.
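If consistent lighting and colour really are the key, one way to test the hunch would be to pre-filter the photos before each run. Here is a minimal sketch of the idea, using OpenCV to score each candidate against a reference shot by colour histogram similarity; the filenames, folder and the cut-off of 40 are placeholders, not my actual setup.

```python
# Sketch: rank candidate photos by colour/lighting similarity to a
# reference image, keeping only the most consistent ones for the next
# photogrammetry run. Filenames below are hypothetical.
import glob

import cv2


def colour_histogram(path, bins=32):
    """Normalised hue/saturation histogram of an image."""
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()


reference = colour_histogram("rushmore_reference.jpg")  # hypothetical file

scores = []
for path in glob.glob("photos/*.jpg"):
    similarity = cv2.compareHist(reference, colour_histogram(path),
                                 cv2.HISTCMP_CORREL)
    scores.append((similarity, path))

# Keep the 40 most colour-consistent photos for the next run.
for similarity, path in sorted(scores, reverse=True)[:40]:
    print(f"{similarity:.3f}  {path}")
```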

The results are of course not yet acceptable for either of the aims of this project, and even though I am slowly narrowing down what makes this work, many more tests are still needed. The next step is the creation of an Access database to try and pin down the key elements that may affect the results.
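As a preview of the kind of record that database might hold, here is a rough sketch of a per-image table, using Python’s built-in sqlite3 purely as a stand-in for Access; all the column names and the example row are provisional.

```python
# Sketch: one record per source image, so failed and successful runs
# can be compared. sqlite3 stands in for the planned Access database;
# every column name here is provisional.
import sqlite3

conn = sqlite3.connect("photogrammetry_tests.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS images (
        filename        TEXT PRIMARY KEY,
        monument        TEXT,     -- e.g. 'Sphinx', 'Mount Rushmore'
        width_px        INTEGER,
        height_px       INTEGER,
        has_exif        INTEGER,  -- 1 if camera details are present
        approx_distance TEXT,     -- 'near' / 'mid' / 'far'
        lighting        TEXT,     -- rough note on light conditions
        stitched        INTEGER   -- 1 if the software placed it correctly
    )
""")
conn.execute(
    "INSERT OR REPLACE INTO images VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("rushmore_001.jpg", "Mount Rushmore", 1600, 1200, 1, "mid", "overcast", 1),
)
conn.commit()
conn.close()
```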

Working on the Sphinx in 123D Catch

The other day I was chatting with one of the guys at work about photogrammetry when an idea came to mind. There are millions of photographs online taken by tourists visiting archaeological sites, and famous monuments like the Parthenon or the Pyramids surely have their photograph taken many times a day. There is therefore an enormous bank of images of archaeology available at our fingertips, and if it is possible to reconstruct features and structures from as few as 20 images, surely there must be a way to reconstruct these monuments through photogrammetry without even needing to be there.

As a result of this pondering I decided to test the idea on a building that would be simple enough to reconstruct, yet famous enough for many images to be found. A few months ago I had started working on a reconstruction of Stonehenge from a series of photographs found online, but with scarce (yet useful) results. Hence I decided to try something more solid, like the Great Sphinx.

I then proceeded to find all the images I could, collecting approximately 200 of them from all angles. Surprisingly, the right side of the monument had far fewer pictures than the rest. I then ran the photos in different combinations in an attempt to reconstruct as accurate a model as possible.
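Since each batch has to be uploaded to 123D Catch by hand, the combinations can at least be drawn systematically rather than by coincidence. A small sketch of that, assuming the photos sit in a sphinx/ folder and that 20-photo batches are a sensible size:

```python
# Sketch: draw reproducible 20-photo batches from the collected images,
# so each manual run can be logged against a known selection. The
# folder name and batch size are assumptions.
import glob
import random

photos = sorted(glob.glob("sphinx/*.jpg"))
random.seed(0)  # fixed seed, so the same batches can be regenerated later

for run in range(5):
    batch = random.sample(photos, 20)  # one candidate combination per run
    print(f"run {run}: {len(batch)} photos")
    for path in batch:
        print("   ", path)
```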

The following is the closest result I have had yet:

[Image: the closest reconstruction result so far]

Agreed, much more work is needed. But honestly I am surprised I managed to get this far! The level of detail is quite striking in places, and if half of it has worked, the other half should also be possible. It is simply a matter of finding the right combination of images.

So far I have tried:

  • Running only the high quality images.
  • Running only the images with camera details.
  • Running 20 photos selected based on the angles.
  • Creating the face first and then attaching the rest of the body.
  • Running only images with no camera details (to see if any results appeared, which they did).
  • Increasing the contrast in the images.
  • Sharpening the images (a sketch of both preprocessing steps follows this list).
  • Manually stitching the remaining images in.
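For the contrast and sharpening steps listed above, here is a minimal sketch of the preprocessing pass using Pillow; the enhancement factor is a guess to tune by eye, and the folder names are placeholders.

```python
# Sketch: contrast boost plus a mild sharpening pass over every photo,
# writing the results to a separate folder. Factor and paths are
# placeholders to tune by eye.
import glob
import os

from PIL import Image, ImageEnhance, ImageFilter

os.makedirs("preprocessed", exist_ok=True)
for path in glob.glob("photos/*.jpg"):
    image = Image.open(path)
    image = ImageEnhance.Contrast(image).enhance(1.3)  # +30% contrast
    image = image.filter(ImageFilter.SHARPEN)          # mild sharpen
    image.save(os.path.join("preprocessed", os.path.basename(path)))
```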

All of these produced results, but all worse than the one above. I have also tried running the images through Agisoft PhotoScan, with similarly poor results.

The model above came about by pure chance, from uploading 100 of the images, so I’m hoping that by going through the selected images I may be able to find a pattern of selection.

I shall post any updates here, so stay tuned!