The Winged Victory of Samothrace: Analysis of the Images

This is a continuation of my blog from yesterday, so I suggest you read that first. One of the things I’ve been working on over the past month is creating photogrammetric models of monuments using nothing but tourists’ photographs. After many attempts, my latest test, using the Winged Victory of Samothrace as a subject, produced sufficiently good results that I decided to analyse the image data to pinpoint which kinds of photographs actually stitch together in 123D Catch, and which ones cause problems. This way we can choose a limited number of good photographs, rather than a vast number of mixed ones.


In order to understand the patterns behind the stitching mechanism, I created an Excel file recording certain details for every image: width and height, whether the background is consistent with the majority of the photographs, whether the colour is consistent, whether the lighting is the same, the file size, and the position of the object. The last one is based on the idea that to make 3D models we need a series of shots from 8 positions, each at a 45-degree angle from the next. Thinking of it as a compass, with North corresponding to the back of the object, position 1 is South, 2 is SW, 3 is W, and so on round to 8 at SE.
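
To make the layout concrete, here is a minimal sketch in Python of the kind of record kept for each photograph; the class and field names are just illustrative shorthand of mine, since the real data lives in the Excel file:

    # Sketch only: the ImageRecord class and field names are invented to
    # mirror the spreadsheet columns described above.
    from dataclasses import dataclass

    # Position 1 is South (the front of the statue, since North is the back);
    # positions then step round in 45-degree increments up to 8 at SE.
    POSITION_TO_COMPASS = {1: "S", 2: "SW", 3: "W", 4: "NW",
                           5: "N", 6: "NE", 7: "E", 8: "SE"}

    @dataclass
    class ImageRecord:
        filename: str
        width: int                    # pixel dimensions as recorded
        height: int
        file_size: float              # as recorded in the spreadsheet
        background_consistent: bool   # matches the majority of photos?
        colour_consistent: bool
        lighting_consistent: bool
        position: int                 # 1-8, see POSITION_TO_COMPASS
        stitched: bool                # did 123D Catch include it in the mesh?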

The first thing I noticed, which ties in with what I said yesterday, is the lack of photographs in positions 4 and 6 (NW and NE), which of course meant that all the images from the back (position 5) also had trouble stitching. The first problem for the model, therefore, is the need for enough images from all angles, without which missing parts are inevitable. This is made harder by the fact that these are typically the positions tourists don’t photograph.

Having concluded that this was the reason images from positions 4, 5 and 6 failed to stitch, I removed them from the list so the data for the remaining positions would be more accurate.

I then averaged the height, width and file size of the stitched images and of the unstitched images, and compared the two groups. The former had an average height of 1205.31, width of 906.57 and file size of 526.26, while the latter had an average height of 929.07, width of 668.57 and file size of 452.57. The differences are enough to suggest that large, good-quality images have a higher chance of being stitched. This may seem obvious, but some images that have not stitched are actually larger and of higher quality than some that have, so this can’t be the only factor. Nor is the difference large enough to suggest it is even a key factor.
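
Out of interest, the comparison itself is easy to reproduce; below is a minimal sketch assuming the spreadsheet were exported to a CSV file with these (made-up) column names:

    # Sketch only: assumes a CSV export of the spreadsheet with columns
    # "stitched" (yes/no), "height", "width" and "file_size".
    import csv
    from statistics import mean

    def compare_groups(path="victory_images.csv"):
        groups = {"yes": [], "no": []}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                groups[row["stitched"]].append(row)
        for label, rows in groups.items():
            print("stitched" if label == "yes" else "unstitched",
                  {col: round(mean(float(r[col]) for r in rows), 2)
                   for col in ("height", "width", "file_size")})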


The next step was comparing the percentage of stitched images with an abnormal background, colour or lighting against the same percentages for the unstitched ones, which gave even more limited results. In the former, the background was not consistent in 15% of the images, the colour in 63% and the lighting in 47%, while in the latter the background figure was 0%, colour 50% and lighting 50%. Taken at face value, these results suggest that photographs have a higher chance of being included if they differ from the rest, which goes against everything we know so far about 123D Catch. I therefore suspect that the program allows a good deal of tolerance for these elements, and that I may have been too harsh in judging the differences.
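
The percentages were worked out the same way; here is a sketch, again assuming the same hypothetical CSV export with yes/no columns for the three consistency checks:

    # Sketch only: percentage of images in each group whose background,
    # colour or lighting was flagged as inconsistent ("no") in the sheet.
    import csv

    def abnormal_percentages(path="victory_images.csv"):
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        for stitched in ("yes", "no"):
            group = [r for r in rows if r["stitched"] == stitched]
            for col in ("background_consistent", "colour_consistent",
                        "lighting_consistent"):
                share = sum(r[col] == "no" for r in group) / len(group)
                print(stitched, col, f"{share:.0%}")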

Having concluded little with this methodology, I decided to look at each individual unstitched image to see what elements could be responsible. This proved much more successful. I managed to outline three categories of unstitched images which between them account for every single failed photograph, without any overlap with the stitched ones:

  • Distant shots: some photographs were taken from much further away from the statue than the others, and while a certain degree of variation is acceptable, in these it was too extreme.


  • Excessive shadows: although, as we saw above, lighting didn’t appear to be a decisive factor overall, some of the images had extreme contrast, with very unnatural light. They were practically at the edge of the scale, and while some variation is acceptable, these were well beyond it (a rough way of screening for such images is sketched after this list).


  • Background: this is an interesting case in which the background is not different from that of the other photographs, but is very similar in colour to the object itself. Because of this similarity it is difficult for the program to recognise any edges, which makes it impossible to stitch the image correctly.
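
Of the three categories, the excessive-shadows one is the easiest to imagine screening for automatically before uploading anything. The sketch below is purely speculative and not part of my actual workflow: it scores each image by the spread of its grey levels and flags the outliers.

    # Speculative pre-screening: flag images whose grey-level spread is far
    # from the rest of the set, a rough proxy for harsh light and shadows.
    from PIL import Image, ImageStat

    def contrast_score(path):
        grey = Image.open(path).convert("L")
        return ImageStat.Stat(grey).stddev[0]

    def flag_harsh_lighting(paths, z=2.0):
        scores = {p: contrast_score(p) for p in paths}
        mean = sum(scores.values()) / len(scores)
        spread = (sum((s - mean) ** 2 for s in scores.values())
                  / len(scores)) ** 0.5
        return [p for p, s in scores.items() if abs(s - mean) > z * spread]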

Therefore creating 3D models from tourists’ photographs is entirely possible, as long as we have sufficient angles and photographs taken from a similar distance, without harsh lighting and with a background that contrasts with the object.


The Winged Victory of Samothrace Reconstructed Digitally Using Tourists’ Photographs.


If you’ve been following my latest attempts to recreate famous monuments through photogrammetry using nothing but tourists’ photographs, I finally have something to show for your patience. Before you get your hopes up, it is still not perfect, but it’s a step forward.

The idea behind this is that 123D Catch uses photographs to create 3D models, and while this generally means the user takes their own photographs, it doesn’t necessarily have to be so. The internet is full of images, and while most of them seem to be of cats, there are many of famous monuments and archaeological finds. There ought, therefore, to be a way of using all these photographs found online to digitally recreate the monument in question. The problem is that, although in theory there should be no issue, a great number of elements still affect the final mesh, including lighting, background and editing. While the images a single user takes in a short span of time remain consistent because little changes between shots, a photograph taken in 2012 differs from one taken in 2013 with a different camera, making it hard for the program to recognise similarities. In addition, tourists take photographs without this process in mind, so monuments are often photographed only from limited angles, making it hard to achieve 360-degree models.

In order to better understand the issue I started working with a series of monuments, including the Sphinx, Mount Rushmore (see previous articles), Stonehenge and the Statue of Liberty. These, however, are extremely large monuments, which makes it somewhat more difficult for the program to work (although in theory all that should happen is a loss of detail, without affecting the stitching of the images). I therefore decided to apply my usual tactic of working from the small to the large, choosing a much smaller object that would still prove the idea. In this case I chose the Winged Victory of Samothrace.


The reasoning behind the choice is that, unlike other statues, its shape makes it more likely that images have been taken from the sides. It is also on a pedestal, which means the background should remain consistent between shots. It also offers good contrast, because the shadows appear amplified by the colour of the stone. I was aware that the back would probably not work due to the lack of connecting images, but figured that modelling the front and sides alone would be sufficient progress.

The results can be seen here: https://sketchfab.com/show/604f938466ad41b8b9299ee692c5d9a3


As you can see, the front and most of the sides have come out with enough detail to call it a success, not least because 90% of the images taken from these angles were stitched together. The back, as predicted, didn’t appear, and there are some expected issues with the top as well. What is more surprising is that some of the images had monochromatic backgrounds, very different from the bulk, and they still stitched in, suggesting that background is not the key factor for these models. The lighting is relatively consistent across the set, so that may be the main factor. As for image size and resolution, there doesn’t seem to be much of a pattern.

Overall I was very pleased with the results, and hopefully it’ll lead to a full 360-degree model as soon as I pinpoint an object with enough back and side images. Still, this does show that it is possible to create models from tourists’ photographs, which would be a great way to reconstruct those objects or monuments that have unfortunately been destroyed.


Reconstructing Mount Rushmore from Tourists’ Photographs


One of my first posts here was about the creation of photorealistic 3D models using tourists’ photographs, the main aim of which is the possibility of recreating monuments that have stood since the invention of photography but have since been destroyed or damaged. It is also a way of testing the size limits of the technology, which previous work has pushed as far as 10 m by 10 m with relative accuracy using aerial photographs. In that post I attempted to create a model of the Sphinx, using more than 200 images found online with a simple Google search, with some interesting results.

Although far from complete, I managed to create slightly less than half of the Sphinx, which still allowed me to draw some conclusions. The main one was that the technology is indeed capable of making large-scale models from tourists’ photographs; the only issue is understanding what combination of images allows this. The images that had been stitched together (only 10% of the total) didn’t seem to follow a pattern, and a series of restrictions on the image selection produced no improvement.

Hence I decided to move on to another model to understand what combination of photographs is needed. I chose around 50 photographs of the Statue of Liberty from all angles and attempted to run them, with only 5 producing a rather unsuccessful mesh. Given this small number it was hard to draw any conclusions, other than the suspicion that more photos can improve the quality of the model. One of the main issues, though, was actually finding photographs of the whole statue. While many images of the front were available, the sides and back were practically unphotographed, and the lower part of the body was entirely ignored from some angles.

Given this I moved on to another monument, Mount Rushmore. The reason for choosing it was that, since it only spans 180 degrees, it requires fewer angles to photograph. Also, the aforementioned work with aerial photography seemed to suggest that, given Mount Rushmore’s depth of perspective and the fact that parts of it don’t occlude other areas, all that would be needed is slight variation from a single position (as in stereophotography). I selected 40 images from a Google search, most of which appear to have been taken from the same viewpoint.


As with the Sphinx, the results are not amazing, but they do narrow down the process. The mesh looks great from around the theoretical viewpoint, but the more you rotate it the more warped and distorted it becomes, especially the faces further away. The lack of images from other angles is clearly taking its toll, as the program fails to build the sides, which come out stretched. Anything that can be seen from the viewpoint appears accurate, but the rest is only a rough approximation. Therefore the idea of a single angle with slight variations has to be discarded.


The good thing, though, is that only 7 images were left unstitched, while 4 were stitched incorrectly. The only explanation I can see for this is the great similarity of the stitched images, which have relatively consistent lighting and colours. This may therefore be the key to the entire process, and would explain why the Sphinx was harder to achieve, given the great differences in colour and light between those photographs. The images that were placed wrongly were taken from a greater distance than the others, which suggests that a relatively consistent distance may be another factor, and the main reason the Statue of Liberty attempt failed.
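
If consistent lighting and colour really are the key, the selection could in principle be automated. Below is a purely speculative sketch (the function names and threshold are invented, not part of my process) that keeps only the photographs whose average colour sits close to the median of the whole set:

    # Speculative filter: keep photos whose mean RGB is within a given
    # distance of the per-channel median of the whole set.
    from statistics import median
    from PIL import Image, ImageStat

    def mean_rgb(path):
        return ImageStat.Stat(Image.open(path).convert("RGB")).mean  # [R, G, B]

    def filter_consistent(paths, max_distance=40.0):
        means = {p: mean_rgb(p) for p in paths}
        centre = [median(m[i] for m in means.values()) for i in range(3)]
        def distance(m):
            return sum((a - b) ** 2 for a, b in zip(m, centre)) ** 0.5
        return [p for p, m in means.items() if distance(m) <= max_distance]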

Finally, the fact that this was done with only 40 images, compared with the 200 used for the Sphinx, shows that it is not the number of images that determines the model but their quality and consistency.

The results are of course not yet acceptable for either of the aims of this work, and even though I am slowly narrowing down the ideas needed to make it succeed, it still requires many more tests. The next step is the creation of an Access database to try to narrow down the key elements that may affect the results.


Working on the Sphinx in 123D Catch

The other day I was talking with one of the guys at work and, while chatting about photogrammetry, an idea came to mind. There are millions of photographs online taken by tourists visiting archaeological sites, and famous monuments like the Parthenon or the Pyramids surely have their photograph taken many times a day. There is therefore an enormous bank of images of archaeology available at our fingertips, and if it is possible to reconstruct features and structures using as few as 20 images, surely there must be a way to reconstruct these monuments through photogrammetry without even needing to be there.

As a result of this pondering I decided to test the idea by choosing a monument that would be simple enough to reconstruct, yet famous enough for many images to be found. A few months ago I had started working on a reconstruction of Stonehenge from a series of photographs found online, but with scarce (yet useful) results. Hence I decided to try something solid, like the Great Sphinx.

I then proceeded to find as many images as possible, collecting approximately 200 of them from all angles. Surprisingly, the right side of the monument had far fewer pictures than the rest. I then ran the photos through the program in different combinations, in an attempt to reconstruct as accurate a model as possible.

The following is the closest result I have had yet:

[Image: the closest result so far]

Agreed, much more work is needed. But honestly I am surprised I managed to get this far! The level of detail is quite remarkable in some places, and if half of it has worked, the other half should also be possible. It is simply a matter of finding the right combination of images.

So far I have tried:

  • Running only the high quality images.
  • Running only the images with camera details.
  • Running 20 photos selected based on the angles.
  • Creating the face first and then attaching the rest of the body.
  • Running only images with no camera details (to see if any results appeared, which they did).
  • Increasing the contrast in the images.
  • Sharpening the images (a rough batch version of these two steps is sketched after this list).
  • Manually stitching the remaining images in.
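
For reference, here is one way the contrast and sharpening steps could be batched with Pillow; this is only a sketch, and the enhancement factor is an arbitrary guess rather than the value actually used:

    # Sketch of a batch contrast-and-sharpen pass (factors are guesses).
    from pathlib import Path
    from PIL import Image, ImageEnhance, ImageFilter

    def preprocess(src_dir, dst_dir, contrast=1.3):
        out = Path(dst_dir)
        out.mkdir(parents=True, exist_ok=True)
        for path in Path(src_dir).glob("*.jpg"):
            img = Image.open(path)
            img = ImageEnhance.Contrast(img).enhance(contrast)  # boost contrast
            img = img.filter(ImageFilter.SHARPEN)               # sharpen edges
            img.save(out / path.name)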

All of these have produced results, but poorer than the one above. I have also tried running the images through Agisoft PhotoScan, with similarly poor results.

The model above came about almost by coincidence, by uploading 100 of the images, so I’m hoping that by going through the images it selected I may be able to find a pattern of selection.

I shall post any updates here, so stay tuned!