If you’ve been following my latest attempts to recreate famous monuments through photogrammetry using nothing but tourists’ photographs, I finally have something to show for your patience. Before you get your hopes up, it is still not perfect, but it’s a step forward.
The idea behind this is that 123D Catch uses photographs to create 3D models, and while this generally means the user takes their own photographs, it doesn’t necessarily have to be so. The internet is full of images, and while most of them seem to be of cats, there are many images of famous monuments and of archaeological finds. There should therefore be a way to utilise all these photographs found online to digitally recreate the monument in question. The problem is that, although in theory there should be no issue, a great number of elements still affect the final mesh, including lighting, background and editing. The images a single user takes in a short span of time remain consistent because little changes between shots, but a photograph taken in 2012 differs from one taken in 2013 with a different camera, making it hard for the program to recognise similarities. In addition, tourists take photographs without this process in mind, so monuments are often photographed from a limited range of angles, making it hard to achieve 360-degree models.
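To give a rough idea of why this matters (this is only an illustrative sketch, not what 123D Catch actually does internally), photogrammetry packages rely on finding the same feature points across photographs before any 3D reconstruction can happen. A few lines of Python with OpenCV show the kind of matching involved; the file names are hypothetical stand-ins for two tourist photos of the same monument:

```python
import cv2

# Two hypothetical tourist photographs of the same monument, taken a year apart
img_a = cv2.imread("tourist_photo_2012.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("tourist_photo_2013.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors with ORB (SIFT would also work)
orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Brute-force matching with Hamming distance (suited to ORB's binary descriptors)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# The fewer good matches two photos share (different lighting, editing,
# viewpoint), the less likely the software can link them into one model.
print(len(matches), "candidate matches between the two photographs")
```

In practice, differences in lighting, cropping and post-processing reduce the number of usable matches, which is exactly why photos gathered from many strangers over several years are harder to stitch than a single consistent set.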
In order to better understand the issue I started working with a series of monuments, including the Sphinx, Mount Rushmore (see previous articles), Stonehenge and the Statue of Liberty. These, however, are extremely large monuments, which makes it somewhat more difficult for the program to work (although in theory all that should happen is a loss of detail, without affecting the stitching of the images). I therefore decided to apply my usual tactic of working from the small to the large, choosing a much smaller object that would still prove the concept. In this case I chose the Winged Victory of Samothrace.
The reasoning behind the choice is that, unlike other statues, its shape makes it more likely that images have been taken from the sides. It is also on a pedestal, which means the background should remain consistent between shots, and it offers good contrast because the shadows appear amplified by the colour of the stone. I was aware that the back would probably not work due to the lack of joining images, but figured that capturing the front and sides would be sufficient progress.
The results can be seen here: https://sketchfab.com/show/604f938466ad41b8b9299ee692c5d9a3
As you can see, the front and most of the sides have come out with enough detail to call it a success, not least because 90% of the images taken from these angles were stitched together. The back, as predicted, didn’t appear, and there are some expected issues with the top as well. What is even more surprising, however, is that some of the images had monochromatic backgrounds, very different from the bulk. These images still stitched in, suggesting that background is not the key factor with these models. The lighting is relatively consistent across the set, so that may well be the main factor. As for image size and resolution, there doesn’t seem to be much of a pattern.
Overall I was very pleased with the results, and hopefully this will lead to a full 360-degree model as soon as I pinpoint an object with enough back and side images. Still, it does show that it is possible to create models from tourists’ photographs, which would be invaluable for reconstructing those objects or monuments that have unfortunately been destroyed.
I have just lately discovered the world of photogrammetry and art/archaeology. A long time ago, as a kid growing up in rural Texas, the photos in books at the local public library were my point of access to the distant past worlds of ancient Greece, Rome and Egypt.
In the 1990s, limited access to many museum collections came online.
Since then, the Internet has become saturated with photos of everything from nearly every point in time. I’m starting to think that with the wide availability of photos from museum and institution websites, coupled with other collections like Flickr and a quick search of Google Images, any kind of ancient work of art or artifact could be modeled and recreated.
But, as a newcomer to the field of photogrammetry, I’m finding there are some problems for me in particular. The first is software. I’m a Mac person through and through; I just can’t seem to grasp other platforms. I’ve been playing with the sparse selection of titles available for the Mac platform and the results have not always been the most encouraging. That is partly due to the rigid limitations of some software, which requires that all the photos be taken at the same resolution and size by the same camera. It is also the great complexity that some works of art present, with their volumes and shapes plus surfaces and patinas. The other major issue is taking my own photographs, which would mean traveling a long distance to the nearest institution or museum.
It seems like many institutions and museums are quite protective of their ancient works of art, citing a byzantine maze of rules and laws for why reproduction is not allowed. Now many of those same institutions are moving to restrict photography. I think it is more than a coincidence since with enough effort a person “could” digitally recreate a work of art and make a reasonably accurate and detailed replica with the resulting file if they had access to good photos and a 3D printer.
My stumbling block is finding friendly Mac software flexible enough to cope with a wide range of images of different resolutions from different sources. So far, I’ve not had very good luck there. I don’t even think there is any open-source or Windows software that can do that, at least not yet.
I only just found your blog within the past hour, so I have a great deal of reading to do, but I was wondering: have you talked to anyone who, like myself, is just a regular person with an armchair fascination for the past and wants to put photogrammetry to work the way I am trying to do?
Hi Jerry,
Thanks for your response. The idea of creating digital models from photographs of museum artefacts is indeed a fascinating one, and at least in theory it should be possible with the right preparation and tools. Having said that, in practice it will require a lot of trial and error, as there are so many factors to consider with photogrammetry. I am unfortunately a die-hard Windows user and a great fan of 123D Catch, but there are many other programs that give you as good (or better) results and that should be compatible with Macs too. I was reading this article (http://makezine.com/2013/08/21/diy-photogrammetry-open-source-alternatives-to-123d-catch/), which suggests a few more open-source programs you may be interested in. I am currently experimenting with VisualSFM and Meshlab, with good results, so I can recommend those. What I would suggest, though, is to start off with basic photogrammetry to test the program out, just to understand how the software reacts to certain situations, and then move on to more complex projects like the ones you are talking about. That way it will be easier to understand what needs to be altered if the results aren’t exactly what they should be.
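If it helps with the mixed-resolution problem you mention, here is a very rough sketch of how one might normalise a folder of photos gathered from different sources before loading them into VisualSFM or any other package. This is just an assumption about a sensible pre-processing step, not part of any program’s official workflow, and the folder names are placeholders:

```python
from pathlib import Path
from PIL import Image  # Pillow

SRC = Path("collected_photos")    # hypothetical folder of downloaded images
DST = Path("normalised_photos")   # folder to feed to VisualSFM (or similar)
DST.mkdir(exist_ok=True)
MAX_SIDE = 3000                   # cap the longest edge so sizes are comparable

for i, path in enumerate(sorted(SRC.glob("*"))):
    try:
        img = Image.open(path).convert("RGB")
    except OSError:
        continue                  # skip anything that is not a readable image
    img.thumbnail((MAX_SIDE, MAX_SIDE))   # resize in place, keeping aspect ratio
    img.save(DST / f"img_{i:04d}.jpg", "JPEG", quality=92)
```

Having everything as JPEGs of roughly comparable size won’t fix lighting or viewpoint differences, but it does remove one of the variables that seems to trip up the more rigid packages.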
As an idea, though, I like it a lot. It has a lot of potential and would really make a difference in how we display and preserve archaeological evidence. So best of luck with your endeavour, and if you need any suggestions just let me know.
Thanks again,
Rob Barratt