Reconstructing Mount Rushmore from Tourists’ Photographs

Image

One of my first posts here was about the creation of photorealistic 3D models from tourists’ photographs. The main aim is the possibility of recreating monuments that have existed since the invention of photography but have since been destroyed or damaged. It is also a way of testing the size limits of the technology, which previous work has pushed as far as 10 m by 10 m with reasonable accuracy using aerial photographs. In that post I attempted to create a model of the Sphinx, using images I had found online with a simple Google search. I used more than 200 images, with some interesting results.

Although far from complete, I managed to create slightly less than half of the Sphinx, which nonetheless allowed me to draw some conclusions. The main one was that the technology is indeed capable of producing large scale models from tourists’ photographs; the real issue is understanding which combination of images allows this. The images that were stitched together (only 10% of the total) didn’t seem to follow a pattern, and restricting the image set in various ways produced no improvement.

Hence I decided to move on to another monument to work out what combination of photographs is needed. I chose around 50 photographs of the Statue of Liberty from all angles and attempted to run them, with only 5 contributing to a rather unsuccessful mesh. Given this small number it was hard to draw any conclusions, other than the suspicion that more photos can improve the quality of the model. One of the main issues, though, was actually finding photographs of the entire statue. While many images of the front were available, the sides and back were practically un-photographed, and from some angles the lower part of the body was entirely ignored.

Given this I moved on to another monument, Mount Rushmore. The reason for choosing it was that, since it only spans 180 degrees, it requires fewer angles to photograph. Also, the aforementioned work with aerial photography seemed to suggest that, given the depth of perspective of Mount Rushmore and the fact that parts of it don’t occlude other areas, all that would be needed is a slight variation from a single position (as in stereophotography). I selected 40 images from a Google search, most of which appear to have been taken from the same viewing point.

Image

As with the Sphinx, the results are not amazing, but they do narrow down the process. The mesh looks great from around the theoretical viewing point, but the more you rotate it the more warped and distorted it becomes, especially the faces further away. The lack of multiple images from other angles is clearly taking its toll, as the program is not managing to create the sides, which end up stretched. Anything that can be seen from the viewing point appears to be accurate, but the rest is heavily approximated. Therefore the idea of a single angle with slight variations has to be discarded.

Image

The good thing though is that only 7 images were left unstitched, while 4 were stitched incorrectly. The only reason I can see for this is the great similarity of the stitched images, which have relatively consistent lighting and colours. This may therefore be the key to the entire process, and it would explain why the Sphinx was harder to achieve, given the great differences in colour and light between its photographs. The images that were placed wrongly were taken from a greater distance than the others, which may suggest that a relatively consistent distance is another factor, and the main reason the Statue of Liberty attempt failed.
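
As a rough illustration of this consistency idea (not part of 123D Catch itself, just a way of screening images before uploading), here is a minimal Python sketch using the Pillow library that computes the average brightness and colour of each candidate photograph so obvious outliers can be set aside. The folder name and the outlier threshold are hypothetical.

```python
# Sketch: flag photographs whose lighting differs markedly from the rest of the set.
from pathlib import Path
from PIL import Image, ImageStat

def colour_stats(path):
    """Return (average brightness, (r, g, b) channel means) for one photograph."""
    with Image.open(path) as img:
        rgb = img.convert("RGB")
        r, g, b = ImageStat.Stat(rgb).mean               # per-channel means
        brightness = ImageStat.Stat(rgb.convert("L")).mean[0]
    return brightness, (r, g, b)

stats = {p.name: colour_stats(p) for p in Path("rushmore_photos").glob("*.jpg")}
mean_brightness = sum(b for b, _ in stats.values()) / max(len(stats), 1)

for name, (brightness, rgb) in sorted(stats.items()):
    # The threshold of 30 brightness levels is an arbitrary starting point.
    flag = "  <-- possible outlier" if abs(brightness - mean_brightness) > 30 else ""
    print(f"{name}: brightness {brightness:.0f}, RGB {tuple(round(c) for c in rgb)}{flag}")
```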

Finally, the fact that this was done with only 40 images, compared with the 200 used for the Sphinx, shows that it is not the number of photographs that determines the model but their quality and consistency.

The results of course are not yet acceptable for either of the aims mentioned above, and even though I am slowly narrowing down what makes this work, it still requires many more tests. The next step is the creation of an Access database to try and pin down the key elements that may affect the results.
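
The database itself will be in Access, but purely as an illustration of the kind of record I have in mind, here is a minimal sketch using SQLite in Python. The column names are my own guesses at the factors worth tracking (lighting, distance, angle, whether the image stitched), and the example row is invented.

```python
# Sketch: a small table for recording which image properties go with successful stitching.
import sqlite3

conn = sqlite3.connect("photogrammetry_tests.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS images (
        id          INTEGER PRIMARY KEY,
        monument    TEXT,     -- e.g. 'Mount Rushmore', 'Sphinx'
        filename    TEXT,
        brightness  REAL,     -- average brightness, as in the earlier script
        distance    TEXT,     -- rough distance class: 'near', 'mid', 'far'
        angle       TEXT,     -- approximate viewing angle
        stitched    INTEGER   -- 1 = stitched correctly, 0 = failed or misplaced
    )
""")
conn.execute(
    "INSERT INTO images (monument, filename, brightness, distance, angle, stitched) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("Mount Rushmore", "IMG_0001.jpg", 142.0, "far", "frontal", 1),
)
conn.commit()

# Which factors correlate with successful stitching?
for distance, success_rate in conn.execute(
    "SELECT distance, AVG(stitched) FROM images GROUP BY distance"
):
    print(distance, success_rate)
```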

Image

Google Sketchup and Archaeology: Reconstructing the Parthenon

Parthenon Rendered 6

As part of my second year studying archaeology in Cardiff, I was required to write 5000 words on a topic of my choosing for a project called an Independent Study. Having only recently completed the first two models I ever made for a documentary on the medieval site of Cosmeston, South Wales, I decided it would be a great idea to further investigate this aspect of archaeology. I therefore decided to write about the use of new media in archaeology, and look into 3d modelling, photogrammetry and documentary making.

As I wanted to try out these new programs as much as possible I decided to test myself with something large and complex, yet well-known enough to allow me access to a large database of plans, images and measurements. For this reason I chose to reconstruct the Parthenon, using a great program called Google Sketchup.

Image

Google Sketchup is still my number one choice when dealing with reconstruction from plans, due to the simplicity of the program and the great results that can still be achieved. For less experienced users the simple mechanism of pushing and pulling surfaces is ideal, and it’s easy to pick up how it all works thanks to the user-friendly interface.

The great advantage of reconstructing the Parthenon was that I could copy and paste most of the features, which meant I didn’t have to create every single column and brick. But it also meant that I had to quickly learn one of the key elements of Sketchup, which is making components. By making components you don’t end up with thousands of lines, each a separate entity, but with a series of elements that work as a whole, meaning you can easily select them and copy/paste, move or rotate them. It also means they don’t pull everything they are connected to when you place them somewhere else. Hence I quickly learnt that having a series of loose lines to select and copy each time is much less efficient than having the same lines as a separate entity called “roof_tile” which you can copy with two clicks. I also learnt the great advantage of this when making my second model, a Greek house, for which I decided to simply import the Parthenon roof component and edit it to make it smaller, rather than make the roof from scratch.

Image

The second thing I soon found out about the Parthenon is that it’s a large thing made of small things. For example, the ceiling wasn’t a big mass of stone, but more likely a decorated series of boxes and beams, which I couldn’t for the life of me create while I had the rest of the walls in place. Hence I started doing the sneaky thing of editing parts of the Parthenon in a work area far away from the actual model, then making them into a component, moving them into position and scaling them to fit. The result was as good as if it had been built in place, but with far fewer issues.

Image

Thirdly, experimenting. This is what makes you a good 3d modeller, whether the subject is archaeological or not. It’s all well and good following a set of guidelines, but what happens when you rotate a corner, or draw a square on a brick and pull it out? Nine times out of ten I tried something new and ruined the Parthenon; I then pressed Ctrl+Z or reopened the saved file and tried something else. One time out of ten I discovered a new and amazing trick, which would save me hours of work or make the model better. The more curious you are with 3d modelling, the more you learn.

Finally, reconstructing something the size of the Parthenon showed me that archaeology and 3d modelling really do work hand in hand. If a teacher is explaining to a class what the Parthenon looked like, why not show it in 3d? If we are debating what it would have looked like when it was painted, using the paint tool in Sketchup instantly shows the results. If we are thinking about lighting and darkness levels within the inner chamber, V-Ray for Sketchup allows us to try it out for any day of the year or time of day.

Therefore, here is the link to the finished model (even though Sketchfab does it no justice at all): https://sketchfab.com/show/4EKCWxne5OUE4rBj2QRJbsHNj0L

Image

And if you are interested in starting modelling, but are not sure about it, download Sketchup and give it a go! I assure you it is easy and entertaining, and you’ll learn modelling in no time.

8 Reasons Why We Should Be Using Photogrammetry in Archaeology

arrow head 1

If you are an archaeologist you should be using Photogrammetry because:

  1. It is easy to use: Unless you are dealing with something extremely large or extremely complex, Photogrammetry has an extremely high success rate. When it was still based on camera calibration, complex calculations and precise measuring were necessary, but with more modern programs often all that is needed is to take the photos and upload them. Decent models are easy to produce, and more complex ones are achievable without issue with some experience. Overall, anyone could potentially use it in small scale archaeology with no experience, and at a large scale with limited training.
  2. It is quick: With a good internet connection I can probably model a single feature in under 10 minutes. And by single feature I mean anything from a posthole to a stone spread. In situ finds could be recorded in no time, cutting back on the need to plan everything by hand. A complex stone wall could be preserved for the archaeological record simply by taking a few dozen photographs, and sections can be recorded with much more realism than any hand drawn plan can achieve. A rough sketch of the section would of course help the interpretation, but the measuring time would go down considerably, as it would be possible to measure directly on the model using Meshlab (see the sketch after this list).
  3. It is practical: Laser scanning is the current fashion in archaeology, but the problem with laser scanning is that you need expensive equipment, you need to carry that equipment around, and you need to train specific people to use the machines and the software. Photogrammetry requires nothing more than a camera and a laptop, which are usually much more accessible on site. If a delicate object that may not survive excavation is found, it is much easier to take some photographs with the site camera and process them later than to bring in the equipment to laser scan it.
  4. It is accurate: As shown in one of my recent posts, the accuracy of 123D Catch is extremely good for this type of process. Although it cannot compete with that of laser scanning, an error margin of less than 1% means that any task required for interpretation can be carried out without having to worry about the results. The level of accuracy is ideal for presentation, for interpretation and for recording.
  5. It is photorealistic: No other technology gives you the photorealism that can be achieved by Photogrammetry. Because the models are built from the photographs themselves, and the finished product includes material (.mtl) and texture files that map those photographs back onto the mesh, the surfaces of the features can be recorded as they appear in real life. The models seem realistic because they are not a simple collection of points, but a combination of points and images.
  6. It is entertaining: Archaeology is not simply about recording the past, it is also about getting the information out there, to the general public. It is important that anyone interested in an archaeological site has the opportunity to learn about it. Academic texts are amazing when carrying out research, but for the average archaeological enthusiast, who lives in a now mostly digital world, texts can come across as confusing. Photogrammetry provides a visual component to the archaeological record, making it possible for people to see the archaeology from their own living room, as if they were actually present at the site.
  7. It is constantly improving: At the moment there are some problems and flaws with the programs that may cause concern among more traditional archaeologists. These problems however are only temporary. With such great interest in the digital world, teams of developers are constantly trying to update and improve the software, and if at present programs like 123D Catch are not perfect, they can only get better. Ten years ago this level of accuracy in Photogrammetry was unheard of, yet today it has reached this point. In another 10 years how much will the programs change for the better?
  8. It is not as simple as it looks, in a good way: There are different levels to Photogrammetry. The basic level is the simple recording of features and artefacts purely for recording and presentation. There is however a second level, which uses the models created to analyse the archaeology, as I showed in my previous post about finding inscriptions on coins. There is a third level, which alters the way the programs are used, changing part of the process to get better results. Examples are my attempt at reconstructing the Sphinx using tourists’ photographs, or the idea of using series of photographs in archaeological archives to reconstruct features long gone. Finally, the fourth level is the most interesting one: using many photogrammetric models to create a single model, for example recreating pots by digitally putting fragments together, or entire sites by gluing together individual features. So it is not only pretty models of features, it is much more.
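
To give a flavour of the kind of measurement mentioned in point 2 (which I normally do through Meshlab's measuring tool), here is a minimal sketch in plain Python that reads the vertices of an exported .obj model and measures the straight-line distance between two of them. The file name and vertex indices are hypothetical, and the result is only meaningful in real-world units if the model has been scaled against a known reference.

```python
# Sketch: measure between two vertices of a Wavefront OBJ export.
import math

def read_obj_vertices(path):
    """Collect the vertex positions ('v x y z' lines) from an OBJ file."""
    vertices = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                _, x, y, z = line.split()[:4]
                vertices.append((float(x), float(y), float(z)))
    return vertices

verts = read_obj_vertices("section.obj")   # hypothetical export of a recorded section

# Distance between two chosen vertices (indices picked by inspecting the mesh).
a, b = verts[120], verts[458]
print(f"{len(verts)} vertices read")
print(f"Distance between the chosen points: {math.dist(a, b):.3f} model units")
```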

Modelling Large Scale Features with 123D Catch

Image

In the previous entries we have seen the use of Photogrammetry in archaeology for the recording of features and artefacts. With models of this kind the procedure is pretty simple: you take 20 or so photos from different angles and then run them through 123D Catch to get the end result. The angles themselves should generally be every 45 degrees in a circle around the object, plus the same again from a different height, but because of the small scale there is quite a bit of leeway in how precise these positions need to be.

The same cannot be said when dealing with a larger feature or an entire site, which for Photogrammetry generally means anything larger than 2 metres or so. In these cases it is not only a question of angles and of how precise those angles are; it is also a question of making sure that every single point of the surface is recorded in at least three photographs. With smaller objects this happens easily, as each photograph contains nearly the entire feature. But on larger features the only way to achieve it is to take the images from a distance, which reduces the quality.

Many tests I have conducted suggest that the best way to achieve a large scale model is to photograph a first spiral around the feature at a distance, in order to set the basis for the model, and then a closer one to get the detail. This increases the number of photographs needed, so the trick is to find a balance between the number of photographs and the need to cover every point.
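
As a very rough illustration of this two-pass approach (nothing to do with 123D Catch itself, just a way of planning where to stand), here is a minimal Python sketch that generates camera positions for a wide circuit and a closer one at 45-degree intervals. The radii and heights are hypothetical and would need adjusting to the feature being recorded.

```python
# Sketch: plan photograph positions for a wide pass and a closer detail pass.
import math

def shot_plan(center_x, center_y, radius, height, step_deg=45):
    """Camera positions on a circle around a feature, every `step_deg` degrees."""
    positions = []
    for deg in range(0, 360, step_deg):
        rad = math.radians(deg)
        positions.append((center_x + radius * math.cos(rad),
                          center_y + radius * math.sin(rad),
                          height))
    return positions

# First pass: a wide circle to set the basis of the model.
wide = shot_plan(0, 0, radius=8.0, height=1.6)
# Second pass: a closer circle, repeated at a second height, to capture the detail.
close = shot_plan(0, 0, radius=3.0, height=1.6) + shot_plan(0, 0, radius=3.0, height=0.6)

print(f"{len(wide) + len(close)} photographs planned")
for x, y, z in wide + close:
    print(f"stand at ({x:5.1f}, {y:5.1f}), camera height {z} m")
```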

Image

If the model still has missing data, the other approach is manual stitching. Manual stitching can be easy and straightforward or complex and problematic, so sometimes it is simply easier to take the images again. If this is not possible, 123D Catch does allow you to stitch unstitched photos in manually, and even to look through the photos that are already stitched to check whether any mistakes have been made (this has saved me a number of times).

The main thing with large features is to try many different approaches until one works. Persistence as usual is key for great models.

Here are some examples:

https://sketchfab.com/show/gYd5v278pG0RGmle4XLTTVGOXAc

https://sketchfab.com/show/lzkuybFnqpQx6xptArTUCk0Wq2b

https://sketchfab.com/show/ghSMxAytcyxtghYhSe9Tfu68WcH

https://sketchfab.com/show/f19e2c6044b4417fbf1b8bdf9e8206eb

Image

Reconstruction of St. Mary’s Church – Caerau

Here is my first video animation of Saint Mary’s church in Caerau, Cardiff. I made the model of this beautiful church, which unfortunately is only partially standing today, a few months ago. It is based on a plan of the cemetery and a number of photographs I found from when it was still complete.

Sketchup itself is easy-to-use software and is perfect for reconstructing archaeological sites, especially if all that is needed is a way to show the plans in three dimensions. By tracing over the original drawings and pushing/pulling the surfaces you can create models of large-scale excavations in little time. It also allows you to build on those plans and recreate what the site would have looked like, in order to better convey the archaeology to the general public. Some research is often needed and a little guesswork is sometimes essential, but with some knowledge of the site great models can be achieved.

This model in particular is also the first time I have worked on rendering the surfaces to make them more realistic. First of all the textures are more detailed than the standard ones, and I have also been using the Round Corners plugin by Fredo, which means there are fewer jagged edges. This gives an overall more appealing feel. Finally, I changed the lighting in order to create better shadows. There is still more that can be done, but that will follow.

Finally, the animation was made by rendering the frames with V-Ray, and then combining them into a video using Adobe Premiere. A full guide on how to do it can be found here: http://sketchupvrayresources.blogspot.co.uk/2011/08/tutorial-vray-sketchup-animation.html (although the guide uses Adobe After Effects).

Working on the Sphinx in 123D Catch

The other day I was talking with one of the guys at work and, while chatting about Photogrammetry, an idea came to mind. There are millions of photographs online taken by tourists visiting archaeological sites, and famous monuments like the Parthenon or the Pyramids surely have their photograph taken many times a day. There is therefore an enormous bank of images of archaeology available at our fingertips, and if it is possible to reconstruct features and structures using as few as 20 images, surely there must be a way to reconstruct these monuments through Photogrammetry without even needing to be there.

As a result of this pondering I decided to test the idea by choosing a building that would be simple enough to reconstruct, yet famous enough for many images to be found. A few months ago I had started working on a reconstruction of Stonehenge from a series of photographs I had found online, but with scarce (yet useful) results. Hence I decided to try something solid, like the Great Sphinx.

I then proceeded to find all possible images, collecting approximately 200 of them from all angles. Surprisingly, the right side of the monument had far fewer pictures than the rest of it. I then ran the photos in different combinations in an attempt to reconstruct as accurate a model as possible.

The following is the closest result I have had yet:

Image

Agreed, much more work is needed. But honestly I am surprised that I managed to get this far! The level of detail is quite surprising in some places, and if half of it has worked the other half should also be possible. The challenge is finding the right combination of images.

So far I have tried:

  • Running only the high quality images.
  • Running only the images with camera details.
  • Running 20 photos selected based on the angles.
  • Creating the face first and then attaching the rest of the body.
  • Running only images with no camera details (to see if any results appeared, which they did).
  • Increasing the contrast in the images.
  • Sharpening the images.
  • Manually stitching the remaining images in.

All of these have produced results, but worse than the one above. I have also tried running the images in Agisoft Photoscan, with similarly poor results.
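
For anyone wanting to repeat these experiments, here is a minimal sketch of how two of the steps listed above (keeping only images with camera details, and boosting contrast and sharpness) can be done in Python with the Pillow library before uploading to 123D Catch. The folder names and enhancement factors are hypothetical; this is only pre-processing, the photogrammetry itself is unchanged.

```python
# Sketch: keep images that still carry EXIF camera details, then apply mild enhancement.
from pathlib import Path
from PIL import Image, ImageEnhance

src = Path("sphinx_photos")          # hypothetical folder of downloaded images
dst = Path("sphinx_prepared")
dst.mkdir(exist_ok=True)

for path in src.glob("*.jpg"):
    with Image.open(path) as img:
        if not img.getexif():        # skip images stripped of camera details
            continue
        rgb = img.convert("RGB")
        enhanced = ImageEnhance.Contrast(rgb).enhance(1.2)      # mild contrast boost
        enhanced = ImageEnhance.Sharpness(enhanced).enhance(1.5)  # mild sharpening
        enhanced.save(dst / path.name)

print(f"Prepared {len(list(dst.glob('*.jpg')))} images for upload")
```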

The model above came about by pure chance from uploading 100 of the images, so I’m hoping that by going through the images it selected I may be able to find a pattern.

I shall post any updates here, so stay tuned!