Bigger And Better: Photogrammetry Of Buildings

Image

Photogrammetry is definitely the “new” thing in archaeology, slowly clawing its way into how we treat and record archaeological sites. As far as its potential goes, though, there is still a lot of research to be done in order to assess the uses this technology has, but also the limits we will have to deal with on a day-to-day basis.

One of the aspects that has always interested me is the question of scale. We’ve seen before that Photogrammetry deals well with anything ranging from an arrowhead to a small trench (4×6 is the maximum I have managed before, but there is much larger stuff out there). Smaller objects are hard to do cheaply, as the right lenses become essential, but larger subjects should be entirely possible with the right considerations in mind.

123D Catch’s website shows quite a few examples of Photogrammetric reconstruction of buildings, so I tried reconstructing my own, taking as my subject the front façade of Saint Paul’s Cathedral in London. Given that this was mostly for experimental purposes, I just attempted a few things and went through the results looking for patterns and suggestions.

The results can be viewed here: 

Saint Paul’s Cathedral (click to view in 3D)

 

Saint Paul’s Cathedral

As we can see, the results are of course not marvellous, but I am less interested in the results than the actual process. I took 31 photographs of the building, standing in a single spot, taking as many pictures as necessary to include all parts of the façade and then moving to a slightly different angle. I tried to make sure that I got as much of the building in a single shot as possible, but the size of it and the short distance at which I was taking the photographs meant that I had to take more than one shot in some cases.
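As a back-of-the-envelope illustration of that spacing, the number and placement of shooting positions along an arc can be sketched in code. The function, the 15-degree default and the example dimensions below are all my own assumptions for illustration, not anything 123D Catch requires:

```python
import math

def facade_stations(width_m, distance_m, step_deg=15):
    """Viewing angles (degrees, 0 = straight on) for camera stations
    along an arc in front of a facade, shifting roughly step_deg
    between shots. Hypothetical helper, not a 123D Catch rule."""
    # angular sweep needed to cover the facade edge to edge
    sweep = 2 * math.degrees(math.atan((width_m / 2) / distance_m))
    n = max(2, math.ceil(sweep / step_deg) + 1)
    step = sweep / (n - 1)
    return [round(-sweep / 2 + i * step, 1) for i in range(n)]

# e.g. a 30 m wide facade photographed from 20 m away
print(facade_stations(30, 20))  # 6 stations, symmetric about the centre
```

The closer you stand, the wider the sweep and the more stations you need, which matches the experience above of needing extra shots at short distances.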

Image

The lighting was of course not something I could control, but the fact that it was late afternoon meant that it was bright enough for everything to be visible, yet not so bright as to flatten the textures and cause problems with contrast. I then used 123D Catch to process the photographs, and to my surprise all of them were included in the final mesh.

The thing that surprises me the most is that, given the photographs I had, the results are actually as good as my most hopeful prediction. There is a clear issue with the tops of surfaces, i.e. tops of ledges, which come out as slopes. This is totally expected: Photogrammetry usually works with images taken from different heights, while in this case the camera height couldn’t change. This, however, can be solved by taking images from the surrounding buildings.

The other major problem is the columns, which are in no way column-shaped, and which seem to mess up the structure behind them as well. Given the complexity of the structure this is also expected. 123D Catch imagines the mesh as a single flat surface, and tries to simplify whatever it finds as much as possible. In this case the columns are not flat, so the solution 123D Catch has come up with, given the limited data, is to bring the murky background forward and treat it as if it were the space between the columns. This comes down to a simple lack of data. Next time the solution will be to concentrate more on these trouble areas and take more numerous, carefully calculated shots to aid 123D Catch. It does require more work and some careful assessment of the situation, but it is entirely possible to achieve.

Apart from these problems, the results are very promising. More work needs to be carried out, but it shows that it is possible to reconstruct structures of a certain size, hence pushing once again the limits of Photogrammetry.

Image

Recreating Tower Of London Graffiti Using Photogrammetry

Last weekend I visited the Tower of London, which gave me a great opportunity to try out some of the Photogrammetry ideas I have had in the past few weeks.

graffiti new 2 8

Apart from testing 123D Catch on large monuments and entire palace façades, I decided (thanks to the suggestion of Helene Murphy) to try modelling some of the graffiti made by the prisoners there. The main aim was to see if I could create models using photographs from dimly lit rooms, but also to see how the software would react to glass, as the inscriptions were covered by panes to protect them. I also wanted to retest the Radiance Scaling tool in Meshlab, to increase the crispness of the models and see whether it improved their accuracy.

I concentrated on three different pieces of graffiti, which can be viewed here (embedded thanks to the suggestion of alban):

 

Tower Of London – Graffiti 1 (click to view in 3D)

Graffiti 1

 

Tower Of London – Graffiti 2 (click to view in 3D)

Tower Of London - Graffiti 2

 

Tower Of London – Graffiti 3 (click to view in 3D)

Tower of London - Graffiti 3

The models turned out better than expected. The glass is entirely invisible, which required some planning when taking the photos, but caused no problems in the modelling. This is particularly good, as it means it should be possible to apply the same approach to artefacts displayed behind glass in museums. The lighting conditions didn’t cause any of the issues that might have been expected, showing once more the versatility of Photogrammetry.

Running the Radiance Scaling shader in Meshlab also returned really interesting results. In all cases the models become much more aesthetically pleasing, while at the same time the curves and dips are emphasised for increased crispness. Although this seems to me to be a forced emphasis that may reduce accuracy, the results at the moment suggest it may in some ways increase the accuracy instead. This, however, needs to be explored further.
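To give an intuition for what curvature-based emphasis does, here is a toy 1-D sketch of the idea: brighten convex points and darken concave ones. This is entirely my own simplification for illustration; Meshlab’s actual shader works on mesh normals and is far more sophisticated:

```python
def radiance_scaled(heights, strength=2.0, base=0.5):
    """Toy 1-D analogue of curvature-scaled shading: use the discrete
    second derivative as a stand-in for curvature, brightening bumps
    and darkening dips. Illustration only, not Meshlab's algorithm."""
    out = [base]
    for i in range(1, len(heights) - 1):
        curvature = heights[i - 1] - 2 * heights[i] + heights[i + 1]
        shade = base - strength * curvature  # negative curvature = bump
        out.append(max(0.0, min(1.0, shade)))
    out.append(base)
    return out

# a single bump in an otherwise flat profile: the peak is brightened,
# the flanks darkened, exaggerating the relief
print(radiance_scaled([0, 0, 1, 0, 0]))  # -> [0.5, 0.0, 1.0, 0.0, 0.5]
```

The exaggeration makes incised lines easier to read by eye, which is exactly why it helps with shallow graffiti, but it is a visual emphasis rather than new measured data.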

graffiti new 2 6 graffiti new 2 7 graffiti new 2 4 graffiti new 2 5 graffiti new 2 1graffiti new 2 2

Photogrammetric Model Made With An iPhone 4S

Sheep 1

I’ve experimented before with using my iPhone to create Photogrammetric models (not through the app, just taking the photos and running them through the Windows version of 123D Catch), with interesting but not perfect results. The other day, however, I found myself with a nice complete in situ sheep skeleton and no camera, so I took the opportunity to test the technology once again.

I took 49 photos in very good uniform shade, going round the skeleton at first and then concentrating on tricky parts, like the head and the ribs. I then ran them through 123D Catch and found that almost all of them had been stitched. I think the lighting really did the trick, as it created a really nice contrast between the bones and the ground. The photos were taken just as the sun had set, so it was still very light, but with no glare.

sheep 5 sheep 4

The skeleton itself looks extremely good compared to some of my earlier tests. It can be viewed here in rotatable 3D: https://sketchfab.com/show/b0ef1638d4714fcdab59c040cdb46923
I particularly like the relatively sharp edges, which I really couldn’t achieve with the other models, and by looking at the point cloud I found it to be quite accurate regardless of textures. In addition, it coped excellently with the rib that pokes out of the ground and with the pelvis, both of which I was absolutely sure it would have a problem with. Overall I’d say the model is nearly as good as some of the models I have made with a standard camera, and I think the potential is definitely there.
The only issue I have with using the iPhone camera is that it’s still an unreliable method. I tried replicating the results today, as the skeleton had been cleaned better, but the new model is blurrier, again probably due to slightly less ideal lighting conditions. Therefore I would still use my camera as much as possible, and save the iPhone for those situations in which I find myself unprepared.

sheep 2

Ham Hill Iron Age Skeletons Turn Digital

Image

Three of the skeletons found at the site of Ham Hill, Somerset during the 2013 excavation are now available to view online at the following links:

https://sketchfab.com/show/70d864c4736435710bc65b6f21d81c03

https://sketchfab.com/show/821565c7ce0b98e1b7764c73a9f07492

https://sketchfab.com/show/fa694aff0fb5949e2f396a5fb2da37b0

The skeletons were discovered during this year’s excavation carried out by the Cambridge Archaeological Unit and Cardiff University, at the site of an important Iron Age hill fort. They are only some of the many human remains found, some of which were carefully placed within a number of pits, while others were located symbolically within an enclosure ditch, often with body parts missing.

Image

The models themselves were made using Photogrammetry, specifically 123D Catch, which required very little time and gave quite good quality. The aim was to preserve a record of the skeletons in situ for further interpretation once they had been removed from the location in which they were discovered.

Given the complexity of the subject, the results seem very promising. Time was a big factor when making the models, as the skeletons had to be removed before damage occurred. Some of them were also in tight spots, which made it hard to access some of the standard angles, but overall this was counterbalanced by taking a larger number of photographs than normal (around 30 per skeleton). The lighting conditions also proved to be ideal, as there was an overcast sky, but also sufficient sunlight coming through to really emphasise the surfaces.

Image

For further information on the skeletons I suggest reading the article at http://www.bbc.co.uk/news/uk-england-somerset-23928612

The Winged Victory of Samothrace Reconstructed Digitally Using Tourists’ Photographs

Image

If you’ve been following my latest attempts to recreate famous monuments through Photogrammetry using nothing but tourists’ photographs, I finally have something to show for your patience. Before you get your hopes up, it is still not perfect, but it’s a step forward.

The idea behind this is that 123D Catch uses photographs to create 3D models, and while generally this means the user takes their own photographs, it doesn’t necessarily have to be so. The internet is full of images, and while most of them seem to be of cats, there are many images of famous monuments and archaeological finds. There has to be a way, therefore, to utilise all these photographs found online to digitally recreate the monument in question. The problem is that, although theoretically there should be no issue, there are still a great number of elements that affect the final mesh, including lighting, background and editing. While the images a single user takes in a short span of time remain consistent, because minimal changes take place, a photograph taken in 2012 is different from one taken in 2013 by a different camera, making it hard for the program to recognise similarities. In addition, tourists take photographs without this process in mind, so monuments are often only photographed from limited angles, making it hard to achieve 360-degree models.
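One way to picture the limited-angles problem is to look for the longest run of viewpoints without large gaps in coverage. The sketch below uses invented angles and a hypothetical 30-degree threshold purely for illustration (in practice the viewing angles are not known in advance):

```python
def largest_consistent_run(photos, max_gap_deg=30):
    """Sort photos by (assumed known) viewing angle and keep the
    longest run with no gap larger than max_gap_deg -- gaps in
    angular coverage are where stitching tends to fail."""
    ordered = sorted(photos, key=lambda p: p["angle"])
    runs, current = [], [ordered[0]]
    for prev, nxt in zip(ordered, ordered[1:]):
        if nxt["angle"] - prev["angle"] <= max_gap_deg:
            current.append(nxt)
        else:
            runs.append(current)
            current = [nxt]
    runs.append(current)
    return max(runs, key=len)

# hypothetical crowd-sourced viewpoints: plenty of frontal shots, none of the back
photos = [{"id": i, "angle": a}
          for i, a in enumerate([0, 10, 20, 35, 170, 185, 200])]
best = largest_consistent_run(photos)
print([p["angle"] for p in best])  # -> [0, 10, 20, 35]
```

The frontal cluster survives while the isolated side shots drop out, which mirrors what happens with the meshes: a good front, a missing back.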

In order to better understand the issue I started working with a series of monuments, including the Sphinx, Mount Rushmore (see previous articles), Stonehenge and the Statue of Liberty. These, however, are extremely large monuments, which makes it somewhat more difficult for the program to work (although theoretically all that should happen is a loss of detail, without the stitching of the images being affected). Therefore I decided to apply my usual tactic of working from the small to the large, choosing a much smaller object that would still prove the idea. In this case I chose the Winged Victory of Samothrace.

Image

The reasoning behind the choice is that, unlike other statues, there is more of a chance of images being taken from the sides, due to its shape. It is also on a pedestal, which means the background should remain consistent between shots. It also allows good contrast, because the shadows appear amplified by the colour of the stone. I was, however, aware that the back of it would probably not work due to the lack of joining images, but figured that making the front and sides alone would be sufficient progress.

The results can be seen here: https://sketchfab.com/show/604f938466ad41b8b9299ee692c5d9a3

Image

As you can see, the front and most of the sides have come out with enough detail to call it a success, also because 90% of the images taken from these angles were stitched together. The back, as predicted, didn’t appear, and there are some expected issues with the top as well. What is even more surprising, however, is that some of the images had monochromatic backgrounds, very different from the bulk. These images still stitched in, suggesting that background is not the key factor with these models. The lighting is relatively consistent across the set, so it could be that this is the main factor. As for image size and resolution, there doesn’t seem to be much of a pattern.

Overall I was very pleased with the results, and hopefully it’ll lead to a full 360-degree model as soon as I pinpoint an object with enough back and side images. Still, this does show that it is possible to create models from tourists’ photographs, which would be great for reconstructing those objects or monuments that have unfortunately been destroyed.

Image

Glastonbury Ware Pot – Ham Hill

This is one of the first attempts I made with Photogrammetry, and probably one of the ones I am most happy with. It is a beautiful pot found during the 2011 excavation, which was glued back together to show how it would have looked prior to breaking. I made the model using around 60 images with 123D Catch, and good natural lighting that brought out the contrasts well, especially with regard to the decoration.
The thing I am extremely happy with is that I was able to capture both sides, something I was struggling to do before, and which was probably helped by the large number of images.
The animation itself was made using the tool in 123D Catch, which is ideal for displaying the model, although it is hard to create a non-wavy video, as this one shows. Still, in the absence of suitable programs, or of updated browsers for Sketchfab, it is an ideal tool, as the video can be uploaded to YouTube and shared with anyone interested.
In addition, the model can also be viewed at the following link: https://sketchfab.com/show/8ABDov7xS8kV8mfbGMuQkFOmjE3

DSC_0286

Using Photogrammetry with Archaeological Archives: Must Farm

Image

About a year ago I volunteered at the Cambridge Archaeological Unit for three weeks, during which I had some time to carry out experiments with Photogrammetry with the help of some of the people there. One of the projects involved using their photograph archive to create 3D models with 123D Catch, to see if it was possible to create models from photographs not taken with this purpose in mind.

Looking through the archive, one site in particular caught my eye, as its photographs were perfect for this use: Must Farm, Cambridgeshire. The site itself is extremely interesting, and has won a Site of the Year award for its level of preservation. It is mostly a Bronze Age site, and over the years of excavation it revealed a series of intact wooden boats, as well as a living structure that had collapsed into the river, waterlogging the timber frames and everything within, including weirs and pots with food residue. For more information visit the official website http://www.mustfarm.com/.

Image

The photographs I found were from the 2006 excavation, and they consisted of series of shots of the same objects from similar angles. The number of images per feature was around 8, depending on what it was. The most common subjects were waterlogged wooden beams, pottery spreads, sections and weirs.

Generally, this number of images, all taken from very similar angles, would make creating a model impossible. But through various tests I have found that there is an exception to this rule when the object is particularly flat. If you are photographing a wall, there is no need to go round it to create a model; all you need to do is take the images from the front and then change the angle slightly. The same idea is at the root of stereophotography, in which two images at slightly different angles give our eyes the illusion of 3D; modern 3D films use much the same principle, with incredible results.
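The principle can be illustrated with a toy block-matching sketch: given two views of a flat surface taken from slightly different positions, the horizontal offset (disparity) of each point can be recovered, and disparity encodes depth. This is my own minimal illustration, far simpler than what 123D Catch actually does internally:

```python
def disparity_row(left, right, window=3, max_shift=8):
    """Toy block matching along a single scanline: for each pixel in
    the left view, find the horizontal shift of the right view that
    minimises the sum of absolute differences (SAD)."""
    n = len(left)
    disp = [0] * n
    for x in range(window, n - window):
        patch = left[x - window : x + window + 1]
        best, best_cost = 0, float("inf")
        for d in range(max_shift + 1):
            lo = x - d - window
            if lo < 0:
                break  # candidate window would fall off the image
            cand = right[lo : x - d + window + 1]
            cost = sum(abs(a - b) for a, b in zip(patch, cand))
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp

# a simple ramp "texture" seen from two viewpoints 4 pixels apart
right_view = list(range(64))
left_view = right_view[-4:] + right_view[:-4]
print(disparity_row(left_view, right_view)[30])  # -> 4
```

A constant disparity, as here, corresponds to a flat surface at a single depth; variations in disparity are what reveal relief, which is why even near-identical frontal shots of a flat subject carry enough information for a model.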

Image

Running the images through 123D Catch proved the theory: out of 40 or so potential models, around 70% came out extremely well. The models had all the detail of a 3D model made from photographs taken with Photogrammetry in mind. Some details could have been better, like some models of timbers sticking out of the ground, for which only the front is available, as would be expected, and the sides of protruding objects, which are blurry, but overall the results are amazing for what they are.

Most of the models are now available at http://robbarratt.sketchfab.me/folders/must-farm

Image

If we consider the amount of archaeological photographs that are taken at every site, surely amongst them there is enough to create at least some fantastic models.

Google Sketchup and Archaeology: Reconstructing the Parthenon

Parthenon Rendered 6

As part of my second year studying archaeology in Cardiff, I was required to write 5,000 words on a topic of my choosing for a project called an Independent Study. Having only recently completed the first two models I ever made, for a documentary on the medieval site of Cosmeston, South Wales, I decided it would be a great idea to further investigate this aspect of archaeology. I therefore decided to write about the use of new media in archaeology, and to look into 3D modelling, Photogrammetry and documentary making.

As I wanted to try out these new programs as much as possible I decided to test myself with something large and complex, yet well-known enough to allow me access to a large database of plans, images and measurements. For this reason I chose to reconstruct the Parthenon, using a great program called Google Sketchup.

Image

Google Sketchup is still my number one choice when dealing with reconstruction from plans, due to the simplicity of the program and the still great results that can be achieved. For less experienced users the simple mechanism of pushing and pulling surfaces is ideal, and it’s easy to pick up how it all works thanks to the user-friendly interface.

The great advantage of reconstructing the Parthenon was that I could copy and paste most of the features, which meant I didn’t have to create every single column and brick. But it also meant that I had to quickly learn one of the key elements of Sketchup: making components. By making components you don’t end up with thousands of lines, each a separate entity, but with a series of elements that work as a whole, meaning you can easily select them and copy/paste, move or rotate them. It also means they don’t drag everything they are connected to along with them when you place them somewhere else. I quickly learnt that having a series of lines to select and copy each time is much less efficient than having the same lines as a separate entity, “roof_tile”, which you can copy in two clicks. I also saw the great advantage of this when making my second model, a Greek house, for which I simply imported the Parthenon roof component and edited it to make it smaller, rather than making the roof from scratch.
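The definition/instance idea behind components can be sketched outside Sketchup too. This is plain Python purely for illustration (Sketchup itself is scripted in Ruby), with invented class and field names throughout:

```python
from dataclasses import dataclass

@dataclass
class ComponentDefinition:
    """One shared piece of geometry, e.g. the "roof_tile" mentioned above."""
    name: str
    edges: list

@dataclass
class ComponentInstance:
    """A lightweight placement of a shared definition at some position."""
    definition: ComponentDefinition
    position: tuple

# one definition, many cheap instances
tile = ComponentDefinition("roof_tile", edges=["edge_a", "edge_b", "edge_c"])
roof_row = [ComponentInstance(tile, (x * 0.3, 0.0)) for x in range(10)]

# editing the definition updates every placed copy at once
tile.edges.append("edge_d")
print(all(len(inst.definition.edges) == 4 for inst in roof_row))  # -> True
```

Each instance stores only a reference and a transform rather than its own copy of the geometry, which is why components keep large models light and editable.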

Image

The second thing I soon found out about the Parthenon is that it’s a large thing made of small things. For example, the ceiling wasn’t a big mass of stone, but more likely a decorated series of boxes and beams, which I couldn’t for the life of me create while I had the rest of the walls in place. Hence I started doing the sneaky thing of editing parts of the Parthenon in a work area far away from the actual model, then making them into a component, moving them into position and scaling them to fit. The result was as good as if it had been done in the right position, but with far fewer issues.

Image

Thirdly, experimenting. This is what makes you a good 3D modeller, whether your work is archaeological or not. It’s all well and good following a set of guidelines, but what happens when you rotate a corner, or draw a square on a brick and pull it out? Nine times out of ten I tried something new I ruined the Parthenon; I then pressed Ctrl+Z or reopened the saved file and tried something else. One time out of ten I discovered a new and amazing thing, which would save me hours of work or make the model better. The more curious you are with 3D modelling, the more you learn.

Finally, reconstructing something the size of the Parthenon showed me that archaeology and 3D modelling really work hand in hand. If a teacher is explaining to a class what the Parthenon looked like, why not show it in 3D? If we are debating what it would have looked like when it was painted, using the paint tool in Sketchup instantly shows the results. If we are thinking about lighting and darkness levels within the inner chamber, V-Ray for Sketchup allows us to try it out for any day of the year or time of day.

Therefore, here is the link to the finished model (even though Sketchfab does it no justice at all): https://sketchfab.com/show/4EKCWxne5OUE4rBj2QRJbsHNj0L

Image

And if you are interested in starting modelling, but are not sure about it, download Sketchup and give it a go! I assure you it is easy and entertaining, and you’ll learn modelling in no time.

Modelling Large Scale Features with 123D Catch

Image

In the previous entries we have seen the use of Photogrammetry in archaeology for the recording of features and artefacts. With models of this kind the procedure is pretty simple: you take 20 or so photos from different angles and then run them through 123D Catch to get the end result. The angles themselves should generally be every 45 degrees in a circle around the object, and the same again from a different height, but because of the small scale there is quite a bit of leeway in the precision of these positions.

The same cannot be said when dealing with a larger feature or an entire site, which for Photogrammetry generally means anything larger than 2 metres or so. In these cases it is not only a question of angles and of how precise these angles are; it is also a question of making sure that every single point of the surface is recorded in at least three photographs. With smaller subjects this happens easily, as each photograph contains nearly the entirety of the feature. But on larger features the only way to achieve this is to take the images from a distance, which reduces the quality.

The many tests I have conducted suggest that the best way to achieve a large-scale model is to photograph a first spiral around the feature at a distance, in order to set the basis for the model, and then a second, closer pass to capture the detail. This increases the number of photographs needed, so the trick is to find a balance between the number of photographs and the need to cover every point.
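The two-pass spiral can be written out as a simple capture plan. The radii, the 30-degree step and the function below are my own illustrative assumptions, not measured requirements:

```python
import math

def capture_plan(radius_far_m, radius_near_m, step_deg=30):
    """Camera positions (label, x, y) for two rings around a feature:
    a wide "overview" ring to anchor the model, then a closer
    "detail" ring. Radii and angular step are illustrative only."""
    plan = []
    for radius, label in ((radius_far_m, "overview"),
                          (radius_near_m, "detail")):
        for angle in range(0, 360, step_deg):
            rad = math.radians(angle)
            plan.append((label,
                         round(radius * math.cos(rad), 2),
                         round(radius * math.sin(rad), 2)))
    return plan

plan = capture_plan(10.0, 4.0)
print(len(plan))  # -> 24 (12 overview + 12 detail positions)
```

Tightening the step or adding a third ring trades more photographs for better coverage, which is exactly the balance described above.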

Image

If the model still has missing data, the other approach is manual stitching. Manual stitching can be easy and straightforward or complex and problematic, so sometimes it is just easier to take the images again. If this is not possible, 123D Catch does allow you to glue unstitched photos together manually, and even to look through the photos that are already stitched to see if any mistakes have been made (this has saved me a number of times).

The main thing with large features is to try many different approaches until one works. Persistence as usual is key for great models.

Here are some examples:

https://sketchfab.com/show/gYd5v278pG0RGmle4XLTTVGOXAc

https://sketchfab.com/show/lzkuybFnqpQx6xptArTUCk0Wq2b

https://sketchfab.com/show/ghSMxAytcyxtghYhSe9Tfu68WcH

https://sketchfab.com/show/f19e2c6044b4417fbf1b8bdf9e8206eb

Image

Using 123D Catch to Record in Situ Finds

Image

If you have the pleasure of excavating a collection of finds rather than an individual one, one of the problems you will often encounter is how to record it in situ. Some finds are delicate and may well break up once removed from the soil, while others may have been placed in a specific way, making their location important. Traditional methods require planning the finds by hand, but Photogrammetry may offer an alternative, creating simple models that form an accurate record of the finds as well as their surroundings.

It is a non-destructive process, and has the advantage of being quick and efficient. In addition many shots can be taken at different times to show the entire excavation process.

Here are some examples from Must Farm (2006) that were originally photographed without Photogrammetry in mind:

Image

Image

The links for these models can be found here:

https://sketchfab.com/show/gvQKa4WkUPOTaegyWcOUMZQlOSH

https://sketchfab.com/show/cf038e102fcb484aa71818a46be6a4ec

https://sketchfab.com/show/vun3L1S0bb1tl0uEC4ilV9Gpmhk