Bigger And Better: Photogrammetry Of Buildings

Photogrammetry is definitely the “new” thing in archaeology, slowly clawing its way into how we treat and record archaeological sites. As far as its potential goes, though, there is still a lot of research to be done to assess the uses of this technology, as well as the limits we will have to deal with on a day-to-day basis.

One of the aspects that has always interested me is the question of scale. We’ve seen before that photogrammetry deals well with anything from an arrowhead to a small trench (4 × 6 m is the largest I have managed so far, but there is much larger stuff out there). Smaller objects are hard to do cheaply, as the right lenses become essential, but larger subjects should be entirely possible with the right considerations in mind.

123D Catch’s website shows quite a few examples of photogrammetric reconstruction of buildings, so I tried reconstructing my own, taking as my subject the front façade of Saint Paul’s Cathedral in London. Given that this was mostly for experimental purposes, I just attempted a few things and went through the results looking for patterns and suggestions.

The results can be viewed here: 

Saint Paul’s Cathedral (click to view in 3D)

As we can see, the results are of course not marvellous, but I am less interested in the results than in the actual process. I took 31 photographs of the building, standing in a single spot and taking as many pictures as necessary to include all parts of the façade, then moving to a slightly different angle. I tried to get as much of the building into a single shot as possible, but its size and the short distance from which I was shooting meant that in some cases I had to take more than one shot per position.
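The arithmetic behind planning a capture like this is simple enough to sketch in code. Below is a minimal Python example, assuming the usual photogrammetric rule of thumb of 60–80% overlap between neighbouring frames; the field-of-view and overlap figures are illustrative defaults, not values measured from my camera.

```python
import math

def capture_stations(facade_width_m, distance_m, fov_deg=60.0, overlap=0.7):
    """Rough estimate of how many lateral camera positions are needed
    to cover a facade, given the camera's horizontal field of view and
    the desired frame-to-frame overlap (60-80% is the usual guideline)."""
    # Width of facade covered by a single frame at this distance.
    footprint = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    # Lateral move between positions that still preserves the overlap.
    step = footprint * (1 - overlap)
    return max(1, math.ceil(facade_width_m / step))

# For example, a 30 m facade photographed from 20 m away:
print(capture_stations(30, 20))  # -> 5 positions
```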

The lighting was of course not something I could control, but the fact that it was late afternoon meant it was bright enough for everything to be visible, yet not so bright as to flatten the textures and cause problems with contrast. I then used 123D Catch to process the photographs, and to my surprise all of them were included in the final mesh.

The thing that surprises me most is that, given the photographs I had, the results match my most hopeful prediction. There is a clear issue with the tops of surfaces, i.e. the tops of ledges, which come out as slopes. This is entirely expected: photogrammetry usually works with images taken from different heights, while in this case the camera height couldn’t change. It could, however, be solved by taking images from the surrounding buildings.

The other major problem is the columns, which are in no way column-shaped, and which seem to mess up the structure behind them as well. Given the complexity of the structure this is also expected. 123D Catch treats the mesh as a single continuous surface and tries to simplify whatever it finds as much as possible. The columns are not flat, so the solution 123D Catch has come up with, given the limited data, is to bring the murky background forward and treat it as the space between the columns. This comes down to a simple lack of data: next time the solution will be to concentrate on these trouble areas and take more numerous, carefully planned shots to aid 123D Catch. It requires more work and some careful assessment of the situation, but it is entirely achievable.

Apart from these problems, the results are very promising. More work needs to be carried out, but they show that it is possible to reconstruct structures of this size, once again pushing the limits of photogrammetry.

Roman Villa Reconstructed In 3D

Based on the plan of an actual Roman villa, this is a fly-through of the model. It’s a way to explore this living area and get a more authentic feel of what it would have been like to live in Roman times.
The model was made using Google Sketchup, and the final project will see furniture and details added in to make it even more realistic. This, however, is the building as it stands at present, showing how archaeology can be brought to life using 3D modelling software.
A more detailed account of how this model was made can be found earlier on this website.

Roman Villa Reconstruction Preview

I have talked endlessly on this blog about the use of Google Sketchup in the archaeological world, so pardon yet another example on the topic. I recently started recreating a typical Roman villa using plans from a number of sites and any other source of information I could find. The final plan is not only to create the structure itself, but also to include many more details, such as furniture, statues, etc.

Having completed the main structure, I thought I would share the results as they stand, as a sort of preview of the completed work, and explain some aspects of making the model. In the next couple of days I’ll also post a fly-through video, which is currently rendering, to give an even better impression.

This model was an interesting one to make, as it was more complex in some respects than the ones I had done before, and it combined open and closed spaces, with equal importance given to both. The plans I found were very good for the ground floor, which is therefore fairly accurate, but for the top floor there is a definite lack of information, mainly due to the lack of archaeological evidence. I therefore had to resort to sketch reconstructions based on personal interpretation, which I am not usually fond of. Similarly, the roof and the insides of the rooms are mostly conjecture on my part, based however on ideas found in texts. Overall, then, the model is much more interpretive than, for example, the Parthenon model I made, but at the same time it is more useful, as the Parthenon is actually standing while the villa is not.

Something I noticed while making this model is how well Sketchup deals with lighting. In the past I wasn’t a big fan of its lighting, as I found inside spaces too dark and outside spaces too bright; in this case, however, it is in no way an issue, possibly because we have both inside and outside. The rooms are still a bit dark, but with the external windows I’m adding in the next phase they should be quite faithful to reality, while the internal courtyards are bright, but not unnaturally so. As a whole the results are quite satisfying, and objects placed within the model will also look realistic as a result.

Also, the rounded-edges tool is still a favourite of mine, but I now use it less frequently. In large models some walls look more realistic with rounded edges, but not everything does: door frames, for example, look equally good without, and given that the tool effectively adds many more lines to the model, there is really no need to round them off. For walls, I found that adding a slight slope at the bottom makes them look less blocky and much nicer on the eye. On a more practical note, creating components is still the greatest tip I can give for Sketchup. Making each floor and the roof a separate component made the model much easier to edit, as you can hide the upper floor when editing the lower one, and vice versa.

As mentioned before, as soon as the animation finishes rendering I shall post a new update. I realise that I have been posting less and less recently, but I assure you it is only for practical reasons. I am currently involved in writing an archaeology-based radio show, which is taking up a lot of my spare time, as well as working on a number of sites. These models also take time to make, so I’d rather wait a little and publish something good than put out many random posts. Finally, a few of the projects I have been working on have the disadvantage that I can’t actually publish any of the results, which means there are things I am doing that I can’t write about specifically. I therefore apologise if it sometimes takes a bit longer to post something new.

Sketchup for Archaeology – Olynthus House

house shading

Having talked for the last few days about Sketchup and its uses in archaeology, I thought I’d complete this line of enquiry by showing you another model I made during my second year and have briefly presented before: a house from the Classical Greek site of Olynthus.

Much like the houses of Zagora I covered before, the house at Olynthus is a great example of domestic space in the Classical world, with an inside courtyard and different rooms whose function we largely know. The reason for choosing Olynthus in particular is that its houses have all the elements of houses of the Classical period, and the base plan is repeated throughout the entire town.

House 6

The main reason I chose this model was that it was a challenge. Previously I had only reconstructed the Parthenon, so I was not entirely used to closed spaces. In addition, I had the actual site reports to hand, which meant I could reconstruct the house reliably, with little left to the imagination. Finally, it gave me a chance to investigate issues of lighting within closed spaces, and to weigh in on a debate I’d read about, regarding a possible need for a flue to provide lighting to some of the rooms.

house9

Apart from the use of components, which I’ve discussed already, I found two interesting things in this model: the use of visualisation to understand the use of space, and the aforementioned question of lighting.

One of the main issues I was having with the house was the presence of a ladder, which would have allowed access to the upper storey. The location I originally intended for it didn’t actually fit, something I only realised when looking at an initial draft of the model. It was too steep, and if it had extended any further to reduce the slope it would have blocked one of the doors. I therefore decided the location had to be wrong, and tried many different plausible positions. The one I finally settled on was the only one that “looked” right, and after re-reading the report it turned out that some of the houses had a base for the ladder in that exact spot.

house3

This is probably insignificant in the long term, but it made me realise that I only spotted the wrong position thanks to the added dimension; the 2D plan didn’t give me enough information to notice.

The issue of lighting was part of a debate I had been reading about an area of the house interpreted as a flue. Some suggested this area was open at the top in order to let light in, while others thought the lighting in the room was sufficient to carry out basic activities anyway. I therefore created an entire street by repeating the house, and placed windows as suggested by the report. I then rendered the images with and without a hole in the ceiling.

dark oikos / light room 2

The results are not the most conclusive, although there is a difference between the two rooms. They do suggest, however, that a hole in the roof alone would not have been sufficient for lighting, so it is possible the flue was used to conduct smoke from a fire within the room. Again, in this particular instance the results are not ideal, but in other models the idea may have more success, especially in enclosed spaces.

Sketchup and Archaeology – Iron Age Roundhouse

One of the projects I’ve been working on is reconstructing a roundhouse we found at Caerau, Cardiff, using Sketchup to create the main frame and V-Ray to render it as an image. This will then be faded over footage of the archaeological site, to show the transition between what we can see now and what would once have been present.

For this purpose I created a very basic roundhouse model, coupled with a fortification mound and a sheer drop behind it. All of this was based on the GIS data I had of the site, so it does represent what we actually found on site.

The model itself is pretty simple, but it gave me a chance to play around with a few different elements of Google Sketchup that will be useful for more complex models. In particular I was looking at the application of customised textures, the creation of backgrounds, and the use of rounded corners to create realistic mounds.

Customised textures are one of the main points of 3D modelling, and are extremely useful for giving a model a more realistic feel. The problem is that it is really hard for someone as artistically challenged as myself to create good textures from scratch, so I resorted to combining pre-made ones instead. I used Photoshop to layer three different grass materials, making patches of each more opaque, to create many differently coloured patches. This made the ground surface of the model much more realistic, as it is no longer a repeating pattern but a more complex and varied one, with different shades in different areas.
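For anyone without Photoshop, the same layering trick can be approximated in code. Here is a minimal sketch using Pillow and NumPy, assuming three same-sized grass tiles (the grass_*.png filenames are placeholders): blurred random noise serves as a soft opacity mask, producing irregular patches of each texture rather than a repeating tile.

```python
from PIL import Image, ImageFilter
import numpy as np

# Three same-sized grass tiles; the filenames are placeholders.
layers = [Image.open(f"grass_{i}.png").convert("RGB") for i in range(3)]
w, h = layers[0].size

rng = np.random.default_rng(0)
blended = layers[0]
for layer in layers[1:]:
    # Blurred random noise gives soft, irregular opacity patches,
    # so each texture shows through in different areas.
    noise = (rng.random((h, w)) * 255).astype("uint8")
    mask = Image.fromarray(noise, "L").filter(ImageFilter.GaussianBlur(25))
    blended = Image.composite(layer, blended, mask)

blended.save("grass_blended.png")
```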

I then realised that one of the elements that really made the model lack realism was the background. Sketchup’s standard rendering mode creates a background that looks like the sky, which is fine from certain angles, but becomes a problem when we want specific images, as in this case.

At first I decided to create a large cylinder around the entire model and paint it with a stock panorama image found online. This failed because the cylinder is actually made of many different faces, each of which started the texture from its own origin point, making it repeat. I therefore used a flat surface instead, creating a sort of backdrop I could move around as needed, and which could be placed in the background of the rendered image much like a green screen in video editing.

Finally, the mound itself looked extremely blocky, as Sketchup is not ideal when it comes to rounded surfaces. I tried the Rounded Angle plugin I’ve mentioned before, making the area of impact quite large. The result was exactly what I wanted: a much more realistic mound, although it’s not ideal at the base, which can be a problem from some angles.

Overall I think these three tips are really useful, and I shall be using them from now on to create better large-scale models, and especially to simplify and improve the rendering process.

Creating a 3D Model of the Town of Zagora with Sketchup

Sorry it has taken me so long to write a new post; I’ve been swamped with work these last few days. I am back, however, and will be resuming my daily posting. Today I want to show you a few models I created earlier this year for one of my essays, regarding the town of Zagora, in Greece.

The essay itself was about the evolution of housing in Greece during the Archaic period, and the town of Zagora was particularly important due to its distribution of space, especially the open areas within the houses, which became a prominent feature of Classical Greek houses. The town was made up of originally smaller houses that were later expanded, creating agglomerated areas in which many houses shared common walls. The exact details of the process, and the conclusions we could draw from it, are currently filed in a part of my mind that I cannot reach, and the original essay is similarly lost somewhere on my laptop, but the main idea that led me to create a model of the entire town was that of space visualisation.

3D modelling of structures is entirely about bringing spaces to life, in order to learn from them far more efficiently than in 2D. A plan of a town is great for finding patterns of activity, but to get a real idea of how the space was arranged, a model is much more effective. So, to really show what on paper was simply a theory, I decided to recreate the town from the plan, and to concentrate on a few of the houses as better examples. These houses were modelled first without the later additions and then with them, to show how the creation of open courtyards would have made it easier to carry out activities, as well as giving the environment a more private feel.

The models themselves were easy to make using Google Sketchup, and given that this was more for interpretation than for presentation, I was able to create them in around four hours, showing that good results are obtainable with little effort. Had I had more time, I could have used the rounded-corners tools and added more detail within the structures, as well as making the outsides more realistic with better textures.

In addition to the large-scale reconstruction, which was a first for me, these models also taught me a lot about component placing within Sketchup. If a certain feature of a model is something you believe you may use in the future, it is worth saving it as a component. It can then be loaded into another model to save time, so that very complex objects don’t have to be rebuilt from scratch. In this case I used a roof I had already created to cut down on creation time, as well as small figurines I found online to show the scale of the buildings in the essay.

Overall, these models show how Sketchup can be a real help in displaying visual elements that enhance the understanding of certain concepts.

The Winged Victory of Samothrace: Analysis of the Images

This is a continuation of yesterday’s post, so I suggest you read that first. One of the things I’ve been working on for the past month is creating photogrammetric models of monuments using nothing but tourists’ photographs. After many attempts, my latest test, using the Winged Victory of Samothrace as a subject, produced sufficiently good results that I decided to analyse the image data to pinpoint what kind of photographs actually stitch together in 123D Catch, and which ones cause problems. This way we can choose a limited number of good photographs, rather than a vast number of mixed ones.

In order to understand the patterns behind the stitching mechanism, I created an Excel file recording certain details for each image: width and height, whether the background is consistent with the majority of the photographs, whether the colour is consistent, whether the lighting is the same, the file size, and the position of the object. The last is based on the idea that to make 3D models we need series of shots from eight positions, each 45 degrees from the next. Thinking of it like a compass, with north corresponding to the back of the object, position 1 is south, 2 is SW, 3 is W, and so on round to position 8 at SE.
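To make the scheme concrete, here is a small Python sketch of the record I kept per image, together with the position-to-bearing mapping; the field names are just my shorthand for the Excel columns, not anything defined by 123D Catch.

```python
from dataclasses import dataclass

@dataclass
class ShotRecord:
    """One row of the image log; field names are my own shorthand."""
    width: int            # pixels
    height: int           # pixels
    file_size: float      # in whatever unit the log records
    background_ok: bool   # background consistent with the majority?
    colour_ok: bool       # colour consistent with the majority?
    lighting_ok: bool     # lighting consistent with the majority?
    position: int         # 1-8 compass scheme described above
    stitched: bool        # did 123D Catch include it in the mesh?

def position_to_bearing(position: int) -> int:
    """Compass bearing for positions 1-8, with north at the back of
    the object: 1 = S (180), 2 = SW (225), ... 8 = SE (135)."""
    return (180 + 45 * (position - 1)) % 360
```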

The first thing I noticed, which ties in with what I said yesterday, is the lack of photographs in positions 4 and 6 (NW and NE), which of course meant that all the images from the back (position 5) also had trouble stitching. The first problem for the model is therefore the need for enough images from all angles, without which missing parts are inevitable. This is made harder by the fact that these are typically positions that tourists don’t photograph.

Having concluded that images from positions 4, 5 and 6 failed to stitch for this reason, I removed them from the list so the data for the others would be more accurate.

I then averaged the height, width and file size of the stitched images and of the unstitched ones, and compared them. The former had an average height of 1205.31 px, a width of 906.57 px and a file size of 526.26, while the latter had a height of 929.07 px, a width of 668.57 px and a file size of 452.57. The differences are enough to suggest that large, good-quality images have a higher chance of being stitched. This may seem obvious, but some images that did not stitch are actually larger and of higher quality than some that did, so this cannot be the only factor; nor is the difference large enough to suggest it is even a key one.
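The comparison itself is trivial to compute; here is a minimal sketch using the ShotRecord type from the earlier snippet:

```python
from statistics import mean

def group_means(records):
    """Average height, width and file size for stitched vs unstitched."""
    groups = {
        "stitched": [r for r in records if r.stitched],
        "unstitched": [r for r in records if not r.stitched],
    }
    return {
        label: {
            "height": mean(r.height for r in rows),
            "width": mean(r.width for r in rows),
            "file_size": mean(r.file_size for r in rows),
        }
        for label, rows in groups.items()
    }
```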

The next step was comparing the percentage of stitched images that had an abnormal background, colour or lighting with that of the unstitched ones, which gave even more limited results. Among the former, the background was inconsistent in 15% of the images, the colour in 63% and the lighting in 47%; among the latter, background was 0%, colour 50% and lighting 50%. Oddly, these results suggest that photographs have a higher chance of being included if they differ from the rest, which goes against everything we know so far about 123D Catch. I therefore suspect that the program allows a good deal of tolerance in these elements, and that I may have been too harsh in scoring the differences.

Having concluded little with this methodology, I decided to look at each individual unstitched image to see what elements could be responsible. This proved much more successful. I managed to outline three categories of unstitched images which between them account for every one of the photographs, without any overlap with the stitched ones:

  • Distant shots: some photographs were taken at a much greater distance from the statue than the others, and while a certain degree of variation is acceptable, in these cases it was too extreme.

  • Excessive shadows: although lighting did not appear to be a factor in the earlier analysis, some of the images had extreme contrast, with very unnatural light. They were practically at the edge of the scale, and while some variation is acceptable, these were well beyond it.

  • Background: this is an interesting case in which the background is not different from the rest of the images, but has a very similar colour to the object itself. Because of this similarity it is difficult for the program to recognise any edges, which makes it impossible to stitch the image correctly.

Creating 3D models from tourists’ photographs is therefore entirely possible, as long as we have sufficient angles and photographs taken from a similar distance, without harsh lighting and with a contrasting background.
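These three failure categories also suggest a crude automatic pre-filter for candidate photographs. Below is a sketch using Pillow that rejects small (and therefore probably distant or low-resolution) shots and images with extreme contrast; the thresholds are illustrative guesses of mine, not values derived from 123D Catch, and background similarity still needs a human eye.

```python
from PIL import Image, ImageStat

def usable_for_stitching(path, min_w=800, min_h=800, max_contrast=80.0):
    """Crude pre-filter reflecting the failure categories above.
    Thresholds are illustrative guesses, not values from 123D Catch."""
    img = Image.open(path)
    if img.width < min_w or img.height < min_h:
        return False  # likely a distant or low-resolution shot
    grey = img.convert("L")
    if ImageStat.Stat(grey).stddev[0] > max_contrast:
        return False  # extreme contrast / harsh shadows
    return True       # object-coloured backgrounds still need a human eye

# Placeholder filenames, just to show usage:
keep = [p for p in ["img1.jpg", "img2.jpg"] if usable_for_stitching(p)]
```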

The Winged Victory of Samothrace Reconstructed Digitally Using Tourists’ Photographs

If you’ve been following my latest attempts to recreate famous monuments through photogrammetry using nothing but tourists’ photographs, I finally have something to show for your patience. Before you get your hopes up, it is still not perfect, but it’s a step forward.

The idea behind this is that 123D Catch uses photographs to create 3D models, and while generally the user takes their own photographs, it doesn’t necessarily have to be so. The internet is full of images, and while most of them seem to be of cats, there are many of famous monuments and archaeological finds. There should therefore be a way to use all these photographs found online to digitally recreate the monument in question. The problem is that, although in theory there should be no issue, in practice a great number of elements affect the final mesh, including lighting, background and editing. While the images a single user takes in a short span of time remain consistent, because minimal changes take place between them, a photograph taken in 2012 differs from one taken in 2013 with a different camera, making it hard for the program to recognise similarities. On top of this, tourists take photographs without this process in mind, so monuments are often photographed only from limited angles, making it hard to achieve 360-degree models.

In order to understand the issue better, I started working with a series of monuments, including the Sphinx, Mount Rushmore (see previous posts), Stonehenge and the Statue of Liberty. These, however, are extremely large monuments, which makes it somewhat more difficult for the program to work (although in theory all that should happen is a loss of detail, without affecting the stitching of the images). I therefore decided to apply my usual tactic of working from the small to the large, choosing a much smaller object that would still prove the idea. In this case I chose the Winged Victory of Samothrace.

The reasoning behind the choice is that, unlike other statues, there is more chance of images being taken from the sides, due to its shape. It is also on a pedestal, which means the background should remain consistent between shots, and it offers good contrast, as the shadows are amplified by the colour of the stone. I was aware that the back would probably not work, due to the lack of joining images, but I figured that capturing the front and sides would be sufficient progress.

The results can be seen here: https://sketchfab.com/show/604f938466ad41b8b9299ee692c5d9a3

As you can see, the front and most of the sides have come out with enough detail to call it a success, not least because 90% of the images taken from these angles were stitched together. The back, as predicted, didn’t appear, and there are some expected issues with the top as well. What is even more surprising is that some of the images had monochromatic backgrounds, very different from the bulk, yet these still stitched in, suggesting that background is not the key factor in these models. The lighting is relatively consistent, so this could be the main factor. As for image size and resolution, there doesn’t seem to be much of a pattern.

Overall I was very pleased with the results, and hopefully this will lead to a full 360-degree model as soon as I pinpoint an object with enough back and side images. Still, it shows that it is possible to create models from tourists’ photographs, which would be invaluable for reconstructing objects or monuments that have unfortunately been destroyed.

Reconstructing Mount Rushmore from Tourists’ Photographs

One of my first posts here was about the creation of photorealistic 3D models from tourists’ photographs, the main aim being the possibility of recreating monuments that were still standing after the invention of photography but have since been destroyed or damaged. It is also a way of testing the size limits of the technology, which previous work has pushed as far as 10 m by 10 m with relative accuracy using aerial photographs. In that post I attempted to create a model of the Sphinx, using more than 200 images found online with a simple Google search, with some interesting results.

Although it was far from complete, I managed to create slightly less than half of the Sphinx, which nevertheless allowed me to draw some conclusions. The main one was that the technology is indeed capable of making large-scale models from tourists’ photographs; the only issue is understanding what combination of images makes this possible. The images that did stitch together (only 10% of the total) didn’t seem to follow a pattern, and a series of limitations I imposed produced no improvement.

I therefore moved on to another model to understand what combination of photographs is needed. I chose around 50 photographs of the Statue of Liberty from all angles and attempted to run them, with only 5 forming a rather unsuccessful mesh. Given this small number it was hard to draw any conclusions, other than the suspicion that more photographs can improve the quality of the model. One of the main issues, though, was actually finding photographs of the whole statue: while many images of the front were available, the sides and back were practically unphotographed, and the lower part of the body was entirely ignored from some angles.

Given this, I moved on to another monument: Mount Rushmore. The reason for choosing it is that, covering only 180 degrees, it requires fewer angles to photograph. Also, the aforementioned work with aerial photography seemed to suggest that, given the depth of perspective of Mount Rushmore and the fact that its parts don’t occlude one another, all that would be needed is slight variation from a single position (as in stereophotography). I selected 40 images from a Google search, most of which appear to have been taken from the same viewing point.

As with the Sphinx, the results are not amazing, but they do narrow down the process. The mesh looks great from around the theoretical viewing point, but the more you rotate it the more warped and distorted it becomes, especially around the faces furthest away. The lack of images from other angles clearly takes its toll: the program cannot create the sides, which end up stretched. Anything visible from the viewing point appears accurate, but the rest is heavily approximated. The idea of a single angle with slight variations therefore has to be discarded.

The good thing, though, is that only 7 images failed to stitch, while 4 were stitched incorrectly. The only explanation I can see for this is the great similarity of the stitched images, which have relatively consistent lighting and colours. This may be the key to the entire process, and would explain why the Sphinx was harder to achieve, given the great differences in colour and light between its photographs. The images that were placed wrongly were taken from a greater distance than the others, which suggests that a relatively consistent distance may be another factor, and the main reason for the failure of the Statue of Liberty.

Finally, the fact that this was done with only 40 images, against the 200 used for the Sphinx, shows that it is not the number of images that affects the model, but their quality and consistency.

The results are of course not yet acceptable for either of the aims of this idea, and even though I am slowly narrowing down what is needed to make it work, many more tests are required. The next step is the creation of an Access database to try to pin down the key elements that may affect the results.
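For anyone without Access, the equivalent table can be sketched in SQLite using Python’s standard library; the columns simply mirror the attributes tracked so far, and the names are my own.

```python
import sqlite3

con = sqlite3.connect("photo_tests.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS images (
        id            INTEGER PRIMARY KEY,
        monument      TEXT,     -- e.g. 'Sphinx', 'Mount Rushmore'
        width         INTEGER,  -- pixels
        height        INTEGER,  -- pixels
        file_size     REAL,
        position      INTEGER,  -- 1-8 compass scheme described above
        background_ok INTEGER,  -- booleans stored as 0/1
        colour_ok     INTEGER,
        lighting_ok   INTEGER,
        stitched      INTEGER
    )
""")
con.commit()
con.close()
```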
