Emphasising Inscriptions Using 123D Catch

One of the most interesting projects I have been working on over the past few months has been trying to emphasise inscriptions on artefacts using Photogrammetry. The theory is that if the model is accurate enough, it should be possible for a program to detect the subtle bumps in a surface and exaggerate them in order to make them easier to identify.


My first attempt was with a lead stopper (which I have posted about before), which appeared to have some form of writing inscribed on its surface. Having made a model using 123D Catch, I ran it through Meshlab and tested many different filters to see if any of them gave me different results. One in particular seemed to do exactly what I wanted, changing the colour of the texture based on the form of the surface. This is the Colourise curvature (APSS) filter, with a very high MLS filter scale (I went for 50000) and using Approximate mean as the curvature type. The results showed some form of inscription, clearer than in the photographs, but still quite murky.
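
For anyone who would rather script this step than click through the GUI, here is a minimal sketch using pymeshlab (MeshLab's Python bindings); the filter and parameter names are my assumptions based on the GUI labels and may differ between versions:

```python
# A minimal scripted version of the Meshlab step, using pymeshlab.
# Filter and parameter names are assumptions based on the GUI labels
# and may differ between versions.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh('lead_stopper.obj')  # hypothetical export from 123D Catch

# "Colourise curvature (APSS)" with a very high MLS filter scale and
# Approximate mean as the curvature type, mirroring the settings above.
ms.apply_filter('colorize_curvature_apss',
                filterscale=50000,
                curvaturetype='ApproxMean')

ms.save_current_mesh('lead_stopper_curvature.ply')
```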


In addition to these scarce results, other tests with different artefacts pointed towards a lucky coincidence rather than an actual method.

One of the main issues I was having was that Meshlab kept crashing when using certain filters or shaders, which meant I could only test some of the possible options. So when I made a model of a series of modern graffiti at Cardiff Castle, the results were also disappointing.

The other day, though, I happened to update my copy of Meshlab, and also bumped into this interesting article, which gave me the exact solution I was looking for: http://www.dayofarchaeology.com/enhancing-worn-inscriptions-and-the-day-of-archaeology-2012/

One of the shaders that I had tried using, but that crashed every time, was the Radiance Scaling shader, which does exactly what I was aiming to do. I ran the graffiti model through it and the results are amazing. The original model stripped of its textures is a very unclear, blurry surface, but after Radiance Scaling the graffiti come to life. The exact mechanics behind this tool, and hence its accuracy, are something I am still working on, but at least visually the results are there.
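
My working understanding of radiance scaling is that it adjusts the intensity of the shading according to surface curvature, so grooves and ridges get exaggerated instead of washed out. Here is a toy sketch of that idea, very much my own simplification rather than the actual shader:

```python
# A toy numpy illustration of the idea (not the actual Meshlab shader):
# shading is dimmed in concave areas such as incised lines and boosted on
# convex ridges, so surface detail stands out.
import numpy as np

def radiance_scaling(lambert, curvature, alpha=2.0):
    """Scale per-vertex diffuse shading by a curvature-dependent factor.

    lambert   -- plain diffuse shading values in [0, 1]
    curvature -- signed mean curvature per vertex (negative = concave)
    alpha     -- strength of the effect (an assumed knob, not the shader's)
    """
    factor = 1.0 + alpha * np.tanh(curvature)
    return np.clip(lambert * factor, 0.0, 1.0)

# A flat patch, an incised groove and a raised ridge, all equally lit:
lambert = np.array([0.8, 0.8, 0.8])
curvature = np.array([0.0, -0.5, 0.5])
print(radiance_scaling(lambert, curvature))  # ~[0.8, 0.06, 1.0]
```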


If this proves to be an effective method, it would suggest that Photogrammetry is not simply technology for technology’s sake, but that it can actually be used to interpret archaeology in ways that were not available before.

Edit: I’d also like to draw attention to the fact that this model was made entirely using my iPhone.

Roman Villa Reconstructed In 3D

Based on the plan of an actual Roman villa, this is a fly-through of the model. It’s a way to explore this living area and get a more authentic feel of what it would have been like to actually live in Roman times.
The model was made using Google Sketchup, and the finished project will see furniture and details added in to make it even more realistic. This, however, is the building at present, showing how archaeology can be brought to life using 3D modelling software.
A more detailed account of how this model was made can be found earlier on this website.

Ham Hill Iron Age Skeletons Turn Digital


Three of the skeletons found at the site of Ham Hill, Somerset during the 2013 excavation are now available to view online at the following links:

https://sketchfab.com/show/70d864c4736435710bc65b6f21d81c03

https://sketchfab.com/show/821565c7ce0b98e1b7764c73a9f07492

https://sketchfab.com/show/fa694aff0fb5949e2f396a5fb2da37b0

The skeletons were discovered during this year’s excavation carried out by the Cambridge Archaeological Unit and Cardiff University, at the site of an important Iron Age hill fort. They are only some of the many human remains found, some of which were carefully placed within a number of pits, while others were located symbolically within an enclosure ditch, often with body parts missing.


The models themselves were made using Photogrammetry, specifically 123D Catch, which produced quite good quality in very little time. The aim was to preserve a record of the skeletons in situ, for further interpretation once they had been removed from the location in which they were discovered.

Given the complexity of the subject, the results seem very promising. Time was a big factor when making the models, as the skeletons had to be removed before damage occurred. Some of them were also in tight spots, which made it hard to access some of the standard angles, but overall this was counterbalanced by taking a larger number of photographs than normal (around 30 per skeleton). The lighting conditions also proved ideal: an overcast sky, but with sufficient sunlight coming through to really emphasise the surfaces.


For further information on the skeletons I suggest reading the article at http://www.bbc.co.uk/news/uk-england-somerset-23928612

Virtual Museums: Combining 3D Modelling, Photogrammetry and Gaming Software

I wrote the post below last night, but since then I’ve managed to create at least part of what is described in the text, which is shown in the video above. So keep in mind that the rest of the post may differ slightly from what is in the video.

One of the more popular posts I’ve published seems to be the one about public engagement at Caerau, South Wales, in which I created an online gallery with the clay “Celtic” heads school children made. The main concept I was analysing in that text was the idea that we could create digital galleries in which to display artefacts.

When I wrote the word gallery I imagined the computer definition: a collection of images (or in this case models) within a single folder. However, I have since found this: http://3dstellwerk.com/frontend/index.php?uid=8&gid=18&owner=Galerie+Queen+Anne&title=1965%2C85%C2%B0C

This is an example of what the website http://3dstellwerk.com offers: an opportunity for artists to create a virtual space in which to display their work. It allows users to “walk” through the gallery and view the 2D artwork as if it were an actual exhibition. Although the navigation may require a little improvement, it is a brilliant idea for making art more accessible to people.

[Image: Virtual Museum]

This idea, however, could easily be adapted for archaeology using Photogrammetry. By making models of a selection of artefacts with 123D Catch, we can place them within a virtual space created with our 3D software of choice, and then animate it using gaming software such as Unity 3D, which would allow user interaction. A large-scale project could even allow the objects to be clicked to display additional information, or have audio to go with each artefact. Video clips could also be incorporated within the virtual space.
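
Whatever engine ends up being used, the information attached to each artefact could live in a simple data file that the virtual museum loads at start-up. A minimal, engine-agnostic sketch, with every file name and field being hypothetical:

```python
# An engine-agnostic sketch of a virtual-museum catalogue. Every file name
# and field here is hypothetical.
import json

catalogue = [
    {
        "id": "artefact_01",
        "model": "models/artefact_01.obj",  # exported from 123D Catch
        "title": "Decorated pot sherd",
        "description": "A short interpretive text shown on click.",
        "audio": "audio/artefact_01.mp3",   # optional narration clip
        "position": [2.0, 0.0, -3.5],       # placement within the gallery
    },
]

with open("museum_catalogue.json", "w") as f:
    json.dump(catalogue, f, indent=2)

# A game engine such as Unity 3D could read this file, place each model at
# its position, and show the title and description (or play the audio)
# whenever the object is clicked.
```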

[Image: Virtual Museum 2]

On an even larger scale this could mean we can create online museums, available to all and with specific goals in mind. As we are talking about digital copies of objects, it would be possible to group a number of significant objects in a single virtual space without having to physically remove them from their original locations.

The only problem we may encounter with this idea is file size: each photogrammetric model is relatively small and manageable, yet if we want a decent-sized virtual museum we are going to need a large amount of data. Still, even if the technology at present is not quite capable of dealing with the bulk, the rate at which it is improving should make such ideas doable in the near future.
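
One practical way of keeping the bulk down would be to decimate each model before it goes into the museum. Here is a sketch of how that could be scripted with pymeshlab’s quadric edge collapse filter; the filter and parameter names are assumptions and change between pymeshlab versions:

```python
# A sketch of shrinking a photogrammetric model before it goes online,
# using pymeshlab's quadric edge collapse decimation. Filter and parameter
# names are assumptions and vary between pymeshlab versions.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh('artefact_full.obj')  # hypothetical full-resolution model

ms.apply_filter('simplification_quadric_edge_collapse_decimation',
                targetfacenum=20000,   # aim for roughly 20k faces
                preservenormal=True)

ms.save_current_mesh('artefact_web.obj')
```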

[Image: Virtual Museum 3]

The Winged Victory of Samothrace: Analysis of the Images

This is a continuation of yesterday’s blog post, so I suggest you read that first. One of the things I’ve been working on for the past month is creating photogrammetric models of monuments using nothing but tourists’ photographs. After many attempts, my latest test, using the Winged Victory of Samothrace as a subject, seemed to give sufficiently good results, so much so that I decided to analyse the image data to pinpoint what kind of photographs actually stitch together in 123D Catch, and which ones give problems. This way we can choose a limited number of good photographs, rather than a vast number of mixed ones.


In order to understand the patterns behind the stitching mechanism, I created an Excel file in which I recorded certain details for every image: width and height, whether the background is consistent with the majority of the photographs, whether the colour is consistent, whether the lighting is the same, the file size, and the position of the object. The last one is based on the idea that to make 3D models we need series of shots from 8 positions, each at a 45-degree angle from the next. Thinking of it like a compass, with North corresponding to the back of the object, position 1 is South, 2 is SW, 3 is W, and so on round to position 8 at SE.
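
To make the scheme explicit, here is a small helper that maps each position number to its compass bearing (my own illustration of the numbering above):

```python
# A small helper making the numbering explicit: position 1 faces the statue
# from the South, and each subsequent position is 45 degrees further round,
# so position 5 is the back (North).
def position_to_bearing(position):
    """Map shooting position 1-8 to a compass bearing in degrees (0 = North)."""
    assert 1 <= position <= 8
    return (180 + 45 * (position - 1)) % 360

names = {0: 'N', 45: 'NE', 90: 'E', 135: 'SE',
         180: 'S', 225: 'SW', 270: 'W', 315: 'NW'}

for p in range(1, 9):
    print(p, names[position_to_bearing(p)])
# 1 S, 2 SW, 3 W, 4 NW, 5 N, 6 NE, 7 E, 8 SE
```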

The first thing I noticed, which ties in with what I said yesterday, is the lack of photographs in positions 4 and 6 (NW and NE), which of course meant that all the images from the back (position 5) also had trouble stitching. The first problem for the model is therefore the need for enough images from all angles, without which missing parts are inevitable. This is made harder by the fact that these are positions tourists typically do not photograph.

Having concluded that images from positions 4, 5 and 6 had this reason for not stitching, I removed them from the list so the data for the others would be more accurate.

I then averaged the height, width and file size of the stitched images and of the unstitched images, and compared them. The former had an average height of 1205.31 pixels, a width of 906.57 pixels and a file size of 526.26 KB, while the latter had a height of 929.07, a width of 668.57 and a file size of 452.57. The differences are enough to suggest that large, good-quality images have a higher chance of being stitched. This may seem obvious, but some images that did not stitch are larger and of higher quality than some that did, so this can’t be the only factor. Also, the difference isn’t large enough to suggest it is even a key factor.
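
For anyone wanting to replicate the comparison, this is roughly what it amounts to if the Excel sheet is read with pandas; all the column names are hypothetical ones of my own:

```python
# Roughly what the comparison amounts to, assuming the Excel sheet is read
# with pandas. All column names are hypothetical ones of my own.
import pandas as pd

df = pd.read_excel('samothrace_images.xlsx')

# Leave out positions 4-6, which failed for lack of coverage, not quality.
df = df[~df['position'].isin([4, 5, 6])]

# Average height, width and file size for stitched vs unstitched images.
print(df.groupby('stitched')[['height', 'width', 'file_size']].mean())
```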


The next step was comparing the percentage of stitched images with an abnormal background, colour or lighting against the unstitched ones, which gave even more limited results. In the former, the background was not consistent in 15% of the images, the colour in 63% and the lighting in 47%; in the latter, background was 0%, colour 50% and lighting 50%. Oddly, these results suggest that photographs have a higher chance of being included if they are different from the rest, which goes against everything we know so far about 123D Catch. I therefore suspect that the program allows a good deal of tolerance when it comes to these elements, and that I may have been too harsh in judging the differences.

Having concluded little with this methodology, I decided to look at each individual unstitched image to see what elements could be responsible. This proved much more successful. I managed to outline three categories of unstitched images which account for every single one of the photographs, without any overlap with the stitched ones:

  • Distant shots: Some photographs were taken at a much greater distance from the statue than others, and while a certain degree of variation is acceptable, in these it was too extreme.


  • Excessive shadows: Although, as we’ve seen, lighting didn’t appear to be a decisive factor overall, some of the images had extreme contrast, with very unnatural light. They were practically at the edge of the scale, and while some variation is acceptable, these were well beyond it.


  • Background: this is an interesting case in which the background is not different from that of the other photographs, but has a very similar colour to the object itself. Because of this similarity it is difficult for the program to recognise any edges, which makes it impossible to stitch the image correctly.

Therefore creating 3D models from tourists’ photographs is entirely possible, as long as we have sufficient angles, photographs from a similar distance, without harsh lighting and with a contrasting background.

Google Sketchup and Archaeology: Reconstructing the Parthenon

[Image: Parthenon Rendered 6]

As part of my second year studying archaeology in Cardiff, I was required to write 5000 words on a topic of my choosing for a project called an Independent Study. Having only recently completed the first two models I ever made for a documentary on the medieval site of Cosmeston, South Wales, I decided it would be a great idea to further investigate this aspect of archaeology. I therefore decided to write about the use of new media in archaeology, and look into 3D modelling, photogrammetry and documentary making.

As I wanted to try out these new programs as much as possible I decided to test myself with something large and complex, yet well-known enough to allow me access to a large database of plans, images and measurements. For this reason I chose to reconstruct the Parthenon, using a great program called Google Sketchup.


Google Sketchup is still my number one choice when dealing with reconstruction from plans, due to the simplicity of the program and the great results that can still be achieved. For less experienced users the simple mechanism of pushing and pulling surfaces is ideal, and it’s easy to pick up how it all works thanks to the user-friendly interface.

The great advantage of reconstructing the Parthenon was that I could copy and paste most of the features, which meant I didn’t have to create every single column and brick from scratch. But it also meant I had to quickly learn one of the key elements of Sketchup: making components. By making components you don’t end up with thousands of lines, each a separate entity, but with a series of elements that work as a whole, meaning you can easily select them and copy/paste, move or rotate them. It also means they don’t drag everything they are connected to along with them when you place them somewhere else. Hence I quickly learnt that having a series of loose lines to select and copy each time is much less efficient than having the same lines as a separate “roof_tile” component which you can copy with two clicks. I also saw the great advantage of this when making my second model, a Greek house, for which I simply imported the Parthenon roof component and edited it to make it smaller, rather than making the roof from scratch.


The second thing I soon found out about the Parthenon is that it’s a large thing made of small things. For example, the ceiling wasn’t a big mass of stone, but more likely a decorated series of boxes and beams, which I couldn’t for the life of me create while I had the rest of the walls in place. Hence I started doing the sneaky thing of editing parts of the Parthenon in a work area far away from the actual model, then making them into a component, moving them into position and scaling them to fit. The result was as good as if it had been built in place, but with far fewer issues.


Thirdly, experimenting. This is what makes you a good 3D modeller, whether the work is archaeological or not. It’s all well and good following a set of guidelines, but what happens when you rotate a corner, or draw a square on a brick and pull it out? 9 times out of 10, when I tried something new I ruined the Parthenon, so I pressed Ctrl+Z or reopened the save file and tried something else. 1 time out of 10 I discovered a new and amazing trick, which would save me hours of work or make the model better. The more curious you are with 3D modelling, the more you learn.

Finally, reconstructing something the size of the Parthenon showed me that archaeology and 3D modelling really do work hand in hand. If a teacher is explaining to a class what the Parthenon looked like, why not show it in 3D? If we are debating what it would have looked like when it was painted, the paint tool in Sketchup instantly shows the results. If we are wondering about light and darkness levels within the inner chamber, V-Ray for Sketchup allows us to try it out for any day of the year or time of day.

Therefore, here is the link to the finished model (even though Sketchfab does it no justice at all): https://sketchfab.com/show/4EKCWxne5OUE4rBj2QRJbsHNj0L


And if you are interested in starting modelling, but are not sure about it, download Sketchup and give it a go! I assure you it is easy and entertaining, and you’ll learn modelling in no time.

Photogrammetric Pottery Reassembling (PPR) Preview


This is part of a little project I’ve been working on, and which I’m hoping to write an article about soon. What you see here is a pot which was originally in fragments, and which I’ve reassembled virtually in order to avoid having to glue it together.

I used 123D Catch to digitise the pieces of pot, and Blender to reassemble them and correct some of the textures.
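
For those curious about the Blender side, the scripted equivalent of importing and placing the fragments looks roughly like this; file names and transforms are made up, and the operator shown is the OBJ importer from Blender 2.x/3.x:

```python
# The scripted equivalent of the Blender step, run from Blender's Python
# console. File names and transforms are made up; bpy.ops.import_scene.obj
# is the OBJ importer from Blender 2.x/3.x.
import bpy

fragments = ['sherd_01.obj', 'sherd_02.obj', 'sherd_03.obj']

for path in fragments:
    bpy.ops.import_scene.obj(filepath=path)   # one 123D Catch model per sherd
    sherd = bpy.context.selected_objects[0]
    # In practice each fragment is nudged and rotated into place by eye;
    # this is the scripted form of a single (made-up) placement.
    sherd.location = (0.0, 0.0, 0.0)
    sherd.rotation_euler = (0.0, 0.0, 0.0)
```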

The most fragments I’ve been able to put together so far is nine, but there is certainly the potential to bring much larger and more complex pots back to life.

The aim of this method is to preserve the pottery as much as possible as well as provide an online copy of the object that is visible to the general public, thus making archaeology even more accessible to all.
