Recreating Tower Of London Graffiti Using Photogrammetry

Last weekend I visited the Tower of London, which gave me a great opportunity to try out some of the Photogrammetry ideas I have been mulling over in the past few weeks.

Image

Apart from testing 123D Catch on large monuments and entire palace façades, I decided (thanks to a suggestion from Helene Murphy) to try modelling some of the graffiti made by the prisoners held there. The main aim was to see if I could create models from photographs taken in dimly lit rooms, but also to see how the program would react to glass, as the inscriptions are covered by panes to protect them. I also wanted to retest the Radiance Scaling tool in Meshlab, to increase the crispness of the models and see whether it improved their accuracy.

I concentrated on three different pieces of graffiti, which can be viewed here (embedded thanks to the suggestion of Alban):

 

Tower Of London – Graffiti 1 (click to view in 3D)

Graffiti 1

 

Tower Of London – Graffiti 2 (click to view in 3D)

Tower Of London - Graffiti 2

 

Tower Of London – Graffiti 3 (click to view in 3D)

Tower of London - Graffiti 3

The models turned out better than expected. The glass is entirely invisible; it required some planning when taking the photos, but caused no problems in the modelling. This is particularly good news, as it means it should be possible to apply the same approach to artefacts displayed behind glass in museums. The lighting conditions didn't cause any of the issues that might have been expected, showing once more the versatility of Photogrammetry.

Running the Radiance Scaling shader in Meshlab also returned really interesting results. In all cases the models became much more aesthetically pleasing, while at the same time the curves and dips were emphasised, increasing the crispness of the model. Although this seems to me a forced emphasis that could reduce accuracy, the results so far suggest it may actually increase it in some respects. This, however, needs to be explored further.
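As far as I understand it, the shader scales the shading intensity as a function of surface curvature, so ridges and grooves stand out more. Here is a toy 1-D sketch of that idea in Python (my own simplification, with made-up numbers; the real shader works on screen-space normals, not on a height profile like this):

```python
# Toy illustration of the radiance-scaling idea: boost or cut a base shading
# value depending on local curvature, so incised marks become more visible.

def curvature(heights):
    """Discrete curvature proxy: second difference of a height profile."""
    return [heights[i - 1] - 2 * heights[i] + heights[i + 1]
            for i in range(1, len(heights) - 1)]

def radiance_scale(shading, heights, alpha=2.0):
    """Scale base shading by a curvature-dependent factor, clamped to [0, 1]."""
    curv = curvature(heights)
    return [max(0.0, min(1.0, s * (1.0 + alpha * c)))
            for s, c in zip(shading[1:-1], curv)]

# A flat surface with one incised groove, lit uniformly at intensity 0.5:
# after scaling, the groove stands out from the flat background.
profile = [0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0]
flat_shading = [0.5] * len(profile)
enhanced = radiance_scale(flat_shading, profile)
```

The flat areas keep their original intensity while the groove is pushed towards the extremes, which is (very roughly) why the graffiti "come to life" once the shader is applied.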

Image

Emphasising Inscriptions Using 123D Catch

One of the most interesting projects I have been working on over the past few months has been trying to emphasise inscriptions on artefacts using Photogrammetry. The theory is that if the model is accurate enough it should be possible for a program to determine the different bumps in a surface and exaggerate them in order to make them easier to identify.

Image

My first attempt was with a lead stopper (which I have posted about before), which appeared to have some form of writing inscribed on its surface. Having made a model using 123D Catch, I ran it through Meshlab and tested many different filters to see if any of them gave me different results. One in particular seemed to do exactly what I wanted, changing the colour of the texture based on the form of the surface: the Colourise curvature (APSS) filter, with a very high MLS spherical parameter (I went for 50000) and Approximate mean as the curvature type. The results showed some form of inscription, clearer than in the photographs, but still quite murky.
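To show the idea behind this kind of filter (though not the actual APSS algorithm, which fits algebraic sphere surfaces to the point set), here is a toy Python sketch that colours a height grid by a discrete curvature estimate; the grid and values are made up for illustration:

```python
# Toy version of "colour from surface shape": estimate curvature at each cell
# of a height grid with a 5-point Laplacian, then map it to a grey level.

def mean_curvature_proxy(grid):
    """Approximate curvature at interior cells via a 5-point Laplacian."""
    rows, cols = len(grid), len(grid[0])
    curv = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            curv[r][c] = (grid[r - 1][c] + grid[r + 1][c] +
                          grid[r][c - 1] + grid[r][c + 1] -
                          4 * grid[r][c])
    return curv

def colourise(curv):
    """Map curvature to a 0-255 grey level, with 128 meaning flat."""
    flat = [v for row in curv for v in row]
    scale = max(1e-9, max(abs(v) for v in flat))
    return [[int(round(128 + 127 * v / scale)) for v in row] for row in curv]

# A 5x5 flat patch with a single incised point at the centre: the incision
# and its rim get extreme grey values, while the flat area stays mid-grey.
patch = [[0.0] * 5 for _ in range(5)]
patch[2][2] = -1.0
colours = colourise(mean_curvature_proxy(patch))
```

Flat regions come out mid-grey while the incision and its edges are pushed towards black and white, which is why even a murky inscription becomes easier to pick out than in the original photographs.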

Image

In addition to these scant results, other tests with different artefacts pointed towards a lucky coincidence rather than an actual method.

One of the main issues I was having was that Meshlab kept crashing when using certain filters or shaders, which meant I could only test some of the possible options. So when I made a model of a series of modern graffiti at Cardiff Castle, the results were also disappointing.

The other day, though, I happened to update Meshlab, and also came across this interesting article, which gave me exactly the solution I was looking for: http://www.dayofarchaeology.com/enhancing-worn-inscriptions-and-the-day-of-archaeology-2012/

One of the shaders I had tried to use, but which crashed every time, was the Radiance Scaling tool, which does exactly what I was aiming for. I ran the graffiti model through it, and the results are amazing. Stripped of its textures, the original model is a very unclear, blurry surface, but after Radiance Scaling the graffiti come to life. The exact mechanics behind this tool, and hence its accuracy, are something I am still working on, but at least visually the results are there.

Image Image

If this proves to be an effective method, it would suggest that Photogrammetry is not simply technology for technology’s sake, but it can actually be used to interpret archaeology in ways not available before.

Edit: I’d also like to draw attention to the fact that this model was made entirely using my iPhone.

Textures in Photogrammetry: How They Can Deceive Us

One of the advantages I find in Photogrammetry is that, unlike other methods such as laser scanning and conventional 3D recording, the results are photorealistic. The models look natural and precise thanks to the textures associated with the points, and aesthetically they are much more pleasing. It is also excellent for archaeological recording, as all colours and shades are preserved as they originally appeared.

However, the more I experimented with this great technology, the more I realised that, as useful as it is, the textures sometimes give us the wrong idea of what is going on. Generally this causes no problems at all, but in some situations I have found myself trying to get things to work and failing because of a false perception I had of the model.

A good example of this is these graffiti from Cardiff Castle.

Image

The model itself is not great. It was made with my iPhone camera, and I didn't even use the maximum quality setting in 123D Catch, but it does help me prove the point.

I spent quite a lot of time testing a theory of mine, which I've written about before: theoretically, using a filter or renderer on a photogrammetric model, it should be possible to emphasise inscriptions in order to make them clearer. I was originally trying the APSS filter, but I recently read articles suggesting the use of Radiance Scaling in Meshlab (I'm still experimenting with this, but results seem positive so far). I therefore decided to test a number of filters on the graffiti model, as the ridges of the letters appeared to have been handled very well by the program. Even when rotating the model, the textures gave me the illusion that quite a few points had been found around each letter, points that I could manipulate to my advantage.

Having played around for some time with no real results, I decided to try removing the textures to manipulate the shadows, but once I had done so I noticed that the beautiful ridges that had appeared previously disappeared along with the textures, like so:

Image Image

Looking more closely, I noticed that the points that had been found were distributed essentially evenly, rather than surrounding the letters as I had first thought. The filters were not working simply because there was nothing for them to work on.
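One quick way to test the "points cluster around the letters" impression is to compare point density inside and outside the letter region. A small Python sketch of that check (the grid of points and the window bounds are made-up stand-ins for my model's point cloud):

```python
# Compare point density inside a region of interest against the whole area:
# if the two are equal, the points are uniform and carry no extra detail there.

def density(points, xmin, xmax, ymin, ymax):
    """Points per unit area inside an axis-aligned 2-D window."""
    inside = sum(1 for x, y in points
                 if xmin <= x <= xmax and ymin <= y <= ymax)
    return inside / ((xmax - xmin) * (ymax - ymin))

# A uniform 10x10 grid of points standing in for the model's point cloud.
points = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]
letter_area = density(points, 2, 5, 2, 5)    # hypothetical letter bounds
whole_area = density(points, 0, 10, 0, 10)   # the whole surface
```

In my case the equivalent comparison would have shown equal densities straight away, saving the time I spent running filters on geometry that was not actually there.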

Image Image

So even though the textures were providing much-needed aesthetic appeal, and helping with the recording, for more specific tasks they only got in the way, creating illusions.

This, however, does not in any way limit the technology. The case shown was an extreme one, in which the circumstances produced a lack of points. Most models have a much larger number of points and greater accuracy, which makes the technique suitable for most situations. But when pushing the limits of the technology, it is important to remember that in the end the model is just a collection of points, and that these points are where the potential lies.

Program Overview: Meshlab

Image

When writing my third-year dissertation a few months ago, I analysed a basic method of creating photogrammetric models using 123D Catch, and when it came to discussing the subsequent editing of the models, the program I turned to was Meshlab.

I’d originally come across this program when looking at the Ducke, Score and Reeves 2011 article, as part of a more complex method for actually creating the models, but had concluded that it would be much more useful as a separate processing step. This is due to the vast variety of tools and filters the program offers, which surpassed those of any other freeware 3D program I knew. I speak in the past tense because I’ve since changed my mind, at least in part.

Image

But before talking about the problems, I’ll go through the advantages:

  • Easy interface: Simple operations are easy to perform in Meshlab, and the navigation is efficient (the zoom is counterintuitive, but easy to adapt to). Loading models is simple, a wide range of file types is supported, and switching between points and surfaces takes a single button click.
  • Nice lighting: Far from complex or complete, the lighting tool is somewhat primitive, but possibly for the best. Programs like Blender or Sketchup drive me insane when I just want to emphasise a certain area, while Meshlab has a light you can rotate to show particular elements. It also makes the model’s edges come to life, increasing the contrast and bringing out things that are hard to see otherwise. When I made a photogrammetric model of a Zeus head statue, some of the other archaeologists with me suggested, looking at the Meshlab lighting effect, that it may well be a Neptune head instead.
  • Alignment tool: This may be down to my ignorance of Blender, but I still prefer the way Meshlab glues two models together, by selecting four matching points on each. It adapts the size and orientation automatically, with good results.
  • Great filters: I wrote an article about using the APSS filter to bring out inscriptions, and there are many more available, with references to documentation about them. Some are a bit useless for what I’m trying to achieve, but still not bad at all.

Image

  • Free: Meshlab is entirely open source, which is great for making Photogrammetric models. Compared to commercially available programs, it still does what you need it to do, without the cost.
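The four-point gluing mentioned above amounts to fitting a similarity transform (scale, rotation and translation) to the picked point pairs. As a rough sketch of the underlying maths, here is my own simplified least-squares version in 2-D; Meshlab itself works in 3-D, and the example points are invented:

```python
import math

# Fit a 2-D similarity transform (scale + rotation + translation) to matched
# point pairs, in the spirit of Meshlab's point-based alignment.

def fit_similarity(src, dst):
    """Return (scale, angle, tx, ty) mapping src points onto dst points."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    a = b = var = 0.0                      # "dot", "cross" and variance terms
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx -= csx; sy -= csy; dx -= cdx; dy -= cdy
        a += sx * dx + sy * dy
        b += sx * dy - sy * dx
        var += sx * sx + sy * sy
    angle = math.atan2(b, a)               # best rotation angle
    scale = math.hypot(a, b) / var         # best uniform scale
    tx = cdx - scale * (math.cos(angle) * csx - math.sin(angle) * csy)
    ty = cdy - scale * (math.sin(angle) * csx + math.cos(angle) * csy)
    return scale, angle, tx, ty

def transform(t, p):
    """Apply a (scale, angle, tx, ty) similarity to a 2-D point."""
    s, ang, tx, ty = t
    c, si = math.cos(ang), math.sin(ang)
    return (s * (c * p[0] - si * p[1]) + tx, s * (si * p[0] + c * p[1]) + ty)

# Four matching points picked on two "models": the second copy is twice the
# size, rotated 90 degrees and shifted by (3, 4).
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [transform((2.0, math.pi / 2, 3.0, 4.0), p) for p in src]
t = fit_similarity(src, dst)
```

With exact correspondences the fit recovers the scale, angle and offset exactly, which is why picking four well-spread points in Meshlab is usually enough to glue two models together convincingly.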

There are however some problems:

  • No undo button: Probably not the biggest issue, but by far the most annoying. The number of times I’ve had to restart a project from scratch because of a minor mistake makes this a major flaw. Unless you save every time you do anything, you’ll find yourself in trouble.
  • Saving glitches: Quite a few times I’ve saved the final product, closed the program and opened it again, only to find it had not saved the textures. This is difficult to get around when dealing with Photogrammetric models, where the texture is essential.
  • No image files: Meshlab seems to hate all image files, which makes some scenarios difficult. For example, it is not possible to open a model and an image in the same context. I found this frustrating when gluing together features from the same trench, where I wanted to use the trench plan as a guide to their positions.
  • No real rendering: Blender is great for creating good images of the objects you’ve made, while with Meshlab all you can do is use the Windows Snipping Tool. Presentation-wise it is deficient.
  • Inefficient tools: In some tools Meshlab excels; in others it is lacking. A measuring tool exists, but in order to set a reference distance you have to go through a series of calculations by hand. The rotate tool is odd, and the transpose tool only seems to apply to a single layer.
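For reference, the calculation you end up doing by hand when setting a real-world scale is just this (a hypothetical example with invented measurements, not Meshlab code):

```python
# To give a model a real-world scale: measure a known distance on the model,
# divide the real length by the measured one, and scale every vertex by that.

def scale_factor(real_length, measured_length):
    """Factor converting model units into real-world units."""
    return real_length / measured_length

def rescale(vertices, factor):
    """Apply a uniform scale to a list of (x, y, z) vertices."""
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# The measuring tool reports 2.5 model units between two points that are
# 0.5 m apart in reality, so one model unit is 0.2 m.
f = scale_factor(0.5, 2.5)
verts = rescale([(0.0, 0.0, 0.0), (2.5, 0.0, 0.0)], f)
```

It is trivial arithmetic, which is exactly why it is frustrating that the program does not do it for you.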

Overall, Meshlab can get the job done, and at some things it works better than other programs such as Blender. However, the issues that stem from it being freeware make it hard to use. I would still recommend it, but where it can be avoided, that’s probably for the best.

Image

References:

Ducke, B., Score, D. and Reeves, J. (2011) Multiview 3D Reconstruction of the Archaeological Site at Weymouth from Image Series. Computers & Graphics, 35, pp. 375–382.

Glastonbury Ware Pot – Ham Hill

This is one of the first attempts I made with Photogrammetry, and probably one of the ones I am happiest with. It is a beautiful pot found during the 2011 excavation, which was glued back together to show how it would have looked before it broke. I made the model from around 60 images with 123D Catch, in good natural light that brought out the contrasts well, especially in the decoration.
What I am extremely happy with is that I was able to capture both sides, something I had struggled to do before, which was probably helped by the large number of images.
The animation itself was made using the tool in 123D Catch, which is ideal for displaying the model, although it is hard to create a non-wavy video, as this one shows. Still, in the absence of suitable programs, or of browsers updated for Sketchfab, it is an ideal tool, as the video can be uploaded to YouTube and shared with anyone interested.
As an addition though the model can also be viewed at the following link: https://sketchfab.com/show/8ABDov7xS8kV8mfbGMuQkFOmjE3

DSC_0286

Potential Method to Emphasise Inscriptions with 123D Catch?

Image

Although this requires much more work and more tests with different objects, it could be an interesting method of better distinguishing inscriptions on coins or small objects.

The picture above shows a Medieval lead stopper or weight found at the site of Caerau, Cardiff, made into a 3D model using 123D Catch and then run through a filter in Meshlab (all freeware).

Meshlab itself is an interesting but somewhat limited program, but the Colorize Curvature (APSS) filter seems promising, as it changes the texture colour based on the inclination of the surface. The papers describing the filter are extremely technical, which at the moment makes it difficult for me to understand the process in detail, but with an extremely high MLS spherical parameter (even 10000000000) and Approximate mean as the curvature type, the results above come out.

The stopper seems to bear three letters on the top (EXQ?) and maybe something at the bottom too.

At present the only other object I’ve tried is a piece of decorated pot, but the natural curve of the object itself made it hard to achieve anything. However, the way the filter works does suggest a genuine correlation between the inscription above and the filter’s output, rather than a random effect that just happened to produce the image. Research continues…