Recreating Tower Of London Graffiti Using Photogrammetry

Last weekend I visited the Tower of London, which gave me a great opportunity to try out some of the Photogrammetry ideas I have had in the past few weeks.


Apart from testing 123D Catch out on large monuments and entire palace façades, I decided (thanks to the suggestion of Helene Murphy) to try modelling some of the graffiti that were made by the prisoners there. The main aim of this was to see if I could create models using photographs from dimly lit rooms, but also to see how it would react to glass, as the inscriptions were covered by panes to protect them. I also wanted to retest the Radiance Scaling tool in Meshlab, to increase the crispness of the models and see whether it also increased their accuracy.

I concentrated on three different pieces of graffiti, which can be viewed here (embedded thanks to the suggestion of alban):

 

Tower Of London – Graffiti 1 (click to view in 3D)


 

Tower Of London – Graffiti 2 (click to view in 3D)


 

Tower Of London – Graffiti 3 (click to view in 3D)


The models turned out better than expected. The glass is entirely invisible, which required some planning when taking the photos, but caused no problems in the modelling. This is particularly good, as it means it should be possible to apply the same concept to artefacts displayed behind glass in museums. The lighting conditions themselves didn't cause any of the issues that might have been expected, showing once more the versatility of Photogrammetry.

Running the Radiance Scaling shader in Meshlab also returned really interesting results. In all cases the models became much more aesthetically pleasing, while at the same time the curves and dips were emphasised, increasing the crispness of the model. Although this seems to me to be a forced emphasis that may reduce accuracy, the results at the moment suggest it may in some ways increase it instead. This, however, needs to be explored further.



Emphasising Inscriptions Using 123D Catch

One of the most interesting projects I have been working on over the past few months has been trying to emphasise inscriptions on artefacts using Photogrammetry. The theory is that if the model is accurate enough it should be possible for a program to determine the different bumps in a surface and exaggerate them in order to make them easier to identify.


My first attempt was with a lead stopper (which I have posted about before), which appeared to have some form of writing inscribed on its surface. Having made a model using 123D Catch, I ran it through Meshlab and tested many different filters to see if any of them gave me different results. One in particular seemed to do exactly what I wanted, changing the colour of the texture based on the form of the surface. This is the Colorize curvature (APSS) filter, with a very high MLS spherical parameter (I went for 50000) and Approximate mean as the curvature type. The results showed some form of inscription, clearer than in the photographs, but still quite murky.


In addition to these scarce results, other tests with different artefacts pointed towards a lucky coincidence rather than an actual method.

One of the main issues I was having was that Meshlab kept crashing when using certain filters or shaders, which meant I could only test some of the possible options. So when I made a model of a series of modern graffiti at Cardiff Castle, the results were also disappointing.

The other day, though, I happened to update my copy of Meshlab, as well as bumping into this interesting article, which gave me the exact solution I was looking for: http://www.dayofarchaeology.com/enhancing-worn-inscriptions-and-the-day-of-archaeology-2012/

One of the shaders I had tried using, but which crashed every time, was the Radiance Scaling tool, which does exactly what I was aiming for. I ran the graffiti model through it and the results are amazing. The original model stripped of its textures is a very unclear, blurry surface, but after Radiance Scaling the graffiti come to life. The exact mechanics behind this tool, and hence its accuracy, are something I am still working on, but at least visually the results are there.
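For those curious, the core idea behind Radiance Scaling, as I understand it, can be sketched in a few lines of Python with numpy: the reflected light is scaled up on convex areas and down on concave ones, so incisions read darker and ridges brighter. This is only a toy illustration of the principle, not the actual shader (the function name and the tanh weighting are my own):

```python
import numpy as np

def radiance_scaling_sketch(normals, curvature, light_dir, alpha=1.0):
    """Scale Lambertian shading by surface curvature.

    normals   : (N, 3) unit vertex normals
    curvature : (N,) signed curvature estimates (convex > 0, concave < 0)
    light_dir : (3,) unit vector pointing towards the light
    alpha     : strength of the curvature exaggeration
    """
    # Plain Lambertian term: how directly each vertex faces the light.
    lambert = np.clip(normals @ light_dir, 0.0, 1.0)
    # Boost the reflected radiance on convex areas and damp it on
    # concave ones, which visually deepens incisions and ridges.
    scale = 1.0 + alpha * np.tanh(curvature)
    return np.clip(lambert * scale, 0.0, 1.0)
```

With identical normals, a concave vertex comes out darker than a flat one, which is exactly the effect that makes the graffiti readable.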


If this proves to be an effective method, it would suggest that Photogrammetry is not simply technology for technology’s sake, but it can actually be used to interpret archaeology in ways not available before.

Edit: I’d also like to draw attention to the fact that this model was made entirely using my iPhone.

Textures in Photogrammetry: How They Can Deceive Us

One of the advantages I find in Photogrammetry is that unlike other methods such as laser scanning and regular 3D recording, the results are photorealistic. The models seem natural and precise due to the textures that are associated with the points, and aesthetically it is much more pleasing. In addition to this it is amazing for archaeological recording, as all colours and shades are preserved as they were originally.

However, the more I experimented with this great technology, the more I realised that, as useful as this tool is, the textures sometimes give us a wrong idea of what is going on. Generally this causes no problems at all, but in some situations I have found myself trying to get things to work and failing due to a false perception I had of the model.

A good example of this is the following graffiti from Cardiff Castle.


The model itself is not great. It was made with my iPhone camera, and I didn't even use the maximum quality setting in 123D Catch; however, it does help me prove the point.

I spent quite a lot of time trying to test a theory of mine, which I've written about before. Theoretically, using a filter or renderer on a photogrammetric model, it should be possible to emphasise inscriptions in order to make them clearer. I was originally trying the APSS filter, but I recently read articles suggesting the use of Radiance Scaling in Meshlab (I'm still experimenting with this, but results seem positive for now). Therefore I decided to test a number of filters on the graffiti model, as the ridges of the letters appeared to have been handled very well by the program. Even when rotating the model, the textures gave me the illusion that quite a few points had been found around each letter, points that I could manipulate to my advantage.

Having played around for some time with no real results, I decided to try removing the textures to manipulate the shadows, but having done that I noticed that the beautiful ridges that had appeared previously disappeared along with the textures, like so:


Looking more closely, I noticed that the points that had been found were distributed essentially evenly, rather than surrounding the letters as I had first thought. As a result, the reason the filters were not working was that there was nothing for them to work on.


So even if the textures were providing much needed aesthetic relief, and helping with the recording, for more specific tasks they only got in the way, creating illusions.

This, however, does not in any way limit the technology. The case shown was an extreme one, in which the circumstances caused a lack of points. Most models have a much larger number of points and greater accuracy, which makes them suited to most situations. However, when pushing the limits of the technology it is important to remember that in the end the model is just a collection of points, and that these points are where the potential lies.

Viewing Photogrammetric Models on iPhone/iPad


Until very recently I looked at the iPhone and the iPad with a pinch of scepticism, as I believed them to be simply less powerful laptops, mainly used for games and the occasional note-taking. I've always been a Windows user, but last month I was given an old iPhone, and I'm becoming more and more convinced of the effectiveness of Apple products, especially with regard to 3D modelling and Photogrammetry.

The first thing I tried out was the 123D Catch app, which I am very pleased with. Unfortunately I don’t have 3G, so the great advantage of being able to create models wherever I am is lost on me. Still, regardless of personal use, it is truly a plus. Also, the camera itself is good enough to get the level of detail necessary.

The one thing that got me thinking, though, was the possibility of carrying my collection of models with me, so I have something to show when talking to people about Photogrammetry. Many times in the last two months I've had to bring my laptop on site to show some results, and every time I risked getting it broken. As my phone is much easier to protect, I realised that if I could get my models onto it, I could save myself the cost of a new laptop.

Therefore I started looking through all the different types of apps available, both free and commercial. Out of all the ones I found, the one I’m most pleased with is Meshlab for iOS, which is derived from the Meshlab I use on PC.


Yesterday I went through the main flaws of the PC version, but the app is actually the best there is. It allows you to open Obj files with textures, received via mail or Dropbox as a .zip archive, and then displays them in a typical Meshlab environment. The texture support is the decisive feature, as it's what gives me a lot of problems in other apps. Also, the navigation tools are easy and intuitive, and you can change the lighting with a single tap, highlighting certain areas. Finally, it doesn't require an internet connection, which is ideal for my iPhone.
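For anyone wanting to try it, building the archive is easy to script. This is a small Python sketch (the helper name, and the assumption that the .mtl and textures sit next to the .obj, are my own) that zips an .obj together with its material file and any textures the .mtl references:

```python
import zipfile
from pathlib import Path

def bundle_for_meshlab_ios(obj_path, out_zip):
    """Bundle an .obj with its .mtl and texture images into one .zip,
    ready to send to Meshlab for iOS via mail or Dropbox.

    Assumes the .mtl and textures sit next to the .obj and that the
    textures are referenced by name in the .mtl (map_Kd lines).
    """
    obj_path = Path(obj_path)
    files = [obj_path]
    mtl = obj_path.with_suffix(".mtl")
    if mtl.exists():
        files.append(mtl)
        # Collect the texture images referenced by the material file.
        for line in mtl.read_text().splitlines():
            if line.strip().lower().startswith("map_kd"):
                tex = obj_path.parent / line.split()[-1]
                if tex.exists():
                    files.append(tex)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f, arcname=f.name)  # flat archive, no folders
    return [f.name for f in files]
```

The flat archive (no folders) matters, as the app expects the .obj, .mtl and textures to sit side by side.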

The only disadvantages I can see are that it does crash when opening large files, which is rarely a problem but annoying in some cases, and that the contrast is too high. The shadows it creates make the models seem less natural than they should be, and there is no way to remove them. Although not a major issue, it does make the models lose something; I'm guessing future updates will improve this. Finally, there is no way to sort files into folders, which could be awkward if you have many different models.

I shall continue investigating apps and see what I can find. By the looks of it there is a lot of potential for 3D modelling and archaeology awaiting me.

Program Overview: Meshlab


When writing my third year dissertation a few months ago I analysed a basic method of creating photogrammetric models using 123D Catch, and when it came to discussing the later editing of the models the program I turned to was Meshlab.

I'd originally come across this program when looking at the Ducke, Score and Reeves (2011) article, as part of a more complex method of actually creating the models, but had concluded that it would be much better used as a separate process. This is due to the vast variety of tools and filters the program employs, which surpassed those of any other freeware 3D program I knew. I speak in the past tense because I've since changed my mind, at least in part.


But before talking about the problems, I’ll go through the advantages:

  • Easy interface: Simple operations are easy to do in Meshlab, and the navigation is efficient (the zoom is counterintuitive, but easy to adapt to). Loading models is simple, a lot of file types can be used, and changing between points and surfaces requires a single button click.
  • Nice lighting: Far from complex or complete, the lighting tool is somewhat primitive, but possibly for the best. Programs like Blender or Sketchup make me go insane when I just want to emphasise a certain area, while Meshlab has a light you can rotate to pick out certain elements. It also makes the model's edges come to life, increasing the contrast and bringing out things that are hard to see otherwise. When I made a photogrammetric model of a Zeus head statue, some of the other archaeologists with me suggested, looking at the Meshlab lighting effect, that it may well be a Neptune head instead.
  • Alignment tool: I think this may be due to my ignorance of Blender, but I still prefer the way Meshlab glues two models together, by selecting four identical points on each. It adapts the size and orientation automatically, with good results.
  • Great filters: I wrote an article about using the APSS filter to bring out inscriptions, and there are many more available, with references to documentation about them. Some are a bit useless for what I'm trying to achieve, but still not bad at all.


  • Free: Meshlab is entirely open source, which is great for making photogrammetric models. Compared to commercially available programs, it still does what you need it to do, without the price tag.

There are however some problems:

  • No undo button: Probably not the biggest issue, but by far the most annoying. The number of times I've had to restart a project from scratch due to a minor mistake makes this a big flaw. Unless you save every time you do anything, you'll find yourself in trouble.
  • Saving glitches: Quite a few times I've saved the final product, closed the program and opened it again only to find it had not saved the textures. This is a difficult thing to work around when dealing with photogrammetric models, where the texture is essential.
  • No image files: Meshlab seems to hate all image files, which makes some scenarios difficult. For example, it is not possible to open a model and an image in the same context. I found this frustrating when gluing together features from the same trench, where I wanted to use the plan of the trench as a guide to their positions.
  • No real rendering: Blender is great for creating good images of the objects you've made, while with Meshlab all you can do is use the Windows Snipping Tool. Presentation-wise it is deficient.
  • Inefficient tools: In some tools Meshlab excels; in others it is lacking. The measuring tool exists, but in order to set a reference distance you have to work through a series of calculations by hand. The rotate tool is odd, and the transpose tool only seems to apply to a single layer.

Overall, Meshlab can get the job done, and in some areas it works better than other programs such as Blender. However, the issues that derive from it being freeware make it hard to use. I would still recommend it, but when it can be avoided, it's probably for the best.


References:

Ducke, B., Score, D. and Reeves, J. (2011) Multiview 3D Reconstruction of the Archaeological Site at Weymouth from Image Series. Computers & Graphics, 35, pp. 375-382.

8 Reasons Why We Should Be Using Photogrammetry in Archaeology


If you are an archaeologist you should be using Photogrammetry because:

  1. It is easy to use: Unless you are dealing with something extremely large or extremely complex, Photogrammetry has an extremely high success rate. When it was still based on camera calibration, complex calculations and precise measuring were necessary, but with more modern programs often all that is needed is to take the photos and upload them. Decent models are easy to produce, and more complex ones are achievable with some experience. Overall, anyone could potentially use it in small-scale archaeology with no experience, and on a large scale with limited training.
  2. It is quick: With a good internet connection I can probably model a single feature in under 10 minutes. And by single feature I mean anything from a posthole to a stone spread. In situ finds could be recorded in no time, cutting back on the need to plan everything by hand. A complex stone wall could be preserved for the archaeological record simply by taking a few dozen photographs, and sections can be recorded with much more realism than any hand drawn plan can achieve. A rough sketch of the section would of course help the interpretation, but the measuring time would considerably go down, as it would be possible to measure on the model using Meshlab.
  3. It is practical: Laser scanning is the current fashion in archaeology, but the problem with laser scanning is that you need expensive equipment, you need to carry that equipment around, and you need to train specific people to use the machines and the software. Photogrammetry requires nothing more than a camera and a laptop, which are usually much more accessible on site. If a delicate object that may not survive excavation is found, it is much easier to take some photographs with the site camera, to edit later, than to bring in the equipment to laser scan it.
  4. It is accurate: As shown in one of my recent posts, the accuracy of 123D Catch is extremely good for this type of process. Although it cannot compete with that provided by laser scanning, an error margin of less than 1% means that any task required for interpretation can be carried out without having to worry about the results. The level of accuracy is ideal for presentation, for interpretation and for recording.
  5. It is photorealistic: No other technology gives you the photorealism that can be achieved by Photogrammetry. Due to the fact that the photos are at the base of the models, and that the finished product contains .mtl files that record the exact position of the photographs, the surfaces of the features can be recorded as they are in real life. The models seem realistic because they are not a simple collection of points, but a combination of points and images.
  6. It is entertaining: Archaeology is not simply about recording the past, it is also about getting the information out there, to the general public. It is important that anyone interested in an archaeological site has the opportunity to learn about the site itself. Academic texts are amazing when carrying out research, but for the average archaeological enthusiast, who lives in a now mostly digital world, texts can be confusing. Photogrammetry provides a visual component to the archaeological record, making it possible for people to see the archaeology from their own living room, as if they were actually present at the site.
  7. It is constantly improving: At the moment there are some problems and flaws with the programs that may cause concerns to more traditional archaeologists. These problems however are only temporary. With such a great interest in the digital world, teams of developers are constantly trying to update and improve all software, and if at present programs like 123D Catch are not perfect, they can only get better. Also, 10 years ago this level of accuracy in Photogrammetry was unheard of, yet today it has got to this point. In another 10 years how much will the programs change for the better?
  8. It is not as simple as it looks, in a good way: There are different levels to Photogrammetry. The basic level is the simple recording of features and artefacts, purely for recording and presentation. There is, however, a second level, which uses the models created to analyse the archaeology, as I showed in my previous post about finding inscriptions on coins. There is a third level, which alters the way the programs are used, by changing part of the process to get greater results. An example is my attempt at reconstructing the Sphinx using tourists' photographs, or the idea of using series of photographs in archaeological archives to reconstruct features long gone. Finally, the fourth level is the most interesting one: using many photogrammetric models to create a single model, i.e. recreating pots by digitally putting fragments together, or entire sites by gluing together individual features. So it is not only pretty models of features, it is much more.

Accuracy of 123D Catch

I always go on about how Photogrammetry should be used to record everything from small finds to entire sites, but just how accurate are these models? Are they good enough only for recording the objects as nice images or can they actually be used to gain more archaeological information? In essence, is it technology for technology’s sake or is there more to it?

In order to answer this I photographed three different objects and made models of them. I then measured the objects and, by setting a reference distance in Meshlab, measured the same distances on the models. Finally I compared the two sets of data to see what the results suggested.
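The reference-distance step is simple arithmetic: one known measurement gives a scale factor that converts the whole model into millimetres, after which every other distance can be read off directly. A minimal Python sketch with numpy (the helper names are my own):

```python
import numpy as np

def scale_to_reference(vertices, model_dist, real_dist_mm):
    """Bring an arbitrarily-scaled photogrammetric model into real units.

    vertices     : (N, 3) vertex coordinates in model units
    model_dist   : the reference distance measured on the model
    real_dist_mm : the same distance measured on the object, in mm

    Returns the vertices rescaled so distances read directly in mm.
    """
    scale = real_dist_mm / model_dist
    return np.asarray(vertices, dtype=float) * scale

def distance_mm(a, b):
    """Distance between two points of a rescaled model, in mm."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))
```

For example, if the reference edge measures 1.0 in model units and 80 mm on the real object, every coordinate is multiplied by 80 and all further measurements come out in millimetres.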


Here are the results:

| Iron Buckle (Object) | Iron Buckle (Model) | Pot Fragment (Object) | Pot Fragment (Model) | Arrow Head (Object) | Arrow Head (Model) |
|---|---|---|---|---|---|
| 80* | 80 | 74* | 74 | 76* | 76 |
| 49 | 49 | 61 | 61 | 81 | 80 |
| 49 | 50 | 86 | 86 | 54 | 54 |
| 25 | 26 | 81 | 82 | 63 | 63 |
| 34 | 34.5 | 72 | 73 | 64 | 64 |
| 33 | 33 | | | 78 | 79 |
| Error | 0.6% | Error | 0.6% | Error | 1% |

The asterisk indicates the measurement used as the reference distance; all numbers are in mm.

Overall the results suggest a maximum error of 1%, which considering the size of the objects is more than acceptable. With this data it seems that analysis of the surfaces can be done without false results appearing, and as such Photogrammetry does seem to have many more uses than simply producing pretty pictures.
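For anyone repeating the experiment, the comparison itself boils down to the absolute difference between model and object, as a percentage of the object measurement. A small Python helper (the naming is my own, and note that the figures quoted above may follow a slightly different convention, e.g. a rounded mean):

```python
def percent_errors(object_mm, model_mm):
    """Per-measurement percentage error of model values against the
    measured object, plus the mean of those errors.

    object_mm, model_mm : equal-length lists of distances in mm
    (the reference measurement agrees by construction, so its error is 0).
    """
    errors = [abs(m - o) / o * 100.0 for o, m in zip(object_mm, model_mm)]
    return errors, sum(errors) / len(errors)
```

Running it on the arrow head column, for instance, gives zero error on the reference measurement and small sub-2% errors on the rest.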

A more careful analysis of accuracy on a larger scale has yet to be done, although Chandler and Fryer (2011) may be of some help: http://homepages.lboro.ac.uk/~cvjhc/otherfiles/accuracy%20of%20123dcatch.htm

Still, the results seem to greatly favour this technology for the purpose of archaeological recording.

Potential Method to Emphasise Inscriptions with 123D Catch?


Although this requires much more work and more tests with different objects, it could be a somewhat interesting method to better distinguish inscriptions present on coins or small objects.

The picture above is of a Medieval lead stopper or weight found at the site of Caerau, Cardiff, which has been made into a 3D model using 123D Catch and then run through a filter in Meshlab (all freeware).

Meshlab itself is an interesting but somewhat limited program, but the Colorize Curvature (APSS) filter seems promising, as it changes the texture colour based on the inclination of the surface. The papers on the filter are extremely technical, which at the moment makes it difficult for me to understand the process in detail, but with an extremely high MLS - Spherical parameter (even 10000000000) and Approximate mean as the curvature type, the results above come out.
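My rough understanding of what the filter achieves can be illustrated with a toy version in Python: estimate how much each vertex sticks out from (or sinks below) its neighbours, and turn that into a shade. This is emphatically not the APSS algorithm, just the same idea in miniature, with my own function names:

```python
import numpy as np

def colorize_by_curvature(vertices, normals, neighbors):
    """Map a crude per-vertex curvature proxy to grayscale shades,
    mimicking the effect of a curvature-colouring filter (NOT the
    actual APSS fitting, just the same idea in miniature).

    vertices  : (N, 3) positions
    normals   : (N, 3) unit vertex normals
    neighbors : list of index lists, neighbors[i] = vertices adjacent to i
    """
    v = np.asarray(vertices, float)
    n = np.asarray(normals, float)
    curv = np.zeros(len(v))
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        # Signed offset of the vertex from its neighbours' centroid,
        # along the normal: positive = bump, negative = incision.
        centroid = v[nbrs].mean(axis=0)
        curv[i] = float(np.dot(v[i] - centroid, n[i]))
    # Normalise to [0, 1] so incisions come out dark and bumps light.
    span = np.abs(curv).max() or 1.0
    return 0.5 + 0.5 * curv / span
```

On a real mesh the neighbour lists come from the face connectivity; here the point is simply that incised strokes end up darker than the surrounding surface, which is what makes the letters readable.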

The stopper seems to bear three letters on the top (EXQ?) and maybe something at the bottom too.

At present the only other object I've tried is a piece of pot with decoration, but the natural curve of the object itself made it hard to achieve anything. However, the way the filter works seems to suggest that there is indeed a correlation between the inscription above and the filter's output, rather than a random effect that just happened to produce the image. Research continues…