PART 1 – Visualisation


“Visualisation” is a term that appears with increasing frequency in recent archaeological publications. It refers to the reconstruction of archaeological evidence through computer software, although it originates in the practice of recording sites with 2D drawings, which has been around for a few centuries. Although the meaning of the word and what it entails fluctuates somewhat, I’ve come to identify three types of technology that fall within this category:

  • Photogrammetry
  • Laser Scanning
  • 3D Reconstruction

Photogrammetry is also referred to as Structure from Motion, and differs from the other methods in that the 3D models are based on photographs (Pedersini et al. 2000). As with laser scanning, the result is a high-density point cloud with photorealistic textures.

Laser scanning is a powerful tool widely used in large-scale recording. It uses laser measurements to calculate the position of points on a site, and like Photogrammetry it produces a textured mesh, although laser scanning models are generally much denser and therefore more accurate.

3D reconstruction is the method we will be primarily dealing with. It is less accurate than Photogrammetry and laser scanning, and the results are less realistic. It does however possess some distinct advantages. Reconstructed models are easily manipulated, and often represent elements of a site that have been lost (Fig.1). They can also be combined with gaming software to create interactive environments (I could cite many authors here, but just as a taster I would recommend reading Champion et al. 2012).


Fig.1 – 3D Reconstruction of the site of Ggantija, Gozo.

The three methods have very different aims, and as such it is important to know what you want to achieve before applying them:

  • For small and medium scale recording Photogrammetry is excellent (Scopigno 2012). It is very cheap and fast, and possesses the accuracy and visual quality necessary for recording and presentation. It is ideal for cataloguing finds or small-scale excavations, although it can be used for larger features if necessary (see the current Must Farm excavation models by Donald Horne for more details: https://sketchfab.com/mustfarm). The fact that it produces fewer points than laser scanning makes the models easy to manage, and it requires little training.
  • For detailed models and large sites Laser Scanning is the tool of choice. More expensive and computationally challenging than Photogrammetry, laser scanning creates precise models that are perfect for recording, presenting and some interpreting. A great example is the work of John Meneele (https://www.facebook.com/1manscan/), who has been analysing stone decay by comparing models taken in different years. I personally have little experience with laser scanning, but the results I have seen show a lot of promise.
  • 3D Reconstruction is mainly for presentation and interpretation. Although some arguments have been put forward for using 3D reconstructions for metadata recording, this is not where the technology shines (Reilly 1990; Barreau et al. 2013). 3D reconstructions can show a site “as it was” rather than “as it is”, leading to a more vivid understanding of archaeological contexts (Miller and Richards 1995; Lewin and Gross 1996). For the general public it is perfect, and it can be made highly interactive in order to further increase user comprehension of the archaeology. As for interpretation, the use of scripting allows a number of tools to be created in order to answer archaeological questions. One of the projects I have been working on was analysing solar alignment at a Maltese site, and through a custom-written script I concluded the site is illuminated on the winter solstice (Fig.2; the kind of calculation involved is sketched below). While Photogrammetry and Laser Scanning shine with precision and photorealism, 3D Reconstruction truly dominates in presentation and interpretation.
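
To give a flavour of what such scripting involves, here is a minimal sketch of the kind of test a solar-alignment script can run. This is not the actual project code: the site parameters are hypothetical, and it uses a simplified sunrise-azimuth formula (flat horizon, no atmospheric refraction).

```python
import math

def sunrise_azimuth(latitude_deg, declination_deg):
    # At sunrise (altitude 0) the altitude-azimuth relation reduces to
    # cos(A) = sin(declination) / cos(latitude), with A measured in
    # degrees east of true north.
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

# Hypothetical values: a Maltese site at ~36 N whose corridor faces
# azimuth 120 and admits direct light within 5 degrees of its axis.
WINTER_SOLSTICE_DECLINATION = -23.44
azimuth = sunrise_azimuth(36.0, WINTER_SOLSTICE_DECLINATION)
corridor_axis, half_aperture = 120.0, 5.0

print(f"Sunrise azimuth on the winter solstice: {azimuth:.1f} deg")
print("Corridor illuminated:", abs(azimuth - corridor_axis) <= half_aperture)
```

A real script works on the reconstructed geometry itself, casting light into the model, but the astronomy underneath is as simple as this.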

 


Fig.2 – Overview of the script interface.

It is important to mention, however, that these methodologies are not mutually exclusive. Little work has been done on combining different techniques, but the results show much promise. A previous article on this blog discussed virtual museums, which combine a 3D reconstructed environment with Photogrammetric models placed within it.

In conclusion, there is a lot of technology out there, and although research is slowly unveiling the advantages of each method, there is still much to be discovered. With 3D Reconstruction we are barely scratching the surface, and only in the last ten years have we had custom-written scripts for archaeology. It may take a while, but once we uncover what is possible, archaeology will truly reap the rewards.

 

In the next post I will be looking at previous work in 3D reconstruction, with a few examples of significant projects that have helped shape the methodology.

 

REFERENCES:

Barreau, J., Gaugne, R., Bernard, Y., Le Cloiree, G. and Gouranton, V. (2013). The West Digital Conservatory of Archaeological Heritage Project. Digital Heritage International Congress, Vol. 1, pp. 547-554.

Champion, E., Bishop, I. and Dave, B. (2012). The Palenque project: evaluating interaction in an online virtual archaeology site. Virtual Reality 16, pp. 121-139.

Lewin, J. S. and Gross, M. D. (1996). Resolving Archaeological Site Data With 3D Computer Modeling: The Case of Ceren. ACADIA, pp. 255-266.

Miller, P. and Richards, J. (1995). The Good, the Bad, and the Downright Misleading: Archaeological Adoption of Computer Visualisation. In: Huggett, J. and Ryan, N. (eds.) Computer Applications and Quantitative Methods in Archaeology. Oxford: Tempus Reparatum, pp. 19-22.

Pedersini, F., Sarti, A. and Tubaro, S. (2000). Automatic monitoring and 3D reconstruction applied to cultural heritage. Journal of Cultural Heritage 1, pp. 301-313.

Reilly, P. (1990). Towards a virtual archaeology. In: Lockyear, K. and Rahtz, S. (eds.) Computer Applications in Archaeology, pp. 133-139.

Scopigno, R. (2012). Sampled 3D models for Cultural Heritage: which uses beyond visualisation? Virtual Archaeology Review, Vol. 3, No. 5, pp. 109-115.

Photogrammetric Recording of Subvertical Pits


Up to now on this blog I have been trying to outline the uses of Photogrammetry in the two main areas of archaeology: recording and interpretation. Some of the things I have discussed were specific to preserving as much data as possible about an archaeological feature or object by creating a virtual copy of it. Other posts were concerned with what can be done once that model has been made, to further our understanding.

This post is mostly about recording a specific type of feature, but it opens up some possibilities to help interpret them as well.

On some occasions during archaeological excavations we stumble across pits that present particular difficulty when planning. The issue is that the sides of the pit are not gradual or even vertical: they actually cut back beneath the edge, giving the pit a bulging shape. During an excavation at Ham Hill, Somerset, one of the pits had this particular shape because of its use. It was probably used to store grain, and the presence of a smaller hole at the top meant that preservation would have been better.


The plans drawn of the pit were excellent, but even so it is difficult to convey the true shape of the feature using only 2D resources. I therefore created a model of it, taking photographs as I normally would for a regular feature, with the addition of a few more from within the feature itself, by lowering the camera into it. The results are as follows:

 

Not only can you view the feature from the top, but it is even possible to see it from the sides and rotate it that way, making it much clearer what the feature looked like, even now that it is gone.


In addition to that, the bulge is much clearer, and it is easier to draw conclusions about its use. As a permanent record it is excellent: not only do we not lose any information, but we can also gain more than what we could see when limited to the simple top view.

It also opens up new possibilities. As of yet I have not experimented much with Maya 3D, but I have had a brief overview of how the program works and what it is capable of. One of my colleagues once showed me how to reconstruct a pot from its profile, and then proceeded to calculate the capacity of the finished pot. Theoretically speaking it should be possible to import the finished model of the pit into Maya and use the same algorithm to calculate how much grain the pit could have held at a time, which could help us understand the population density of the area, as well as answer a lot of other interesting questions. A sketch of the underlying volume calculation follows below.
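
I do not know which algorithm Maya uses internally, but the standard way to measure the capacity of a closed triangulated mesh is to sum the signed volumes of the tetrahedra formed by each triangle and the origin (an application of the divergence theorem). Here is a minimal sketch of that calculation, assuming the pit model has been exported as a watertight mesh, capped at the level to which it would have been filled:

```python
import numpy as np

def mesh_volume(vertices, faces):
    # Sum the signed volumes of the tetrahedra formed by each triangle
    # and the origin; the signs cancel out to the enclosed volume.
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    signed = np.einsum("ij,ij->i", a, np.cross(b, c)) / 6.0
    return abs(signed.sum())

# Sanity check: a unit right tetrahedron has volume 1/6.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(mesh_volume(verts, tris))  # 0.1666...
```

In practice one would load the exported model of the pit and convert the result from survey units into litres of grain.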

And the technology doesn’t stop here. This may be a very specific example, but the same ideas can be applied to many different kinds of features. Those with unusual bases can be easily recorded by making a model of them, and stone structures can be copied perfectly in digital form rather than only drawn by hand. There are still a lot of applications to discover.


Bigger And Better: Photogrammetry Of Buildings


Photogrammetry is definitely the “new” thing in archaeology, slowly clawing its way into how we treat and record archaeological sites. As far as its potential goes, though, there is still a lot of research to be done, both to assess the uses this technology has and to understand the limits we will have to deal with on a day-to-day basis.

One of the aspects that has always interested me is the question of scale. We’ve seen before that Photogrammetry deals well with anything ranging from an arrowhead to a small trench (4×6 m is the maximum I have managed so far, but there is much larger work out there). Smaller objects are hard to do cheaply, as the right lenses become essential, but larger subjects should be entirely possible with the right considerations in mind.

123D Catch’s website shows quite a few examples of Photogrammetric reconstruction of buildings, so I tried reconstructing my own, taking as my subject the front façade of Saint Paul’s Cathedral in London. Given that this was mostly for experimental purposes, I just attempted a few things and went through the results looking for patterns and suggestions.

The results can be viewed here: 

Saint Paul Cathedral (click to view in 3D)

 


As we can see, the results are of course not marvellous, but I am less interested in them than in the actual process. I took 31 photographs of the building, standing in a single spot, taking as many pictures as necessary to include all parts of the façade, and then moving to a slightly different angle. I tried to get as much of the building into a single shot as possible, but the size of it and the short distance at which I was shooting meant that I had to take more than one shot in some cases.


The lighting was of course not something I could control, but the fact that it was late afternoon meant that it was bright enough for the building to be clearly visible, yet not so bright as to wash out the textures and cause problems with contrast. I then used 123D Catch to process the photographs, and to my surprise all of them were included in the final mesh.

The thing that surprises me the most is that, given the photographs I had, the results are actually as good as my most hopeful prediction. There is a clear issue with the tops of surfaces, i.e. the tops of ledges, which come out as slopes. This is entirely expected: Photogrammetry usually works with images taken from different heights, while in this case the height couldn’t change. It could, however, be solved by taking images from the surrounding buildings.

The other major problem is the columns, which are in no way column-shaped, and which seem to mess up the structure behind them as well. Given the complexity of the structure this is also expected. 123D Catch treats the mesh as a single continuous surface, and tries to simplify whatever it finds as much as possible. The columns are not flat, so the solution 123D Catch has come up with, given the limited data, is to bring the murky background forward and treat it as if it were the space between the columns. Next time the solution will be to concentrate on these trouble areas and take more numerous and carefully calculated shots to aid 123D Catch. It does require more work and some careful assessment of the situation, but it is entirely possible.

Apart from these problems, the results are very promising. More work needs to be carried out, but it shows that it is possible to reconstruct structures of a certain size, once again pushing the limits of Photogrammetry.


Recreating Tower Of London Graffiti Using Photogrammetry

Last weekend I visited the Tower of London, which gave me a great opportunity to try out some of the Photogrammetry ideas I have had in the past few weeks.


Apart from testing 123D Catch on large monuments and entire palace façades, I decided (thanks to the suggestion of Helene Murphy) to try modelling some of the graffiti made by the prisoners held there. The main aim was to see if I could create models from photographs taken in dimly lit rooms, but also to see how the software would react to glass, as the inscriptions are covered by panes to protect them. I also wanted to retest the Radiance Scaling tool in Meshlab, to increase the crispness of the model and see if it improved its accuracy.

I concentrated on 3 different pieces of graffiti, which can be viewed here (embedded thanks to the suggestion of alban):

 

Tower Of London – Graffiti 1 (click to view in 3D)


 

Tower Of London – Graffiti 2 (click to view in 3D)


 

Tower Of London – Graffiti 3 (click to view in 3D)


The models turned out better than expected. The glass is entirely invisible: it required some planning when taking the photos, but gave no problems in the modelling. This is particularly good, as it means it should be possible to apply the same approach to artefacts displayed behind glass in museums. The lighting conditions didn’t cause the issues that might have been expected either, showing once more the versatility of Photogrammetry.

Running the Radiance Scaling shader in Meshlab also returned really interesting results. In all cases the models become much more aesthetically pleasing, while at the same time the curves and dips are emphasised, increasing the crispness of the model. Although this seems to me to be a forced emphasis, which might reduce accuracy, the results so far suggest it may in some ways increase it instead. This, however, needs to be explored further.


Emphasising Inscriptions Using 123D Catch

One of the most interesting projects I have been working on over the past few months has been trying to emphasise inscriptions on artefacts using Photogrammetry. The theory is that if the model is accurate enough, it should be possible for a program to detect the bumps and hollows of a surface and exaggerate them to make them easier to identify.


My first attempt was with a lead stopper (which I have posted about before), which appeared to have some form of writing inscribed on its surface. Having made a model using 123D Catch, I ran it through Meshlab and tested many different filters to see if any of them gave me different results. One in particular seemed to do exactly what I wanted, changing the colour of the texture based on the shape of the surface: the Colourise curvature (APSS) filter, with a very high MLS filter scale (I went for 50000) and Approximate mean as the curvature type. The results showed some form of inscription, clearer than in the photographs, but still quite murky.
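
The same principle can also be sketched outside Meshlab. The snippet below is not the APSS filter, just a simplified, hypothetical version of the idea: compute a slightly smoothed copy of the mesh, then push each vertex away from its smoothed position, so that bumps and incised lines are exaggerated (a kind of geometric unsharp masking).

```python
import numpy as np

def exaggerate_relief(vertices, faces, strength=2.0):
    # Push each vertex away from the average of its neighbours,
    # amplifying local detail such as incised letters.
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)

    # Every directed edge of every triangle.
    edges = np.vstack([f[:, [0, 1]], f[:, [1, 2]], f[:, [2, 0]]])

    # Average neighbour position per vertex = a smoothed mesh.
    nbr_sum = np.zeros_like(v)
    nbr_cnt = np.zeros(len(v))
    np.add.at(nbr_sum, edges[:, 0], v[edges[:, 1]])
    np.add.at(nbr_cnt, edges[:, 0], 1)
    np.add.at(nbr_sum, edges[:, 1], v[edges[:, 0]])
    np.add.at(nbr_cnt, edges[:, 1], 1)
    smoothed = nbr_sum / nbr_cnt[:, None]

    # vertex - smoothed is the local detail; amplify and add it back.
    return v + strength * (v - smoothed)
```

The strength value here is arbitrary and would need tuning per model, and of course the method can only amplify detail that actually exists in the points; it cannot invent it.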


In addition to these scant results, other tests with different artefacts pointed towards a lucky coincidence rather than an actual method.

One of the main issues I was having was that Meshlab kept crashing when using certain filters or shaders, which meant I could test only some of the possible options. So when I made a model of a series of modern graffiti at Cardiff Castle, the results were also disappointing.

The other day, though, I happened to update my copy of Meshlab, and also came across this interesting article, which gave me the exact solution I was looking for: http://www.dayofarchaeology.com/enhancing-worn-inscriptions-and-the-day-of-archaeology-2012/

One of the shaders I had tried to use, but which crashed every time, was the Radiance Scaling tool, which does exactly what I was aiming for. I ran the graffiti model through it and the results are amazing. The original model stripped of its textures is a very unclear, blurry surface, but after Radiance Scaling the graffiti come to life. The exact mechanics behind this tool, and hence its accuracy, are something I am still working on, but at least visually the results are there.


If this proves to be an effective method, it would suggest that Photogrammetry is not simply technology for technology’s sake, but it can actually be used to interpret archaeology in ways not available before.

Edit: I’d also like to draw attention to the fact that this model was made entirely using my iPhone.

VisualSFM: Pros and Cons


I’ve been working with Photogrammetry for some years now, and although I use a great variety of programs to edit the 3D models, from Meshlab to Blender, when it comes to the actual creation of the models I have only ever used 123D Catch. This is partly because I now feel very comfortable using the program, having learnt what it requires and how to achieve the perfect model, but also because of the great quality and simplicity that 123D Catch offers.

Only recently have I ventured out of my comfort zone to explore other Photogrammetry programs, to see if any can compare or even replace 123D Catch.

I had a spell with Agisoft Photoscan, with interesting but limited results, and now I have turned my attention to another piece of freeware called VisualSFM. At present I have only tried out a few things, so my opinions will certainly change in the future, but here are the pros and cons I have found so far (compared to 123D Catch):

PROS

  • It’s multiplatform: only a few weeks ago a user commented on one of my posts asking for alternatives to 123D Catch, as he couldn’t use it on his Mac. VisualSFM is not restricted to Windows and an iPad app, but supports all major systems, making it a great tool for all those who don’t have a PC but still want to try out Photogrammetry.
  • It allows control of the process: one of the good things about 123D Catch is that it is easy to use, but this comes at the expense of more expert users. Uploading images and getting results with a single click is great, but it is difficult to understand what is actually happening in between. VisualSFM instead guides you through all the steps, so if anything goes wrong you can pinpoint the problem, and you can understand which photos work better for research purposes.


  • It works offline: many times I have found myself slowed down by a weak internet connection. With VisualSFM no connection is needed, which also means I can create models on site without Wi-Fi. This makes the whole process more efficient, and means I spend less of my free time working on models.
  • The camera placement is more accurate: this one is still in testing, but up to now I have had no problems with cameras being placed in locations they are not meant to be in. With 123D Catch a single photo often stitches to the wrong place and causes the entire model to malfunction. With VisualSFM this doesn’t appear to be the case.

CONS

  • Fewer points: I compared a few models made from the same photographs by the two programs. The results suggest that while VisualSFM’s points may be placed more accurately, there are far fewer of them, making the overall model less accurate. In the pictures, the top one is VisualSFM, the bottom 123D Catch.

[Image: point cloud from VisualSFM]

[Image: point cloud from 123D Catch]

  • Still haven’t finished a model: once the point cloud is created, VisualSFM has finished its job, and it becomes Meshlab’s responsibility to actually recreate the surfaces and reattach the textures. Up to now I have not managed to do this. I’ve talked about Meshlab before, but in short it crashes and malfunctions all the time. It took me days to recreate the surface the first time, as the program refused to do it, and attaching the texture is still something I can’t seem to manage (an alternative route for this step is sketched after this list).
  • Needs user control: with 123D Catch you can leave the program running and return to a finished product. With VisualSFM you have to constantly interact with the program, meaning you can’t multitask.
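
For anyone stuck at the same meshing stage, here is a minimal sketch of an alternative route to Meshlab, using the open-source Open3D library for Python. I have not verified this on VisualSFM output myself, so treat the filename and parameters as assumptions; note that Poisson reconstruction needs oriented normals, which are estimated first.

```python
import open3d as o3d

# Hypothetical filename: the dense point cloud exported from VisualSFM.
pcd = o3d.io.read_point_cloud("visualsfm_dense.ply")

# Poisson surface reconstruction requires consistently oriented normals.
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(30)

# depth controls the octree resolution: higher = finer surface, slower.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("reconstructed_surface.ply", mesh)
```

Reattaching the textures still requires a separate projection step, which this sketch does not cover.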

Overall, it’s got potential. It’s not going to replace 123D Catch any time soon, but I am still going to try different things out to see how it reacts and find any advantages. For a full description of how to create models using VisualSFM visit here: http://www.academia.edu/3649828/Generating_a_Photogrammetric_model_using_VisualSFM_and_post-processing_with_Meshlab


Photogrammetric Model Made With iPhone 4s


I’ve experimented before with using my iPhone to create Photogrammetric models (not through the app, just taking the photos and running them through the Windows version of 123D Catch), with interesting but not perfect results. The other day, however, I found myself with a nice, complete, in situ sheep skeleton and no camera, so I took the opportunity to test the technology once again.

I took 49 photos in very good uniform shade, going around the skeleton at first and then concentrating on tricky parts, like the head and the ribs. I then ran them through 123D Catch and found that almost all of them had been stitched. I think the lighting really did the trick, as it created a really nice contrast between the bones and the ground. The photos were taken just as the sun had set, so it was still very light, but with no glare.


The skeleton itself looks extremely good compared to some of my earlier tests. It can be viewed in rotatable 3D here: https://sketchfab.com/show/b0ef1638d4714fcdab59c040cdb46923

I particularly like the relatively sharp edges, which I really couldn’t achieve with the other models, and looking at the point cloud I found it to be quite accurate regardless of the textures. In addition, it coped excellently with the rib that pokes out of the ground and with the pelvis, both of which I was absolutely sure it would have problems with. Overall I’d say the model is nearly as good as some of the models I have made with a standard camera, and I think the potential is definitely there.

The only issue I have with using the iPhone camera is that it is still an unreliable method. I tried replicating the results today, as the skeleton had been cleaned further, but the new model is blurrier, again probably due to slightly less ideal lighting conditions. I would therefore still use my camera as much as possible, and save the iPhone for those situations in which I find myself unprepared.


A Theoretical Approach to Photorealistic 3D Video: the Future of Film and Gaming?

Generally I am not a big fan of theoretical issues, especially when it comes to something as practical and visual as 3D modelling. This, however, is something I cannot really experiment with practically (or within an acceptable time frame), so for now it has to remain in the realm of theory. It is also not strictly archaeological, although I’m sure some applications will come from it.

What I mean by Photorealistic 3D Video is taking the still photogrammetric models you’ve seen before and applying movement to them. This can be done by using either stop motion or a combination of cameras to acquire the original footage, then modelling individual frames and putting them in a sequence.


The acquisition of footage depends on what you are trying to achieve. If the object is itself still and is to be moved frame by frame (like traditional stop motion animation), then a single camera is required. Instead of taking a single photo of the scene, as in standard animation, a series of images would be taken to build a Photogrammetric model, as explained previously on this blog. If instead the object is already in motion, for example an individual acting out a scene, the trick is to use a large number of video cameras surrounding the object, using the same positions described before (a good example of this can be seen here: http://www.webpronews.com/get-a-3d-print-of-yourself-in-texas-2013-08). With all the cameras, each frame is then isolated, as if photographs of the scene had been taken at the same time from all angles. With either method, the result should be a large number of frames (24 per second or more depending on format), each of which is made up of 20 to 30 images.

The second step is the creation of the models. This can be done using 123D Catch or any other Photogrammetric software. Each series of images constituting a frame is thus transformed into a single rotatable 3D model.

Then all the frame models are run through other software. At present I am leaning towards gaming software, but video editing or animation software may be more suitable, so possible options are Unity 3D, Maya or After Effects. Some alignment will have to take place, but superimposing the models on top of each other and making a single one visible at a time should create an animation effect, again like stop motion (a minimal sketch of this idea is given below). This is the part I am most unsure about, as it is quite a demanding task, which may be far too complex for computers at this time. Still, in the future it should be more than possible.
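
The playback itself, at least, is the easy part. Here is a minimal sketch using the Open3D viewer in Python, assuming 24 hypothetical, already-aligned per-frame meshes named frame_000.obj to frame_023.obj:

```python
import time
import open3d as o3d

# Hypothetical, pre-aligned photogrammetric meshes, one per frame.
frames = [o3d.io.read_triangle_mesh(f"frame_{i:03d}.obj") for i in range(24)]

vis = o3d.visualization.Visualizer()
vis.create_window()
current = frames[0]
vis.add_geometry(current)

# Show one model at a time, like stop motion: 24 frames = one second.
for mesh in frames[1:]:
    vis.remove_geometry(current, reset_bounding_box=False)
    vis.add_geometry(mesh, reset_bounding_box=False)
    current = mesh
    vis.poll_events()
    vis.update_renderer()
    time.sleep(1 / 24)

vis.destroy_window()
```

The genuinely demanding part is not showing the frames but producing and aligning hundreds of photogrammetric models in the first place.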


At this stage the result is a series of still models that run in sequence, giving the illusion of movement. This can then be combined with technology that is now appearing on the market. In particular it could be used with the Oculus Rift, which is soon going to revolutionise how 3D gaming works. By tracking the position of the user, it would be possible not only to see the Photorealistic Video, but also to move around it as if it were real. By combining more than one model an entire scene could be created, meaning 3D films in which the user is actually present within the film could become a possibility, as well as ultra-realistic videogames.


Textures in Photogrammetry: How They Can Deceive Us

One of the advantages I find in Photogrammetry is that, unlike other methods such as laser scanning and regular 3D recording, the results are photorealistic. The models seem natural and precise thanks to the textures associated with the points, and aesthetically they are much more pleasing. In addition, it is excellent for archaeological recording, as all colours and shades are preserved as they originally were.

However, the more I experimented with this great technology, the more I realised that, as useful as this tool is, the textures sometimes give us a wrong idea of what is going on. Generally this causes no problems at all, but in some situations I have found myself trying to get things to work and failing, due to a false perception I had of the model.

A good example of this are these graffiti from Cardiff Castle.


The model itself is not great. It was made with my iPhone camera, and I didn’t even use the maximum quality setting in 123D Catch, but it does help me prove the point.

I spent quite a lot of time trying to test a theory of mine, which I’ve written about before. Theoretically, using a filter or renderer on a photogrammetric model, it should be possible to emphasise inscriptions in order to make them clearer. I was originally trying the APSS filter, but I recently read articles suggesting the use of Radiance Scaling in Meshlab (I’m still experimenting with this, but results seem positive so far). I therefore decided to test a number of filters on the graffiti model, as the ridges of the letters appeared to have been handled very well by the program. Even when rotating the model, the textures gave me the illusion that quite a few points had been found around each letter, points that I could manipulate to my advantage.

Having played around for some time with no real results, I decided to try removing the textures to manipulate the shadows, but having done that I noticed that the beautiful ridges that had appeared previously disappeared along with the textures, like so:


Looking at it more closely, I noticed that the points that had been found were distributed basically evenly, rather than clustering around the letters as I first thought. The reason the filters were not working was that there was nothing for them to work on.
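
This is easy to check numerically rather than by eye. Here is a minimal sketch, assuming the model’s vertices have been exported to a plain XYZ text file (the filename is hypothetical): if the nearest-neighbour spacing varies little across the cloud, the points are spread uniformly and hold no extra detail around the letters.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical export: one "x y z" triplet per line, textures stripped.
points = np.loadtxt("graffiti_vertices.xyz")

tree = cKDTree(points)
# k=2 because each point's nearest neighbour is itself.
distances, _ = tree.query(points, k=2)
spacing = distances[:, 1]

# A low coefficient of variation means near-uniform spacing, i.e. no
# concentration of points around the inscriptions.
print(f"mean spacing: {spacing.mean():.5f}")
print(f"coefficient of variation: {spacing.std() / spacing.mean():.2f}")
```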


So even though the textures were providing much-needed aesthetic relief, and helping with the recording, for more specific operations they only got in the way, creating illusions.

This, however, does not in any way limit the technology. The case shown was an extreme one, in which the circumstances caused a lack of points. Most models have a much larger number of points and far greater accuracy, which makes them suitable for most situations. But when pushing the limits of the technology it is important to remember that in the end the model is just a collection of points, and that this is where the potential lies.

First Photogrammetry Article Published



I’m very glad to present you with my first (but not last) published article on the topic of Photogrammetry in archaeology! The December edition of The Post Hole, which has recently been released, features a paper on “The use of Photogrammetric models for the recording of archaeological features”, which I wrote during the summer, and which I’m sure you will find of some interest.

It deals specifically with archaeological features on site, and it looks at accuracy, methodology and uses, especially when it comes to recording. Its aim is to show that, far from being technology for technology’s sake, Photogrammetry can contribute greatly to our understanding of an archaeological site, as well as reinforce and improve traditional methods of recording such as section drawings and plans.

The article is based on a few sites I have worked on, and which have featured on this website before, such as Ham Hill and Caerau.

This is however just scratching the surface of a technology that is now appearing more and more frequently in publications, and that will eventually become a fundamental part of archaeological recording.