Part 3 – Sources and Paradata

Before going into the bulk of how to model an archaeological site and why to do so, I would like to spend a moment discussing the research that should be at the basis of the model itself. The fact that 3D Reconstruction is in its infancy brings both advantages and disadvantages to the table. On the one hand, it is exciting to think there is so much we do not know, as it means endless applications are waiting to be discovered. On the other hand, there is a distinct lack of consistent methodology between projects, and while some publications are clearly founded on extensive research (Dawson et al. 2011 amongst many others), others seem to be more loosely interpreted.


Fig.1 – The first steps in modelling, based on a plan of the site to scale.

This is one of the reasons behind ‘paradata’, a term that has recently been applied to the field. To understand paradata we first need to discuss metadata. Especially in computer science, many authors have lamented the inability to replicate experiments involving code (Hafer and Kirkpatrick 2009; Boon et al. 2009; Ducke 2012; Hayashi 2012). In the words of Marwick:

“This ability to reproduce the results of other researchers is a core tenet of scientific method, and when reproductions are successful, our field advances (Marwick 2016, p.1).”


“A study is reproducible if there is a specific set of computational functions/analyses (usually specified in terms of code) that exactly reproduce all of the numbers and data visualizations in a published paper from raw data (Marwick 2016, p.4).”

Essentially the argument is that publishing results is not enough: we should also include additional information, such as the settings used in a program or the raw code. This collection of information is referred to as ‘metadata’. Some authors have taken it a step further, arguing that we should also include descriptions of the process, a discussion of the choices made and of the probabilities involved (Denard 2009; Beacham 2011; D’Andrea and Fernie 2013). This ‘paradata’ is best described in the London Charter, which is the first attempt at creating a methodology for 3D Reconstruction:

“Documentation of the evaluative, analytical, deductive, interpretative and creative decisions made in the course of computer-based visualisation should be disseminated in such a way that the relationship between research sources, implicit knowledge, explicit reasoning, and visualisation-based outcomes can be understood (Denard 2009, pp.8-9).”

Given that a major critique of 3D Reconstruction concerns accuracy (Miller and Richards 1995; Richards 1998; Devlin et al. 2003; Johnson 2009), paradata is our way of defending ourselves. While it is impossible to create the perfect model, demonstrating the process behind the reconstruction allows users to understand the interpretation given and draw their own conclusions.


Fig.2 – A highly speculative Roman Villa. Without knowledge of the process it is impossible to know how accurate each element is.

One of the elements mentioned previously is sources. While the process itself has to be methodical in order to gain accurate results, the sources provide the wireframe upon which the interpretation can take place. It is therefore essential that the sources are well researched and well documented. For this purpose I like the classification proposed by Dell’Unto et al. (2013), which distinguishes different categories based on accuracy:

  • Reconstruction by Objectivity: sources based on in situ elements, like plans, 3D scans, archives.
  • Reconstruction by Testimony: illustrations, literary sources, notes.
  • Reconstruction by Deduction: elements that can be deduced from in situ remains, but that are not actually there.
  • Reconstruction by Comparison: based on other sites. This is actually quite an important one, as many features recur between remains of the same region.
  • Reconstruction by Analogy of Styles: especially for decoration, looking at other stylistic elements that have been preserved can help make the whole model look more realistic.
  • Reconstruction by Hypothesis: an essential part of reconstruction, but the most inaccurate.

Of course, the further we go down this ladder, the less accurate the sources are. Yet it is by combining all the different sources that we get the finished model. Paradata can help determine which sources were used for each part of the model, and therefore provide information about the model as a whole.
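As a thought experiment, the tagging itself is easy to mechanise. The sketch below (plain Python; the element names and the exact category labels are my own invention, not part of any published schema) records a source category against each model element and summarises the result, which is essentially what a paradata table does:

```python
# Hypothetical paradata record: each element of a model is tagged with
# the category of evidence it rests on, after Dell'Unto et al.
SOURCE_CATEGORIES = [
    "objectivity", "testimony", "deduction",
    "comparison", "analogy of styles", "hypothesis",
]

def paradata_report(elements):
    """Count how many model elements rest on each category of source,
    giving a rough picture of how speculative the model is overall."""
    report = {category: 0 for category in SOURCE_CATEGORIES}
    for _name, category in elements:
        report[category] += 1
    return report

# An invented villa model: in situ foundations, a deduced doorway,
# a roof pitch borrowed from comparable sites, a hypothetical upper storey.
villa = [
    ("foundations", "objectivity"),
    ("wall course", "objectivity"),
    ("doorway", "deduction"),
    ("roof pitch", "comparison"),
    ("upper storey", "hypothesis"),
]
report = paradata_report(villa)
```

A summary like this makes it immediately visible how much of a model is evidence and how much is hypothesis, which is exactly the accountability paradata aims for.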


Fig.3 – Partly completed model of a Greek house, based on a plan, excavation reports and site comparison.

In conclusion, there are many sources that can be used when constructing a model, and although some are more precise than others, all of them contribute to the final result. If they are applied methodically and the process is recorded, we can provide an accurate and reliable reconstruction.

Over the next posts I will start looking at SketchUp for modelling, although the ideas will carry over to other software such as 3ds Max.


Beacham, R. C. (2011). Concerning the Paradox of Paradata. Or, “I don’t want realism; I want magic!”. Virtual Archaeology Review Vol.2 No.4 pp.49-52.

Boon, P., Van Der Maaten, L., Paijmans, H., Postma, E. and Lange, G. (2009). Digital Support for Archaeology. Interdisciplinary Science Reviews 34:2-3 pp.189-205.

D’Andrea, A. and Fernie, K. (2013). CARARE 2.0: a metadata schema for 3D Cultural Objects. Digital Heritage International Congress Vol.2 pp.137-143.

Dawson, P., Levy, R. and Lyons, N. (2011). “Breaking the fourth wall”: 3D virtual worlds as tools for knowledge repatriation in archaeology. Journal of Social Archaeology 11(3) pp.387-402.

Dell’Unto, N., Leander, A. M., Ferdani, D., Dellepiane, M., Callieri, M., Lindgren, S. (2013). Digital reconstruction and visualisation in archaeology: case-study drawn from the work of the Swedish Pompeii Project. Digital Heritage International Congress pp.621-628.

Denard, H. (2009). The London Charter: for the computer-based visualisation of cultural heritage.

Devlin, K., Chalmers, A. and Brown, D. (2003). Predictive lighting and perception in archaeological representation. UNESCO World Heritage in the Digital Age.

Ducke, B. (2012). Natives of a connected world: free and open source software in archaeology. World Archaeology 44:4 pp.571-579.

Hafer, L. and Kirkpatrick, A. E. (2009). Assessing Open Source Software as a Scholarly Contribution. Communications of the ACM Vol.52 No.12 pp.126-129.

Hayashi, T. (2012). Source Code Publishing on World Wide Web. International Conference on Advanced Information Networking and Applications Workshops pp.35-40.

Johnson, D. S. (2009). Testing Geometric Authenticity: Standards, Methods, and Criteria for Evaluating the Accuracy and Completeness of Archaeometric Computer Reconstructions. Visual Resources 25:4 pp.333-344.

Marwick, B. (2016). Computational Reproducibility in Archaeological Research: Basic Principles and a Case Study of Their Implementation. Journal of Archaeological Method and Theory pp.1-27.

Miller, P. and Richards, J. (1995). The Good, the Bad, and the Downright Misleading: Archaeological Adoption of Computer Visualisation. In: Huggett, J. and Ryan, N. Computer Applications and Quantitative Methods in Archaeology. Oxford: Tempus Reparatum pp.19-22.

Richards, J. D. (1998). Recent Trends in Computer Applications in Archaeology. Journal of Archaeological Research Vol.6 No.4 pp.331-382.

Part 2 – 3D Reconstruction Literature

In this section I would like to go through some projects I have been reading about that I think are very useful for understanding 3D Reconstruction in Archaeology. Before delving into the practicalities of the technology, it is important to assess where the field currently stands.


An image from Champion et al. (2012) showing the reconstructed city of Palenque.

If you look through the literature, 3D Reconstruction is often scarcely documented and results are limited. The three major points of critique I have encountered concern accuracy, the lack of a human element, and use. Here is a brief overview:

  • Accuracy: studies lack background information on how the model was achieved, and create the false idea that the reconstruction is absolutely certain, while often it is simply one of many interpretations.
  • Lack of human element: based on Thomas (2004a; 2004b) and Tilley (2004), 3D Reconstruction is seen as a purely visual subject, alienated from human experience.
  • Use: 3D models are used simply to present sites, and are seen as add-ons. In reality, they provide great scope for interpretation.

The first paper I would like to mention is “Digital reconstruction and visualisation in archaeology” by Dell’Unto et al. (2013). On the subject of accuracy in 3D Reconstruction, Dell’Unto et al. propose the use of a series of levels of reconstruction: by identifying and recording the sources for each portion of the model, it is possible to assess the relative accuracy of each part. The first level of reconstruction is based on in situ elements, which are nearly certain, while the last level is dedicated to purely hypothetical reconstructions. This is a great approach, as it means the modeller is accountable for the model, but at the same time has the freedom to experiment, since the recording makes clear what has been done.


Levels of reconstruction by Dell’Unto et al. (2013).

On the topic of human experience, I can mention a few papers that implicitly refute the ideas proposed by Tilley and Thomas. While 3D modelling is indeed an exceptionally visual subject, it is not simply about looking at images. An entire current of thought deals with ‘presence’, the feeling of ‘being there’ a person gets when exploring a 3D environment. It seems that people get involved with the models to the point that they ‘experience’ them, as if they were present on site. So while Tilley and Thomas reject visualisation because they prefer to explore the site in first person, I would argue that you can do exactly that with 3D Reconstruction. An author I have come across who deals with ‘presence’ is Ch’ng (2009; Ch’ng and Stone 2006; Ch’ng et al. 2011), although I think the forerunner of this field is Chalmers (2002; Chalmers and DeBattista 2009; Devlin and Chalmers 2001; Devlin et al. 2003; Gutierrez et al. 2006). He has made exceptional steps in recreating archaeological sites with near perfect realism, in order to increase the sense of ‘presence’ people experience. His work on illumination is unmatched, and his articles are well worth a read.


Different lighting effects as studied by Chalmers and DeBattista (2009).

Additionally, the work of Dawson and Levy (Levy and Dawson 2006; Dawson et al. 2007, 2011, 2013) is an exceptional testimony of how people respond to 3D environments. They recreated a Thule whalebone house and then invited community members to explore it, recording their reactions and showing their emotional attachment.

Dawson and Levy are also prime examples of 3D Reconstruction being used for interpretation, as their analysis of hut building showed that there was significant reuse of bone structures. Many others have explored the utility of this type of technology, so it is hard to pinpoint individual papers of significance. Champion et al. (2012) use gaming software to educate users on the archaeology of the city of Palenque. This is by far one of the best studies I have encountered, and I believe they are the forerunners of ‘serious games’ for archaeology.


Another view of the ‘serious game’ by Champion et al. (2012).

Similarly, the work of Forte et al. (2012) shows much promise in the same area. Finally, the work of Murgatroyd et al. (2011) is not strictly related to reconstruction, but the simulations they have run on Byzantine army movement are very important for understanding the reach of scripting, which can be combined with 3D reconstruction.

I hope this has provided you with an overview of the potential applications of 3D software. Over the next couple of weeks I aim to outline the reconstruction process, in order to open up the path to scripting.



Ch’ng, E. and Stone, R. J. (2006). 3D Archaeological Reconstruction and Visualisation: An Artificial Life Model for Determining Vegetation Dispersal Patterns in Ancient Landscapes. Proceedings of the International Conference on Computer Graphics, Imaging and Visualisation.

Ch’ng, E. (2009). Experimental archaeology: is virtual time travel possible? Journal of Cultural Heritage 10 pp.458-470.

Ch’ng, E., Chapman, H., Gaffney, V., Murgatroyd, P., Gaffney, C. and Neubauer, W. (2011). From sites to landscapes: how computer technology is shaping archaeological practice. IEEE Computer Society 11 pp.40-46.

Chalmers, A. (2002). Very realistic graphics for visualising archaeological site reconstruction. Proceedings of the 18th Spring Conference on Computer Graphics pp. 7-12.

Chalmers, A. and DeBattista, K. (2009). Level of realism for serious games. 2009 Conference in Games and Virtual Worlds for Serious Applications pp.225-232.

Champion, E., Bishop, I. and Dave, B. (2012). The Palenque project: evaluating interaction in an online virtual archaeology site. Virtual Reality 16 pp.121-139.

Dawson, P., Levy, R., Gardner, D. and Walls M. (2007). Simulating the Behaviour of Light inside Arctic Dwellings: Implications for Assessing the Role of Vision in Task Performance. World Archaeology Vol.39 No.1 pp.17-35.

Dawson, P., Levy, R. and Lyons, N. (2011). “Breaking the fourth wall”: 3D virtual worlds as tools for knowledge repatriation in archaeology. Journal of Social Archaeology 11(3) pp.387-402.

Dawson, T., Vermehren, A., Miller, A., Oliver, I. and Kennedy, S. (2013). Digitally enhanced community rescue archaeology. Proceedings of First International Congress on Digital Heritage pp.29-36.

Devlin, K. and Chalmers, A. (2001). Realistic visualisation of the Pompeii frescoes. Proceedings of the 1st International Conference on Computer Graphics, Virtual Reality and Visualisation pp.43-48.

Devlin, K., Chalmers, A. and Brown, D. (2003). Predictive lighting and perception in archaeological representation. UNESCO World Heritage in the Digital Age.

Dell’Unto, N., Leander, A. M., Ferdani, D., Dellepiane, M., Callieri, M., Lindgren, S. (2013). Digital reconstruction and visualisation in archaeology: case-study drawn from the work of the Swedish Pompeii Project. Digital Heritage International Congress pp.621-628.

Forte, M., Lercari, N., Onsurez, L., Issavi, J. and Prather, E. (2012). The Fort Ross Virtual Warehouse Project: A Serious Game for Research and Education. 18th International Conference on Virtual Systems and Multimedia pp.315-322.

Gutierrez, D., Sundstedt, V., Gomez, F. and Chalmers, A. (2006). Dust and light: predictive virtual archaeology. Journal of Cultural Heritage 8 pp.209-214.

Levy, R. and Dawson, P. (2006). Reconstructing a Thule whalebone house using 3D imaging. IEEE MultiMedia. Vol.13 No.2 pp.78-83.

Murgatroyd, P., Craenen, B., Theodoropoulos, G., Gaffney, V. and Haldon, J. (2011). Modelling medieval military logistics: an agent-based simulation of a Byzantine army on the march. Computational and Mathematical Organization Theory Vol.18 No.4 pp.488-506.

Thomas, J. (2004a). Archaeology and Modernity. London: Routledge.

Thomas, J. (2004b). The Great Dark Book: Archaeology, Experience, and Interpretation. In: Earle, T. and Peebles, C. S. A Companion to Archaeology. Oxford: Blackwell Publishing pp.21-36.

Tilley, C. (2004). The materiality of stone: exploration in landscape phenomenology. Oxford: Berg.

Part 1 – Visualisation


“Visualisation” is a term used quite consistently in recent archaeological publications. It refers to the reconstruction of archaeological evidence through the use of computer software, although it originates in the practice of recording sites using 2D drawings, which has been around for a few centuries. Although the meaning of the word and what it entails fluctuates somewhat, I’ve come to identify three types of technologies that fall within this category:

  • Photogrammetry
  • Laser Scanning
  • 3D Reconstruction

Photogrammetry is also referred to as Structure from Motion, and differs from the other methods in that the 3D models are based on photographs (Pedersini et al. 2000). Similarly to laser scanning, the result is a high density point cloud, with photorealistic textures.

Laser scanning is a powerful tool widely used in large scale recording. It uses laser measurements to calculate the position of points across a site, and like Photogrammetry it produces a textured mesh, although laser scanning models are generally much denser and therefore more accurate.

3D reconstruction is the method we will be primarily dealing with. It is less accurate than Photogrammetry and laser scanning, and the results are less realistic. It does however possess some distinct advantages. Reconstructed models are easily manipulated, and often represent elements of a site that have been lost (Fig.1). They can also be combined with gaming software to create interactive environments (I could cite many authors here, but just as a taster I would recommend reading Champion et al. 2012).


Fig.1 – 3D Reconstruction of the site of Ggantija, Gozo.

The three methods have very different aims, and as such it is important to know what you want to achieve before applying them:

  • For small and medium scale recording, Photogrammetry is excellent (Scopigno 2012). It is very cheap and fast, and possesses the accuracy and visual quality necessary for recording and presenting. It is ideal for cataloguing finds or small scale excavations, although it can be used for larger features if necessary (see the current Must Farm excavation models by Donald Horne for more details). The fact that it produces fewer points than laser scanning makes the data easy to manage, and it requires little training.
  • For detailed models and large sites, Laser Scanning is the tool of choice. More expensive and computationally challenging than Photogrammetry, laser scanning creates precise models that are perfect for recording, presenting and some interpreting. A great example is the work of John Meneele, who has been analysing stone decay by comparing models taken in different years. I personally have little experience with laser scanning, but the results I have seen show a lot of promise.
  • 3D Reconstruction is mainly for presentation and interpretation. Although some arguments have been put forward for using 3D reconstructions for metadata recording, this is not where the technology shines (Reilly 1990; Barreau et al. 2013). 3D reconstructions can show a site “as it was” rather than “as it is”, leading to a more vivid understanding of archaeological contexts (Miller and Richards 1995; Lewin and Gross 1996). For the general public it is perfect, and it can be made highly interactive in order to further increase user comprehension of the archaeology. As for interpretation, the use of scripting allows a number of tools to be created in order to answer archaeological questions. One of the projects I have been working on looked at analysing solar alignment at a Maltese site, and through a custom-written script I concluded the site is illuminated on the winter solstice (Fig.2). While Photogrammetry and Laser Scanning shine with precision and photorealism, 3D Reconstruction truly dominates in presentation and interpretation.



Fig.2 – Overview of the script interface.
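The script itself is not reproduced here, but the heart of any solstice-alignment test is the sun's position on a given date. Below is a minimal sketch in plain Python, using the common cosine approximation for solar declination; the real script also handled azimuth and the site's geometry, which are omitted:

```python
import math

def solar_declination(day_of_year):
    """Approximate solar declination in degrees, using the standard
    cosine model (accurate to within about a degree)."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

# The declination bottoms out near -23.44 degrees around the winter
# solstice (day ~355), the extreme a solstice-alignment test checks for.
winter = solar_declination(355)
```

Given the declination and the site's latitude, standard formulae then give the sun's altitude and azimuth, which can drive a directional light in the 3D scene.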

It is important to mention, however, that these methodologies are not mutually exclusive. Little work has been done in combining different techniques, but the results show much promise. A previous article on this blog discussed virtual museums, which combine a 3D reconstructed environment with Photogrammetric models placed within it.

In conclusion, there is a lot of technology out there, and although research is slowly unveiling the advantages of each, there is still much to be discovered. With 3D Reconstruction we are barely scratching the surface, and only in the last 10 years have we had custom written scripts for archaeology. It may take a while, but once we uncover what is possible, archaeology will really reap the rewards.


In the next post I will be looking at previous work in 3D reconstruction, with a few examples of significant projects that have helped shape the methodology.



Barreau, J., Gaugne, R., Bernard, Y., Le Cloiree, G. and Gouranton, V. (2013). The West Digital Conservatory of Archaeological Heritage Project. Digital Heritage International Congress. Vol.1 pp.547-554.

Champion, E., Bishop, I. and Dave, B. (2012). The Palenque project: evaluating interaction in an online virtual archaeology site. Virtual Reality 16 pp.121-139.

Lewin, J. S. and Gross, M. D. (1996). Resolving Archaeological Site Data With 3D Computer Modeling: The Case of Ceren. Acadia pp. 255-266.

Miller, P. and Richards, J. (1995). The Good, the Bad, and the Downright Misleading: Archaeological Adoption of Computer Visualisation. In: Huggett, J. and Ryan, N. Computer Applications and Quantitative Methods in Archaeology. Oxford: Tempus Reparatum pp.19-22.

Pedersini, F., Sarti, A. and Tubaro, S. (2000). Automatic monitoring and 3d reconstruction applied to cultural heritage. Journal of Cultural Heritage 1 pp.301-313.

Reilly, P. (1990). Towards a virtual archaeology. In: Lockyear, K. and Rahtz, S. Computer Applications in Archaeology pp.133-139.

Scopigno, R. (2012). Sampled 3D models for Cultural Heritage: which uses beyond visualisation? Virtual Archaeology Review Vol.3 No.5 pp.109-115.

Agisoft Photoscan

If you have been following this blog for some time, you will know that when it comes to Photogrammetric reconstructions I have always been a strong supporter of 123D Catch by Autodesk. I find that it is by far the easiest program to use, yet the results are still amazing.

Recently they have released a paid Pro version that provides all the same results but allows the program to be used commercially. I think that is utterly brilliant, as it means research is not halted, while the developers still gain revenue when users make money from the software.

Having said all this, I’ve started investigating new software, to see if there is anything that can bring improvement to what I already have with 123D Catch. Up to now, the best solution I have found is Agisoft Photoscan, which I had already used in the past but not to its full extent.


Previously, I only managed to create low quality models, due to an error in the program, but I have now managed to make really good models of both objects and features. As such, here are the pros and cons of Photoscan compared to 123D Catch:


Pros:

  • Quality: This is the best starting point. With medium and high settings the quality can be really good, and generally you get many more points than with 123D Catch. With lower settings, however, the results do fluctuate.
  • Control over process: If you are looking into more complex Photogrammetry, this point is very important. In 123D Catch you upload the images and get the result. That is it. In Photoscan you can go through all the stages (photo alignment, sparse cloud, dense cloud, mesh, texture) and change the settings to improve the finished product. You can even export the points as they are and alter them using other software. It allows much more flexibility.
  • More tools: You can select points, create masks, remove points based on certain parameters and more. Often these are not needed, but on occasion they can be just what you require.
  • Many photos: 123D Catch struggles when you upload more than 50 photographs, and the results suffer. In Photoscan this issue doesn’t exist, and you can easily make models with hundreds of images. This is perfect for making large scale models, as the more images you have, the more accurate it is.
  • No internet required: Often you find yourself in situations where the internet is poor or totally non-existent. Photoscan doesn’t need a connection to an external server to process the model, so it works even when the computer is offline.


Cons:

  • Paid software: Although this is well priced as software goes, for people doing non-commercial work it is always difficult to keep up with the expenses for programs.
  • Memory: 123D Catch uses an external server, which means it does not use your own computing power. If you are trying to get high quality models, Photoscan will put a serious strain on your computer, and often it will even crash due to insufficient memory.
  • Time: I can get a model done in 123D Catch in 5 minutes if the internet is good. The worst I have ever had is probably an hour. With Photoscan I have had to wait actual days for models to be complete, and sometimes I waited many hours before getting an insufficient memory message.
  • Simplicity: If you are just starting out with Photogrammetry 123D Catch is still the easiest of all programs I have used.

Overall, I think I would use 123D Catch for small scale and quick stuff, while Photoscan would be useful for large scale, or when I am trying to research something specific, so I am more in control of the results.
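The stage-by-stage control is worth dwelling on, as it is the main structural difference between the two programs. The sketch below is plain Python with placeholder stage functions, not Photoscan's actual scripting interface; it only illustrates the design idea that exposed stages let you halt the pipeline, inspect or export an intermediate product, and continue later:

```python
# Placeholder pipeline mirroring Photoscan's stage order; each "stage"
# here just labels its product instead of doing real processing.
STAGES = ["photo alignment", "sparse cloud", "dense cloud", "mesh", "texture"]

def run_pipeline(photos, stop_after="texture"):
    """Run the stages in order, keeping every intermediate product so
    any of them can be inspected or exported before continuing."""
    products = {}
    current = photos
    for stage in STAGES:
        current = (stage, current)  # stand-in for the real computation
        products[stage] = current
        if stage == stop_after:
            break
    return products

# Stop after the dense cloud, e.g. to clean stray points in another tool.
partial = run_pipeline(["img_001.jpg", "img_002.jpg"], stop_after="dense cloud")
```

With 123D Catch the whole of this loop is hidden behind one upload; with Photoscan each iteration is a button (or script call) of its own, which is exactly where the extra control comes from.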


Photogrammetric Recording of Subvertical Pits


Up to now in my blog I have been trying to outline the uses of Photogrammetry in the two main areas of archaeology, recording and interpretation. Some things I have discussed were specific to preserving as much data as possible of an archaeological feature or object, by creating a virtual copy of it. Other posts were concerned with what can be done once that model has been made, to further our understanding.

This post is mostly about recording a specific type of feature, but it opens up some possibilities to help interpret them as well.

On some occasions during archaeological excavations we stumble across pits that present particular difficulty when planning. The issue is that the sides of the pit are not gradual or even vertical, but actually undercut the edge, giving the pit a bulging shape. During an excavation at Ham Hill, Somerset, one of the pits had this particular shape due to its use. It was probably used to store grain, and the presence of a smaller opening at the top meant that preservation would have been better.


The plans drawn of the pit were excellent, but even so it is difficult to convey the true shape of the feature using only 2D resources. I therefore created a model of it, taking photographs as I normally would for a regular feature, with the addition of a few more from within the feature itself, taken by lowering the camera into it. The results are as follows:


Not only can you view the feature from the top, but it is even possible to see it from the sides and rotate it that way, making the shape of the feature clear even now that it is gone.


In addition, the bulge is much clearer, and it is easier to draw conclusions on its use. As a permanent record it is excellent: not only do we not lose any information, but we can also gain more than we could from the simple top view.

It also opens up new possibilities. As of yet I have not experimented much with Maya 3D, however I have had a brief overview of how the program works and what it is capable of. One of my colleagues once showed me how to reconstruct a pot from its profile, and then proceeded to calculate the capacity of the finished pot. Theoretically it should be possible to import the finished model of the pit into Maya and use the same algorithm to calculate how much grain the pit could have held at a time, which could help us understand the population density of the area, as well as answer a lot of other interesting questions.
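I have not tried this in Maya, but the underlying calculation is standard: the volume enclosed by a closed, outward-oriented triangle mesh is the sum of the signed volumes of the tetrahedra formed by each face and the origin (an application of the divergence theorem). A minimal sketch in plain Python, with a toy mesh rather than a real pit model:

```python
def mesh_volume(vertices, faces):
    """Volume enclosed by a closed triangle mesh whose triangles are
    wound to face outward, computed as the sum over faces of the signed
    volume of the tetrahedron (origin, v0, v1, v2)."""
    total = 0.0
    for i, j, k in faces:
        (x0, y0, z0) = vertices[i]
        (x1, y1, z1) = vertices[j]
        (x2, y2, z2) = vertices[k]
        # scalar triple product v0 . (v1 x v2)
        total += (x0 * (y1 * z2 - z1 * y2)
                  - y0 * (x1 * z2 - z1 * x2)
                  + z0 * (x1 * y2 - y1 * x2))
    return total / 6.0

# Toy check: a tetrahedron with three unit legs has volume 1/6.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
volume = mesh_volume(verts, tris)
```

Applied to a watertight mesh of the pit (capped across the opening), the same sum gives its capacity in the model's own units.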

And the technology doesn’t stop here. This may be a very specific example, but the same ideas can be applied to many different kinds of features. Those with unusual bases can be easily recorded by modelling them, and stone structures can be perfectly copied digitally rather than only drawn by hand. There are still a lot of applications to discover.


Bigger And Better: Photogrammetry Of Buildings


Photogrammetry is definitely the “new” thing in archaeology, slowly clawing its way into how we treat and record archaeological sites. As far as its potential goes, though, there is still a lot of research to be done in order to assess the uses of this technology, but also the limits we will have to deal with on a day to day basis.

One of the aspects that has always interested me is the question of scale. We’ve seen before that Photogrammetry deals well with anything ranging from an arrowhead to a small trench (4×6 is the maximum I have managed before, but there is much larger stuff out there). Smaller objects are hard to do cheaply, as the right lenses become essential, but larger subjects should be entirely possible with the right considerations in mind.

123D Catch’s website shows quite a few examples of Photogrammetric reconstruction of buildings, so I tried reconstructing my own, taking as a subject the front façade of Saint Paul’s Cathedral in London. Given that this was mostly for experimental purposes, I just attempted a few things and went through the results looking for patterns and suggestions.

The results can be viewed here: 

Saint Paul Cathedral (click to view in 3D)



As we can see, the results are of course not marvellous, but I am less interested in the results than in the actual process. I took 31 photographs of the building, standing in a single spot and taking as many pictures as necessary to include all parts of the façade, then moving to a slightly different angle. I tried to get as much of the building in a single shot as possible, but its size and the short distance from which I was shooting meant that I had to take more than one shot in some cases.
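How many stations a façade of a given width needs can actually be estimated before visiting the site. A rough sketch in plain Python; the field of view, distance and overlap figures below are invented for illustration, not measurements from Saint Paul's:

```python
import math

def shots_needed(facade_width, distance, fov_deg, overlap=0.6):
    """Estimate how many camera stations cover a flat facade.

    Each shot covers 2 * distance * tan(fov/2) metres horizontally;
    consecutive stations advance by (1 - overlap) of that footprint so
    that neighbouring images share enough features to match.
    """
    footprint = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    step = footprint * (1.0 - overlap)
    return max(1, math.ceil((facade_width - footprint) / step) + 1)

# e.g. a 30 m wide facade shot from 20 m with a 50-degree lens
n = shots_needed(30.0, 20.0, 50.0, overlap=0.6)
```

This only gives a minimum for a flat surface; extra shots of complex areas (as the column problem below shows) only help the matching.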


The lighting was of course not something I could control, but the fact that it was late afternoon meant it was bright enough for everything to be visible, yet not so bright as to wash out the textures and cause problems with contrast. I then used 123D Catch to process the photographs, and to my surprise all of them were included in the final mesh.

The thing that surprises me the most is that, given the photographs I had, the results are actually as good as my most hopeful prediction. There is a clear issue with the tops of surfaces (e.g. tops of ledges), which come out as slopes. This is totally expected, as Photogrammetry usually works with images taken from different heights, while in this case the height couldn’t change. It could, however, be solved by taking images from the surrounding buildings.

The other major problem is the columns, which are in no way column-shaped, and which seem to mess up the structure behind them as well. Given the complexity of the structure this is also expected. 123D Catch treats the mesh as a single continuous surface, and tries to simplify whatever it finds as much as possible. Here the columns are not flat, so the solution 123D Catch has come up with, given the limited data, is to bring the murky background forward and treat it as if it were the space between the columns. This comes down to a large lack of data. Next time the solution will be to concentrate on these trouble areas and take more numerous and calculated shots to aid 123D Catch. It does require more work and some careful assessment of the situation, but it is entirely possible to achieve.

Apart from these problems, the results are very promising. More work needs to be carried out, but it shows that it is possible to reconstruct structures of a certain size, hence pushing once again the limits of Photogrammetry.


Recreating Tower Of London Graffiti Using Photogrammetry

Last weekend I visited the Tower of London, which gave me a great opportunity to try out some of the Photogrammetry ideas I have had in the past few weeks.


Apart from testing 123D Catch out on large monuments and entire palace façades, I decided (thanks to the suggestion of Helene Murphy) to try modelling some of the graffiti made by the prisoners held there. The main aim was to see if I could create models from photographs taken in dimly lit rooms, but also to see how the software would react to glass, as the inscriptions are covered by panes to protect them. I also wanted to retest the Radiance Scaling tool in Meshlab, to increase the crispness of the model and see whether it improved its accuracy.

I concentrated on 3 different pieces of graffiti, which can be viewed here (embedded thanks to the suggestion of alban):


Tower Of London – Graffiti 1 (click to view in 3D)



Tower Of London – Graffiti 2 (click to view in 3D)



Tower Of London – Graffiti 3 (click to view in 3D)


The models turned out better than expected. The glass is entirely invisible; avoiding it required some planning when taking the photos, but it caused no problems in the modelling. This is particularly good, as it means it should be possible to apply the same approach to artefacts displayed behind glass in museums. The lighting conditions themselves caused none of the issues that might have been expected, showing once more the versatility of Photogrammetry.

Running the Radiance Scaling shader in Meshlab also returned really interesting results. In all cases the models became much more aesthetically pleasing, while the curves and dips were emphasised, giving the model extra crispness. Although this seems to me a forced emphasis that could reduce accuracy, the results at the moment suggest it may in some ways increase accuracy instead. This, however, needs to be explored further.


Emphasising Inscriptions Using 123D Catch

One of the most interesting projects I have been working on over the past few months has been trying to emphasise inscriptions on artefacts using Photogrammetry. The theory is that if the model is accurate enough it should be possible for a program to determine the different bumps in a surface and exaggerate them in order to make them easier to identify.


My first attempt was with a lead stopper (which I have posted about before), which appeared to have some form of writing inscribed on its surface. Having made a model using 123D Catch, I ran it through Meshlab and tested many different filters to see if any of them gave me different results. One in particular seemed to do exactly what I wanted, changing the colour of the texture based on the form of the surface: the Colourise curvature (APSS) filter, with a very high MLS (I went for 50000) and Approximate mean as the curvature type. The result was some form of inscription, clearer than in the photographs, but still quite murky.
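For readers unfamiliar with what such a filter does conceptually, the idea can be sketched in a few lines: estimate a curvature value per vertex, normalise it, and map it onto a colour ramp so that dips and ridges stand out in the texture. This is a deliberately simplified illustration, not the actual APSS algorithm Meshlab implements; the function name and the sample curvature values are hypothetical.

```python
def colourise_by_curvature(curvatures):
    """Map per-vertex curvature values to RGB colours (blue = most
    concave, red = most convex), mimicking curvature colourisation."""
    lo, hi = min(curvatures), max(curvatures)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat mesh
    colours = []
    for c in curvatures:
        t = (c - lo) / span      # normalise curvature to [0, 1]
        r = int(255 * t)         # convex areas shift towards red
        b = int(255 * (1 - t))   # concave areas shift towards blue
        colours.append((r, 0, b))
    return colours

# Curvature values for five vertices of a hypothetical inscription:
print(colourise_by_curvature([-0.2, -0.1, 0.0, 0.1, 0.2]))
```

In Meshlab the curvature estimate itself comes from the APSS surface fit (controlled by the MLS parameter), which is where the real work happens; the colour mapping is the easy part.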


In addition to these scarce results, other tests with different artefacts pointed towards a lucky coincidence rather than an actual method.

One of the main issues I was having was that Meshlab kept crashing when using certain filters or shaders, which meant I could test only some of the possible options. So when I made a model of a series of modern graffiti at Cardiff Castle, the results were also disappointing.

The other day, though, I happened to update my copy of Meshlab, as well as bumping into this interesting article, which gave me the exact solution I was looking for.

One of the shaders I had tried to use, but which crashed every time, was the Radiance Scaling tool, which does exactly what I was aiming for. I ran the graffiti model through it, and the results are amazing. The original model stripped of its textures is a very unclear, blurry surface, but after Radiance Scaling the graffiti come to life. The exact mechanics behind this tool, and hence its accuracy, are something I am currently working on, but at least visually the results are there.
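As far as I understand it, the core idea of Radiance Scaling is to modulate reflected light intensity by surface curvature, so that convex details brighten and concave ones darken. The real shader works in screen space and is far more sophisticated; the following is only a minimal per-vertex sketch of that idea, with an assumed exponential scaling function and illustrative values.

```python
import math

def radiance_scaled_shading(lambert, curvature, alpha=2.0):
    """Scale a Lambertian intensity by an exponential of surface
    curvature: convex regions get brighter, concave ones darker.
    A simplified, per-vertex take on the Radiance Scaling idea."""
    scaled = lambert * math.exp(alpha * curvature)
    return max(0.0, min(1.0, scaled))  # clamp to displayable range

# Same base shading, different curvature: ridges pop, grooves deepen.
flat   = radiance_scaled_shading(0.5, 0.0)    # unchanged, exp(0) = 1
ridge  = radiance_scaled_shading(0.5, 0.15)   # brightened
groove = radiance_scaled_shading(0.5, -0.15)  # darkened
print(flat, ridge, groove)
```

This would explain why the tool reads as a "forced emphasis": it exaggerates the shading contrast of real geometric features rather than inventing new geometry, which is also why it may help rather than hurt legibility.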


If this proves to be an effective method, it would suggest that Photogrammetry is not simply technology for technology’s sake, but it can actually be used to interpret archaeology in ways not available before.

Edit: I’d also like to draw attention to the fact that this model was made entirely using my iPhone.

VisualSFM: Pros and Cons


I’ve been working with Photogrammetry for some years now, and although I use a great variety of programs to edit the 3D models, from Meshlab to Blender, when it comes to the actual creation of the models I have only ever used 123D Catch. This is partially because I now feel very comfortable using the program, having learnt what it requires and how to achieve the perfect model, but also because of the great quality and simplicity that 123D Catch offers.

Only recently have I ventured out of my comfort zone to explore other Photogrammetry programs, to see if any can compare or even replace 123D Catch.

I had a spell with Agisoft Photoscan, with interesting but limited results, so now I have turned my attention to another piece of freeware called VisualSFM. At present I have only tried out a few things, so my opinions will certainly change in the future, but here are the pros and cons I have found so far (compared to 123D Catch):


  • It’s multiplatform: only a few weeks ago a user commented on one of my posts asking for alternatives to 123D Catch, as he couldn’t use it on his Mac. VisualSFM is not restricted to Windows and the iPad app, but supports all major systems, making it a great tool for all those who don’t have a PC but still want to try out Photogrammetry.
  • It allows control of the process: one of the good things about 123D Catch is that it is easy to use, but this comes at the expense of more expert users. Uploading images and getting results with a single click is great, but it’s difficult to understand what is actually happening in between. VisualSFM instead guides you through all the steps, so if anything goes wrong you can pinpoint the problem, or work out which photos are better for research purposes.


  • It works offline: many times I have found myself slowed down by a weak internet connection. With VisualSFM no connection is needed, which also means I can create models on site without Wifi. This makes the whole process more efficient and means I spend less of my free time working on models.
  • The camera placement is more accurate: this is still in testing, but so far I have had no problems with cameras being placed where they shouldn’t be. With 123D Catch a single photo often stitches to the wrong place and causes the entire model to malfunction. With VisualSFM this doesn’t appear to happen.


  • Fewer points: I compared a few models made from the same photographs in the two programs. The results suggest that while VisualSFM’s points may be placed more accurately, there are far fewer of them, making the overall model less accurate. In the pictures, top is VisualSFM, bottom is 123D Catch.



  • Still haven’t finished a model: once the point cloud is created, VisualSFM has finished its job and it becomes Meshlab’s responsibility to actually recreate the surfaces and reattach the textures. So far I have not managed to do this. I’ve talked about Meshlab before, but in short it crashes and malfunctions all the time. It took me days to recreate the surface the first time, as the program refused to do it, and attaching the texture is still something I can’t seem to manage.
  • Needs user control: with 123D Catch you can leave the program running and return to a finished product. With VisualSFM you have to constantly interact with the program, meaning you can’t multitask.
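On the point-count comparison above: since both programs can export point clouds as PLY files, the density difference can be quantified directly from the file headers, which state the vertex count in a standard "element vertex N" line. A minimal sketch, with hypothetical header contents standing in for the real exports:

```python
def ply_vertex_count(header_text):
    """Read the vertex count from a PLY header ('element vertex N')."""
    for line in header_text.splitlines():
        parts = line.strip().split()
        if parts[:2] == ["element", "vertex"]:
            return int(parts[2])
        if line.strip() == "end_header":
            break
    raise ValueError("no vertex element found in PLY header")

# Hypothetical headers from two exports of the same subject:
visualsfm_header = "ply\nformat ascii 1.0\nelement vertex 42000\nend_header"
catch_header = "ply\nformat ascii 1.0\nelement vertex 150000\nend_header"

ratio = ply_vertex_count(catch_header) / ply_vertex_count(visualsfm_header)
print(f"123D Catch cloud is {ratio:.1f}x denser")
```

The counts here are made up for illustration; the point is simply that the comparison can be made objective rather than eyeballed from screenshots.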

Overall, it’s got potential. It’s not going to replace 123D Catch any time soon, but I am still going to try different things out to see how it reacts and find any advantages. For a full description of how to create models using VisualSFM visit here:


Photogrammetric Model Made With iPhone 4S


I’ve experimented before with using my iPhone to create Photogrammetric models (not through the app, just taking the photos and running them through the Windows version of 123D Catch), with interesting but not perfect results. The other day, however, I found myself with a nice, complete, in situ sheep skeleton and no camera, so I took the opportunity to test the technology once again.

I took 49 photos in very good uniform shade, going around the skeleton at first and then concentrating on tricky parts, like the head and the ribs. I then ran them through 123D Catch and found that almost all of them had been stitched. I think the lighting really did the trick, as it created a really nice contrast between the bones and the ground. The photos were taken just after the sun had set, so it was still very light, but with no glare.


The skeleton itself looks extremely good compared to some of my earlier tests. It can be viewed here in rotatable 3D:
I particularly like the relatively sharp edges that I really couldn’t achieve with the other models, and by looking at the point cloud I found it to be quite accurate regardless of textures. In addition, it has coped excellently with the rib that pokes out of the ground and with the pelvis, both of which I was absolutely sure it would struggle with. Overall I’d say the model is nearly as good as some of those I have made with a standard camera, and I think the potential is definitely there.
The only issue I have with using the iPhone camera is that it is still an unreliable method. I tried replicating the results today, as the skeleton had been cleaned further, but the new model is blurrier, again probably due to slightly less ideal lighting conditions. Therefore I would still use my camera as much as possible, and save the iPhone for those situations in which I find myself unprepared.
