How Video Games help present and interpret Neolithic Malta.


Hi all,

I gave a talk yesterday as part of Queen’s University Belfast PGR Talks. I had it recorded, as I thought it would be interesting to share some of my recent results here.

I intended this presentation for those who may not be familiar with the topic, so it may be a bit vague at times, but it should provide a useful introduction to 3D reconstruction and gaming software.

Please feel free to share your thoughts and questions in the comment section.


Interpreting and Presenting Archaeological Sites Using 3D Reconstruction: Virtual Exploration of the Xaghra Brochtorff Circle in Gozo.


Hi all,

Just as a heads up, I have uploaded my MPhil dissertation online, so go and check it out.

It’s available here.

It discusses 3D reconstruction in the Maltese islands, as well as using gaming software to analyse archaeological sites.

Feel free to comment here with any questions you may have. I’m hoping to increase communication on this platform this year!



Paper on Using Unity3D for Archaeological Interpretation


Just a quick note to say that I have published an article in Archaeological Science: Reports regarding the use of a Unity3D script to calculate solar alignment at Ggantija, Gozo.

It is unfortunately not as open access as I would like, but I’ve been told that the article will be available for viewing for the next 50 days.

If you have an interest in using 3D Reconstruction for analysis, do check it out, and feel free to get in contact for more information.
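For readers curious about the general idea behind such a calculation: an alignment check boils down to comparing the azimuth of the rising sun on a key date with the monument’s axis. The sketch below is purely illustrative and is not the published Unity3D script; the latitude is approximate and the temple axis value is hypothetical.

```python
import math

def sunrise_azimuth(latitude_deg, declination_deg):
    """Approximate sunrise azimuth in degrees east of north,
    ignoring refraction and horizon altitude:
    cos(A) = sin(declination) / cos(latitude)."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

# Gozo lies at roughly 36 degrees north; at the winter solstice
# the solar declination is about -23.44 degrees.
az = sunrise_azimuth(36.05, -23.44)   # roughly 119-120 degrees (south-east)

# A hypothetical temple axis, for illustration only:
temple_axis = 120.0
aligned = abs(az - temple_axis) < 2.0
```

A game engine answers the same question differently, by casting the sun’s light into the 3D scene, but the underlying geometry is the same.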


Paul Reilly and the origins of 3D Reconstruction

Archaeology is all about looking at the past to understand the present, and in the same spirit, to fully understand the basics of modern theory we have to delve into its origins. For this purpose, today I would like to take you back to 1989, when computer Visualisation was in its infancy.

The paper “Data Visualisation in Archaeology” by Paul Reilly (1989), and the later book “Archaeology and the Information Age” by Reilly and Rahtz (1992), were a crucial stepping stone for popularising 3D Reconstruction and introducing the theoretical background that is still important today.

3D Reconstruction saw its first applications in archaeology as early as 1985, when Woodward created models of Roman Bath and of Caerleon, adapting software originally designed for industrial engineering (Smith 1985; Delooze and Wood 1991; Palamidese et al. 1993). Between then and 1989 a number of models had been created, yet the theoretical background remained quite sterile, partly due to a division of roles between the researcher (the archaeologist) and the modeller (the computer designer).

Reilly’s paper “Data Visualisation in Archaeology” (1989) starts with a common problem in archaeology: the abundance of data. Due to the destructive nature of excavation methodologies, archaeologists resort to extensive recording of contexts, generating vast quantities of information in the process. Reilly demonstrates through examples what it is possible to achieve with this data. Apart from distribution maps, which are more GIS territory, he uses examples of WINSOM models to demonstrate the presentational potential of 3D modelling. More importantly, he argues that

“[Modelling] allows the researcher to demonstrate in strong visual terms how the interpretation relates directly to the collected data. […] it stimulates the researcher to look for further information. This may involve the application of extra analytical experiments on the existing data, or it may require the formulation of a completely new research design to answer the outstanding questions.” – Reilly (1989) p.577


“[…] reconstructions require the modeller to define explicitly each and every element in the model and their spatial relationship to one another. The definition of the model forces the researchers to reconsider the original data, which can focus attention on problem areas and gaps, thus causing them to observe, or record differently, certain types of evidence in a future investigation.” – Reilly (1989) p.578

These ideas are found again in a section Reilly contributed to Burridge et al. (1989), in which he argues that 3D Modelling can bring to light discrepancies in the original data.

Following “Data Visualisation in Archaeology”, Reilly published a series of articles that helped solidify his theories. Reilly and Shennan (1989) look at presentation, arguing that 3D navigation can help understand archaeological contexts by displaying large quantities of data in a small amount of time. “Towards a Virtual Archaeology” (1990) provides an overview of examples in 3D Reconstruction, and demonstrates its use in recreating monuments. It also outlines how this software could be applied to the teaching of archaeological excavation. In his 1991 contribution to Computing in Archaeology, he emphasises the importance of 3D models for analysis and presentation, while also recognising that realistic models may lead to an assumption of “absolute truth”. Many of the concepts expressed here are still exceptionally relevant to modern theory, and have been debated by scholars for the three decades following Reilly’s publications.

His most important contribution is, however, “Archaeology and the Information Age”, edited with Rahtz (1992). This collection of truly fascinating articles is the founding stone for all future 3D Reconstruction, as well as other fields of digital media in archaeology. “Archaeology and the Information Age” explores the use of 3D for interpretation, arguing that pretty pictures should not be the main goal. Through various examples, Reilly demonstrates the potential of 3D modelling for analysis, citing the reconstructions of Sulis Minerva at Bath and many others. Other authors in the book discuss issues of accuracy, simulation and subjectivity (I particularly enjoyed Molyneaux 1992).

Throughout the 1990s Visualisation saw an exceptional rise in popularity, and the theoretical background developed in those years is still applicable today. Yet it all started with a handful of researchers, at the forefront of which was Reilly (with the help of Shennan and Rahtz). If you are just starting to get into Visualisation, reading some of his works is a great place to start.



Burridge, J. M., Collins, B. M., Galton, B. N., Halbert, A. R., Heywood, T. R., Latham, W. H., Phippen, R. W., Quarendon, P., Reilly, P., Ricketts, M.V., Simmons, J., Todd, S. J. P., Walter, A. G. N. and Woodwark, J. R. (1989). The WINSOM solid modeller and its application to data visualisation. IBM Systems Journal pp.548-568.
Delooze, K. and Wood, J. (1991). Furness Abbey Survey Project – The Application of Computer Graphics and Data Visualisation to Reconstruction Modelling of an Historic Monument. Computer Applications and Quantitative Methods in Archaeology pp.140-148.
Molyneaux, B. (1992). From virtual to actuality: the archaeological site simulation environment. In: Reilly, P. and Rahtz, S. Archaeology and the Information Age pp.192-198.
Palamidese, P., Betro, M. and Muccioli, G. (1993). The Virtual Restoration of the Visir Tomb. Visualisation pp.420-424.
Reilly, P. (1989). Data visualisation in archaeology. IBM Systems Journal 28(4) pp.569-579.
Reilly, P. (1990). Towards a virtual archaeology. In: Lockyear, K. and Rahtz, S. Computer Applications in Archaeology pp.133-139.
Reilly, P. and Rahtz, S. (1992). Archaeology and the Information Age. Routledge: London.
Reilly, P. and Shennan, S. (1989). Applying Solid Modelling and Animated Three-Dimensional Graphics. Surface And Solid Modelling and Image Enhancement pp.157-165.
Smith, I. (1985). Sid and Dora’s bath show pulls in the crowd. Computing pp.7-8.


Functionality vs Realism in 3D Modelling

I’m currently looking through the literature on 3D reconstruction as part of my PhD, and I thought I would share some useful points I am gathering through this process. I’ve specifically been looking at publications prior to the year 2001, attempting to uncover the ideas that created this field in the first place and the theoretical and technical background to the methodology. Many of the arguments I am encountering resonate strongly today, and are crucial for understanding present discussions.

One of the most interesting points I have come across is the debate between those who strive for realism in reconstruction, and those who reject it.


Realistic Modelling

‘Realist’ archaeologists are harder to spot, as the computer limitations of the time allowed for little realism. Collins et al. (1993), Burton et al. (1997), Novitski (1998), Worthing and Counsell (1999) and Addison (2000) strive for photorealism in their models, tacitly expressing the need for faithful representations. Later authors such as Gutierrez et al. (2006) and especially Chalmers (Devlin and Chalmers 2001; Chalmers 2002; Chalmers and Debattista 2009) have attempted to create absolute models through the careful reconstruction of the environment, as well as of the architecture itself.


Functional modelling

In the earlier literature the advocates of a functional style are more vocal. Fletcher and Spicer (1992), Salisbury et al. (1994), Winkenbach and Salesin (1994), Lansdown and Schofield (1995), Miller and Richards (1995), Pang et al. (1997), Roberts and Ryan (1997) and Strothotte et al. (1999) express dissatisfaction with realistic models, preferring a more subtle and accurate representation. Their main concern is with the risk of ‘absolute truth’, as

“[…] a photorealistic image […] suggests that detailed information has been amassed about the objects being shown. Such images […] lead(s) viewers to the conclusion that the information is correct and contains a high degree of certainty and accuracy.” – Strothotte et al. (1999 p.1).



I have already discussed the issues of accuracy and the representation of uncertainty, so I will not delve into this subject here. It is however important to note that realistic and functional researchers are coming from two very distinct starting points. Realistic modellers tend to focus on the user experience. The reconstructions are designed for presentation, and the more visually stunning the result is, the more the users will feel involved with it. It is a path which leads to ‘presence’, haptic devices and interactive models.

Frieman and Gillings (2007) analyse how people ‘perceive’ 3D reconstructions and virtual environments. Not only do they advocate for more realistic experiences, but they believe that this experience must encapsulate all senses.

“Instead, we have argued that, rather than analyse how space is viewed, we should fold vision back into the mix of the sensorium and focus instead on how space is perceived.” – Frieman and Gillings (2007 p.9).

Functional modellers are interested in the interpretation of archaeological data. In the words of van Dam et al.:

“Scientific visualization isn’t an end in itself, but a component of many scientific tasks that typically involve some combination of interpretation and manipulation of scientific data and/or models. To aid understanding, the scientist visualizes the data to look for patterns, features, relationships, anomalies, and the like. Visualization should be thought of as task driven […]” – van Dam et al. (2000 p.27).

Although 3D Reconstructions are a visual means of presenting data, this does not mean they are merely an end product. They have the potential to be used for interpretation, and as such they need to be simplified and abstract:

“In order to communicate […] complex information effectively, some form of visual abstraction is required.” – Winkenbach and Salesin (1994 p.1).

Functional modelling is ideal for the exchange of information, as too much detail can distract from the primary focus of the model. For presentation to the public, non-photorealistic models are not as involving, but they can be purposed to explain certain elements, and are especially important for the presentation of uncertainty (Winkenbach and Salesin 1994; Lansdown and Schofield 1995).


Intermediate models

Some authors approach the question differently, reaching conclusions that draw from both sides of the argument. One of the most interesting articles on this topic is “Engaging Place” by Mark Gillings (1997). Gillings uses the term ‘hyperreal’ to describe 3D reconstructions, as for him the models are a separate entity from what they represent, going beyond the original. His main focus is on engagement. Researchers can strive for authenticity, but they will never be able to replicate the same conditions perfectly. No amount of detail inputted can accurately record the shape of every stone or the lighting of a room. Gillings however suggests that this is not even necessary, as the model’s ‘experiential depth’ is of higher importance.

Lansdown and Schofield (1995) emphasise the subjective nature of Visualisation, describing how even photographs are subjective. The position of the camera, the single-moment recording and the limitations of the frame all mean the photographer has substantial input on the image. Similarly, even the most accurate of models are still based on subjective observations, and the way they are presented cannot be objective or ‘perfect’.



Personally, I believe the division between realistic and functional models is unnecessary, as they deal with completely separate issues and are not truly in conflict with one another, provided we adopt a thoughtful methodology. If the aims of the project are clear and the research thorough, then the models can assimilate aspects of either ideology. With regards to presentation, if the reconstruction’s aim is to explain or investigate specific elements, then a non-photorealistic model will provide a much better basis for research. If instead the project aims to evoke an emotional response from the user, photorealism will be more successful.



Addison, A. C. (2000). Emerging Trends in Virtual Heritage. IEEE Multimedia Vol.7 No.2 pp.22-25.

Burton, N. R., Hitchen, M. E. and Bryan, P.G. (1997). Virtual Stonehenge: a fall from disgrace? Proceedings of CAA 97 pp.16-21.

Chalmers, A. (2002). Very realistic graphics for visualising archaeological site reconstruction. Proceedings of the 18th Spring Conference on Computer Graphics pp. 7-12.

Chalmers, A. and Debattista, K. (2009). Level of realism for serious games. 2009 Conference in Games and Virtual Worlds for Serious Applications pp.225-232.

Collins, B., Williams, D., Haak, R., Trux, M., Herz, H., Genevriez, L., Nicot, P., Brault, P., Coyere, X., Krause, B., Kluckow, J. and Paffenholz, A. (1993). The Dresden Frauenkirche – rebuilding the past.  In Wilcock, J. and Lockyear, K. Computer Applications and Quantitative Methods in Archaeology Oxford pp.19-24.

Devlin, K. and Chalmers, A. (2001). Realistic visualisation of the Pompeii frescoes. Proceedings of the 1st International Conference on Computer Graphics, Virtual Reality and Visualisation pp.43-48.

Fletcher, M. and Spicer, D. (1992). The display and analysis of ridge-and-furrow from topographically surveyed data. In: Reilly, P. and Rahtz, S. Archaeology in the Information Age pp.59-76.

Frieman, C. and Gillings, M. (2007). Seeing is perceiving? World Archaeology. Vol.39 No.1. Viewing space pp.4-16.

Gillings, M. (1997). Engaging Place: a Framework for the Integration and Realisation of Virtual-Reality Approaches in Archaeology. In: Dingwall, L., Exon, S., Gaffney, V., Laflin, S. and van Leusen, M. Archaeology in the Age of the Internet.

Gutierrez, D., Sundstedt, V., Gomez, F. and Chalmers, A. (2006). Dust and light: predictive virtual archaeology. Journal of Cultural Heritage 8 pp.209-214.

Lansdown, J. and Schofield, S. (1995). Expressive Rendering: A Review of Nonphotorealistic Techniques. IEEE Computer Graphics and Applications pp.29-37.

Miller, P. and Richards, J. (1995). The Good, the Bad, and the Downright Misleading: Archaeological Adoption of Computer Visualisation. In: Huggett, J. and Ryan, N. Computer Applications and Quantitative Methods in Archaeology. Oxford: Tempus Reparatum pp.19-22.

Novitski, B. J. (1998). Reconstructing lost architecture. Computer Graphics World Vol.21 No.2.

Pang, A. T., Wittenbrink, C. M. and Lodha, S. K. (1997). Approaches to uncertainty visualisation. The Visual Computer pp.370-390.

Roberts, J. C. and Ryan, N. (1997). Alternative Archaeological Representations within Virtual Worlds. In: Brown, R. 4th UK Virtual Reality Specialist Interest Group Conference – Brunel University pp.182-196.

Salisbury, M. P., Anderson, S. E., Barzel, R. and Salesin, D. H. (1994). Interactive pen-and-ink illustration. Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques pp.101-108.

Strothotte, T., Masuch, M. and Isenberg, T. (1999). Visualizing Knowledge about Virtual Reconstructions of Ancient Architecture. Computer Graphics International.

Van Dam, A., Forsberg, A. S., Laidlaw, D. H., LaViola, J. J., and Simpson, R. M. (2000). Immersive VR for Scientific Visualisation: A Progress Report. Computer Graphics and Applications Vol.20 No.6 pp.26-52.

Winkenbach, G. and Salesin, D. H. (1994). Computer-generated pen-and-ink illustration. Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques pp.91-100.

Worthing, D. and Counsell, J. (1999). Issues arising from computer-based recording of heritage sites. Structural Survey Vol.17 No.4 pp.200-210.

A Unity 3D script for displaying uncertainty in 3D Reconstructions.

Following up from my latest post, I wanted to share with you one of the solutions I used as part of my MPhil project.

As discussed, a real problem with 3D Reconstruction in archaeology is the subjectivity of the modelling process. While Photogrammetry and Laser Scanning record archaeological features as they are, reconstructions rely on information of varying accuracy. Strothotte et al. (1999) pointed out that uncertainty in Visualisation is caused by imprecision and incompleteness. Site reports tend to present a limited range of data, usually in a 2D format that may not translate to 3D, which causes imprecision (Worthing and Counsell 1999). And while the interpretation of a site can be built on incomplete information, 3D Reconstruction requires answers to very specific questions that can rarely be answered by the archaeological evidence.

In a bid to display this uncertainty, some projects have displayed the entire model using wireframes or point clouds (Richards 1998). I propose a similar solution: using a Unity3D script heavily based on work by Naojitaniguchi (2015), toggles switch the display between fully rendered, wireframe and removed. The most uncertain elements are tagged for removal, leaving the more accurate features intact.

Within the scene, the wireframe looks like so:



The Player script initiates the process by attaching the VertexRenderer component to the objects targeted for removal when a button is pressed:

void RemoveArchaeology(){
	if (Input.GetAxisRaw ("Remove") != 0 && archaeologyRemoved == 0 &&
		removeAxisInUse == false) {
		archaeologyRemoved = 1;
		removeAxisInUse = true;

		//Renders the uncertain objects as wireframes.
		foreach (GameObject go in gameObjectArray) {
			go.AddComponent<VertexRenderer> ();
		}
	} else if (Input.GetAxisRaw ("Remove") != 0 && archaeologyRemoved == 1 &&
		removeAxisInUse == false) {
		archaeologyRemoved = 2;
		removeAxisInUse = true;

		//Removes the wireframes and hides the uncertain objects.
		foreach (GameObject go in gameObjectArray) {
			VertexRenderer vertexRenderer = go.GetComponent<VertexRenderer> ();
			vertexRenderer.RevertToStart ();
			Destroy (vertexRenderer);
			go.SetActive (false);
		}
	} else if (Input.GetAxisRaw ("Remove") != 0 && archaeologyRemoved == 2 &&
		removeAxisInUse == false) {
		archaeologyRemoved = 0;
		removeAxisInUse = true;

		//Restores the objects to their fully rendered state.
		foreach (GameObject go in gameObjectArray) {
			go.SetActive (true);
		}
	}

	//Resets the flag once the button is released, so one press
	//advances the cycle by a single step.
	if (Input.GetAxisRaw ("Remove") == 0) {
		removeAxisInUse = false;
	}
}

Then, the VertexRenderer script replaces the model with lines:

using UnityEngine;
using System.Collections;

//Code written by R. P. Barratt
//Heavily based on a script by Naojitaniguchi (2015).

//Renders the selected elements as lines.

public class VertexRenderer : MonoBehaviour {

	public Color lineColor;
	public Color backgroundColor;

	private Vector3[] lines;
	private ArrayList linesArray;
	private Material lineMaterial;
	private MeshRenderer meshRenderer;
	private Material initialMaterial;

	public void Start () {

		//Finds the components.
		GetComponent<Renderer> ().enabled = false;
		meshRenderer = GetComponent<MeshRenderer> ();
		if (!meshRenderer) {
			meshRenderer = gameObject.AddComponent<MeshRenderer> ();
		}

		//Saves the initial material.
		SaveInfo ();

		//Finds the line material and sets it.
		Shader shader1 = Shader.Find ("Lines/Background");
		meshRenderer.material = new Material (shader1);
		Shader shader2 = Shader.Find ("Lines/Colored Blended");
		lineMaterial = new Material (shader2);
		lineMaterial.hideFlags = HideFlags.HideAndDontSave;
		lineMaterial.shader.hideFlags = HideFlags.HideAndDontSave;

		//Creates a list of lines based on the mesh.
		linesArray = new ArrayList ();
		MeshFilter filter = GetComponent<MeshFilter> ();
		Mesh mesh = filter.sharedMesh;
		Vector3[] vertices = mesh.vertices;
		int[] triangles = mesh.triangles;

		for (int i = 0; i < triangles.Length / 3; i++) {
			linesArray.Add (vertices [triangles [i * 3]]);
			linesArray.Add (vertices [triangles [i * 3 + 1]]);
			linesArray.Add (vertices [triangles [i * 3 + 2]]);
		}

		lines = new Vector3[triangles.Length];
		for (int i = 0; i < triangles.Length; i++) {
			lines [i] = (Vector3)linesArray [i];
		}

		//Sets the background material.
		meshRenderer.sharedMaterial.color = backgroundColor;
	}

	public void OnRenderObject(){

		//Activates the line material, then draws the three edges
		//of every triangle in the mesh.
		lineMaterial.SetPass (0);
		GL.PushMatrix ();
		GL.MultMatrix (transform.localToWorldMatrix);
		GL.Begin (GL.LINES);
		GL.Color (lineColor);

		for (int i = 0; i < lines.Length / 3; i++) {
			GL.Vertex (lines [i * 3]);
			GL.Vertex (lines [i * 3 + 1]);

			GL.Vertex (lines [i * 3 + 1]);
			GL.Vertex (lines [i * 3 + 2]);

			GL.Vertex (lines [i * 3 + 2]);
			GL.Vertex (lines [i * 3]);
		}

		GL.End ();
		GL.PopMatrix ();
	}

	void SaveInfo(){

		//Saves the initial material.
		initialMaterial = meshRenderer.material;
	}

	public void RevertToStart(){

		//Returns to the initial material.
		GetComponent<Renderer> ().enabled = true;
		meshRenderer.material = initialMaterial;
	}
}
The script currently requires Unity3D to run, but I am sure it can be adapted for other platforms.

The purpose of this script is to give users a choice of display options, presenting the finished model with a more realistic skin while also allowing for a more accurate representation of the underlying evidence.
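For those who do not use Unity, the toggle logic above is just a small three-state cycle with edge detection on the button. Here is a minimal Python sketch of the same behaviour (the class and state names are my own, not part of the Unity script):

```python
# The three display states, in the order the toggle cycles through them.
STATES = ["rendered", "wireframe", "removed"]

class UncertaintyToggle:
    def __init__(self):
        self.state = 0            # index into STATES
        self.axis_in_use = False  # edge detection: one step per press

    def update(self, button_pressed):
        """Advance the display state once per button press."""
        if button_pressed and not self.axis_in_use:
            self.state = (self.state + 1) % len(STATES)
            self.axis_in_use = True
        elif not button_pressed:
            self.axis_in_use = False
        return STATES[self.state]

toggle = UncertaintyToggle()
toggle.update(True)    # -> "wireframe"
toggle.update(True)    # button held down: still "wireframe"
toggle.update(False)   # released: still "wireframe"
toggle.update(True)    # -> "removed"
```

Holding the button down advances the state only once; it must be released before the next step, which is what the removeAxisInUse flag achieves in the Unity version.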


Naojitaniguchi (2015). WireFrame. Available online; last accessed 20th Oct 2017.

Richards, J. D. (1998). Recent Trends in Computer Applications in Archaeology. Journal of Archaeological Research Vol.6 No.4 pp.331-382.

Strothotte, T., Masuch, M. and Isenberg, T. (1999). Visualizing Knowledge about Virtual Reconstructions of Ancient Architecture. Computer Graphics International.

Worthing, D. and Counsell, J. (1999). Issues arising from computer-based recording of heritage sites. Structural Survey Vol.17 No.4 pp.200-210.

Part 4 – SketchUp Basics

In the last sections we have been looking at some of the theory concerning 3D Reconstruction. Hopefully we have set the foundations for discussions of site presentation and interpretation that we will return to in further sections. It is however time to start talking about one of the most commonly used programs for reconstruction, Google SketchUp.

Fig.1 – An overview of Google SketchUp.

The first question is: why SketchUp? Out of all the software that is available, I think this is the easiest to learn. Programs such as 3ds Max have many advantages that set them apart, yet for all the limitations SketchUp may have, it is perfect for those who have little knowledge of modelling software. The range of tools is enough to create fairly accurate models, and combining it with V-Ray produces good quality 2D renders. Additionally, the interface is intuitive. The only elements I would criticise are rounded surfaces and textures, which are much harder to work with and may require additional software such as Photoshop.

For navigation, Google SketchUp utilises a combination of a panning, a rotating and a zooming tool. These are used to move around the scene without interfering with the model itself. Similarly, a moving, a rotating and a scaling tool allow manipulation of an object in the scene. The main feature to use, though, is the push/pull tool, which allows simple surfaces to become three-dimensional elements.

When modelling a site, I tend to start from a plan; it is in fact much easier to work from the ground surface upwards. You can easily drag and drop a plan into the scene. If you want the model to be to scale, the plan itself can be adjusted using the measuring and scaling tools accordingly. Once the plan is in place it is possible to trace over all major features, to start creating a floor plan of the site. The ‘pencil’ can be used to trace over most lines, but the rectangle, circle and arc tools are also essential for some portions of the plans. Curved surfaces cause some issues with the software, so I tend to use smaller straight lines whenever possible.

Fig.2 – Very simple pencil drawing over a 2 dimensional plan of a stone circle.

Once the plan is sketched out, it is time to push and pull surfaces. With any luck, the surfaces should have closed automatically, as long as every line finishes at the same point it started. In some cases the line tends to snap to the axes, so watch whether it changes colour before clicking. The push/pull tool allows you to extend the selected surface outwards or inwards, and it can be used to give objects height or depth. In the bottom corner SketchUp displays the distance that the surface is being extended, so it is possible to apply precise measurements. Most of the reconstruction process involves drawing surfaces upon other surfaces and pushing/pulling them to create the details of the model.

Fig 3 – One of the stones after having used the push/pull tool.

One of the functions I find most useful, however, is the ‘component’ tool, which divides different elements into compact objects that don’t interfere with one another. Pulling a surface over another can often cause the two to become conjoined, making it difficult to modify a specific part after the initial modelling. By selecting a single entity in the model (for example, a table) and making it into a component, you can then move it freely without worrying about the rest of the model, as well as copy it without having to remodel it. Ideally, every different part of the reconstruction should be its own component, part of a hierarchy: larger components are themselves composed of smaller components, so that it is possible to operate at many different levels. Using components efficiently also simplifies the transition between SketchUp and Unity3D. Bear in mind however that components that have been copied will maintain a connection with the original, and any change applied to a single ‘instance’ of a component will also be applied to all other copies.

Fig.4 – A rough component of a megalithic stone, made using lines and push/pulling.

In the next part I will continue the discussion regarding Google SketchUp modelling to incorporate detail building, rendering and materials.

Part 3 – Sources and Paradata

Before going into the bulk of how to model an archaeological site and why to do it, I would like to spend a moment discussing the research that should be at the basis of the model itself. The fact that 3D Reconstruction is in its infancy brings many advantages and disadvantages to the table. On the one hand, it is exciting to think there is so much we do not know, as it means endless applications are there just waiting to be discovered. On the other hand, there is a distinct lack of consistent methodology between projects, and while some publications are clearly founded on extensive research (Dawson et al. 2011 amongst many others), others seem to be more loosely interpreted.

Fig.1 – The first steps in modelling, based on a plan of the site to scale.

This is one of the reasons behind ‘paradata’, a term that has recently been applied to the field. To understand paradata we need to first discuss metadata. Especially in computer science, many authors have lamented the inability to replicate experiments involving code (Hafer and Kirkpatrick 2009; Boon 2009; Ducke 2012; Hayashi 2012). In the words of Marwick:

“This ability to reproduce the results of other researchers is a core tenet of scientific method, and when reproductions are successful, our field advances” (Marwick 2016, p.1).


“A study is reproducible if there is a specific set of computational functions/analyses (usually specified in terms of code) that exactly reproduce all of the numbers and data visualizations in a published paper from raw data” (Marwick 2016, p.4).

Essentially the argument is that publishing results is not enough: we should also include additional information, such as the settings used in a program or the raw code. This collection of information is referred to as ‘metadata’. Some authors have taken it a step further, arguing that we should include descriptions of the process, a discussion of the choices made and the probabilities involved (Denard 2009; Beacham 2011; D’Andrea and Fernie 2013). This ‘paradata’ is best described in the London Charter, which is the first attempt at creating a methodology for 3D Reconstruction:

“Documentation of the evaluative, analytical, deductive, interpretative and creative decisions made in the course of computer-based visualisation should be disseminated in such a way that the relationship between research sources, implicit knowledge, explicit reasoning, and visualisation-based outcomes can be understood” (Denard 2009, pp.8-9).

Given that a major critique of 3D Reconstruction concerns accuracy (Miller and Richards 1995; Richards 1998; Devlin et al. 2003; Johnson 2009), paradata is our way to defend ourselves. While it is impossible to create the perfect model, demonstrating the process behind the reconstruction allows users to understand the interpretation given and draw their own conclusions.

Fig.2 – A highly speculative Roman Villa. Without knowledge of the process it is impossible to know how accurate each element is.

One of the elements we have mentioned previously is sources. While the process itself has to be methodical in order to gain accurate results, the sources provide the framework upon which the interpretation can take place. It is therefore essential that the sources are well researched and well documented. For this purpose I like the classification proposed by Dell’Unto et al. (2013), which sees different categories based on accuracy:

  • Reconstruction by Objectivity: sources based on in situ elements, like plans, 3D scans, archives.
  • Reconstruction by Testimony: illustrations, literary sources, notes.
  • Reconstruction by Deduction: elements that can be deduced from in situ remains, but that are not actually there.
  • Reconstruction by Comparison: based on other sites, this is actually quite an important one as a lot of features carry on between remains of the same regions.
  • Reconstruction by Analogy of Styles: especially for decoration, looking at other stylistic elements that have been preserved can help make the whole model look more realistic.
  • Reconstruction by Hypothesis: an essential part of reconstruction, but the most inaccurate.

Of course, the more we go down this ladder, the more inaccurate the sources are. Yet it is by combining all the different sources that we get the finished model. Paradata can help with determining which sources were used for each part of the model, and therefore provide information of the model as a whole.
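As a purely illustrative sketch of how paradata could make this explicit (in Python; the element names and category keys below are my own invention, not a published schema), each part of the model can be tagged with its source category and the weakest links reported:

```python
# The six categories, ordered from most to least accurate (1 = most reliable).
CATEGORIES = {
    "objectivity": 1,   # in situ elements: plans, 3D scans, archives
    "testimony": 2,     # illustrations, literary sources, notes
    "deduction": 3,     # deduced from in situ remains
    "comparison": 4,    # based on other sites in the region
    "analogy": 5,       # stylistic analogy, e.g. for decoration
    "hypothesis": 6,    # essential, but the most inaccurate
}

def least_reliable(paradata):
    """Return the model elements built on the weakest sources."""
    worst = max(CATEGORIES[src] for _, src in paradata)
    return [element for element, src in paradata if CATEGORIES[src] == worst]

# A hypothetical paradata record for a small model.
model_paradata = [
    ("floor plan", "objectivity"),
    ("wall height", "deduction"),
    ("roof structure", "comparison"),
    ("wall paintings", "hypothesis"),
]

least_reliable(model_paradata)   # -> ["wall paintings"]
```

Even this minimal record lets a user see at a glance which parts of the finished model rest on hypothesis rather than evidence.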

Fig.3 – Partly completed model of a Greek house, based on a plan, excavation reports and site comparison.

In conclusion, there are many sources that can be used when constructing a model, and although some are more precise than others, all of them contribute to the final result. If they are applied methodically and the process is recorded, we can provide an accurate and reliable reconstruction.

Over the next posts I will start looking at SketchUp for modelling, although the ideas will carry over to other software such as 3ds Max.


Beacham, R. C. (2011). Concerning the Paradox of Paradata. Or, “I don’t want realism; I want magic!”. Virtual Archaeology Review Vol.2 No.4 pp.49-52.

Boon, P., Van Der Maaten, L., Paijmans, H., Postma, E. and Lange, G. (2009). Digital Support for Archaeology. Interdisciplinary Science Reviews 34:2-3 pp.189-205.

D’Andrea, A. and Fernie, K. (2013). CARARE 2.0: a metadata schema for 3D Cultural Objects. Digital Heritage International Congress Vol.2 pp.137-143.

Dawson, P., Levy, R. and Lyons, N. (2011). “Breaking the fourth wall”: 3D virtual worlds as tools for knowledge repatriation in archaeology. Journal of Social Archaeology 11(3) pp.387-402.

Dell’Unto, N., Leander, A. M., Ferdani, D., Dellepiane, M., Callieri, M., Lindgren, S. (2013). Digital reconstruction and visualisation in archaeology: case-study drawn from the work of the Swedish Pompeii Project. Digital Heritage International Congress pp.621-628.

Denard, H. (2009). The London Charter: for the computer-based visualisation of cultural heritage.

Devlin, K., Chalmers, A. and Brown, D. (2003). Predictive lighting and perception in archaeological representation. UNESCO World Heritage in the Digital Age.

Ducke, B. (2012). Natives of a connected world: free and open source software in archaeology. World Archaeology 44:4 pp.571-579.

Hafer, L. and Kirkpatrick, A. E. (2009). Assessing Open Source Software as a Scholarly Contribution. Communication of the ACM Vol.52 No.12 pp.126-129.

Hayashi, T. (2012). Source Code Publishing on World Wide Web. International Conference on Advanced Information Networking and Applications Workshops pp.35-40.

Johnson, D. S. (2009). Testing Geometric Authenticity: Standards, Methods, and Criteria for Evaluating the Accuracy and Completeness of Archaeometric Computer Reconstructions. Visual Resources 25:4 pp.333-344.

Marwick, B. (2016). Computational Reproducibility in Archaeological Research: Basic Principles and a Case Study of Their Implementation. Journal of Archaeological Method and Theory pp.1-27.

Miller, P. and Richards, J. (1995). The Good, the Bad, and the Downright Misleading: Archaeological Adoption of Computer Visualisation. In: Huggett, J. and Ryan, N. Computer Applications and Quantitative Methods in Archaeology. Oxford: Tempus Reparatum pp.19-22.

Richards, J. D. (1998). Recent Trends in Computer Applications in Archaeology. Journal of Archaeological Research Vol.6 No.4 pp.331-382.