Part 4 – SketchUp Basics

In the previous sections we looked at some of the theory behind 3D Reconstruction, hopefully setting the foundations for the discussion of site presentation and interpretation that we will return to later. It is now time to start talking about one of the most commonly used programs for reconstruction: Google SketchUp.

Fig.1 – An overview of Google SketchUp.

The first question is: why SketchUp? Out of all the software available, I think this is the easiest to learn. Programs such as 3Ds Max have many advantages that set them apart, yet for all the limitations SketchUp may have, it is perfect for those with little knowledge of modelling software. The range of tools is enough to create fairly accurate models, and combining it with V-Ray produces good quality 2D renders. Additionally, the interface is intuitive. The only elements I would criticise are rounded surfaces and textures, which are much harder to work with and may require additional software such as Photoshop.

For navigation, Google SketchUp utilises a combination of panning, orbiting and zooming tools. These are used to move around the scene without interfering with the model itself. Similarly, the move, rotate and scale tools allow manipulation of objects in the scene. The main feature, though, is the push/pull tool, which allows simple surfaces to become three-dimensional elements.

When modelling a site, I tend to start from a plan: it is much easier to work from the ground surface upwards. You can easily drag and drop a plan into the scene. If you want the model to be to scale, the plan itself can be adjusted using the measuring and scaling tools. Once the plan is in place it is possible to trace over all major features, to start creating a floor plan of the site. The ‘pencil’ can be used to trace over most lines, but the rectangle, circle and arc tools are also essential for some portions of the plans. Curved surfaces cause some issues with the software, so I tend to use smaller straight lines whenever possible.
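Since SketchUp ultimately treats curves as chains of short straight edges anyway, the workaround is easy to reason about. As a rough illustration (plain Python rather than SketchUp’s own scripting, and the radius and segment count below are arbitrary examples), the vertices for a straight-line approximation of a circular feature can be generated like this:

```python
import math

def circle_points(cx, cy, radius, segments):
    """Approximate a circle as a polygon of short straight edges,
    much as SketchUp discretises curves internally."""
    return [
        (cx + radius * math.cos(2 * math.pi * i / segments),
         cy + radius * math.sin(2 * math.pi * i / segments))
        for i in range(segments)
    ]

# A 24-segment approximation of a stone circle 5 m in radius:
pts = circle_points(0.0, 0.0, 5.0, 24)
print(len(pts))  # 24 vertices to trace with the pencil tool
```

More segments give a smoother curve, but each extra edge adds faces that the push/pull tool must extrude, so a modest count is usually the better trade-off.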

Fig.2 – Very simple pencil drawing over a two-dimensional plan of a stone circle.

Once the plan is sketched out, it is time to push and pull surfaces. Provided every line finishes at the same point it started, the surfaces should have closed automatically. In some cases the line tends to snap to an axis, so watch whether it changes colour before clicking. The push/pull tool allows you to extend the selected surface outwards or inwards, and it can be used to give objects height or depth. In the bottom corner SketchUp shows the distance the surface is being extended, so precise measurements can be applied. Most of the reconstruction process involves drawing surfaces upon other surfaces and pushing/pulling them to create the details of the model.

Fig.3 – One of the stones after using the push/pull tool.

One of the functions I find most useful, however, is the ‘component’ tool, which divides different elements into compact objects that don’t interfere with one another. Pulling a surface over another can often cause the two to become conjoined, making it difficult to modify that specific part after the initial modelling. By selecting a single entity in the model (for example, a table) and making it into a component, you can then move it freely without worrying about the rest of the model, as well as copy it without having to remodel it. Ideally, every distinct part of the reconstruction should be its own component, part of a hierarchy: larger components are themselves composed of smaller components, so it is possible to operate at many different levels. Using components efficiently also simplifies the transition between SketchUp and Unity3D. Bear in mind, however, that copied components maintain a connection with the original: any change applied to a single ‘instance’ of a component will also be applied to all other copies.
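Conceptually, component instancing works like a shared definition with lightweight placed copies. Here is a minimal sketch of that behaviour in plain Python (the class and attribute names are my own invention, not SketchUp’s API): editing the shared definition is visible through every instance, which is exactly why a change to one copy affects all the others.

```python
class ComponentDefinition:
    """The shared geometry: edits here propagate to every instance."""
    def __init__(self, name, geometry):
        self.name = name
        self.geometry = geometry  # e.g. a list of faces

class ComponentInstance:
    """A placed copy: its own position, but shared geometry."""
    def __init__(self, definition, position):
        self.definition = definition
        self.position = position

stone_def = ComponentDefinition("megalith", ["face_1", "face_2"])
stone_a = ComponentInstance(stone_def, (0, 0))
stone_b = ComponentInstance(stone_def, (4, 0))  # a copied instance

# Editing the definition changes both placed copies at once
stone_def.geometry.append("face_3")
print(len(stone_a.definition.geometry))  # 3
print(len(stone_b.definition.geometry))  # 3
```

SketchUp’s “make unique” option corresponds to giving an instance its own fresh definition, breaking the link when you want one copy to diverge.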

Fig.4 – A rough component of a megalithic stone, made using lines and push/pulling.

In the next part I will continue the discussion regarding Google SketchUp modelling to incorporate detail building, rendering and materials.

Part 3 – Sources and Paradata

Before going into the bulk of how to model an archaeological site and why to do it, I would like to spend a moment discussing the research that should be at the basis of the model itself. The fact that 3D Reconstruction is in its infancy brings many advantages and disadvantages to the table. On the one hand, it is exciting to think there is so much we do not know, as it means endless applications are just waiting to be discovered. On the other hand, there is a distinct lack of consistent methodology between projects, and while some publications are clearly founded on extensive research (Dawson et al. 2011 amongst many others), others seem to be more loosely interpreted.

Fig.1 – The first steps in modelling, based on a plan of the site to scale.

This is one of the reasons behind ‘paradata’, a term that has recently been applied to the field. To understand paradata we need to first discuss metadata. Especially in computer science, many authors have lamented the inability to replicate experiments involving code (Hafer and Kirkpatrick 2009; Boon 2009; Ducke 2012; Hayashi 2012). In the words of Marwick:

“This ability to reproduce the results of other researchers is a core tenet of scientific method, and when reproductions are successful, our field advances (Marwick 2016, p.1).”

and

“A study is reproducible if there is a specific set of computational functions/analyses (usually specified in terms of code) that exactly reproduce all of the numbers and data visualizations in a published paper from raw data (Marwick 2016, p.4).”

Essentially the argument is that publishing results is not enough: we should also include additional information, such as the settings used in a program or the raw code. This collection of information is referred to as ‘metadata’. Some authors have taken this a step further, arguing that we should also include descriptions of the process, a discussion of the choices made and the probabilities involved (Denard 2009; Beacham 2011; D’Andrea and Fernie 2013). This ‘paradata’ is best described in the London Charter, which is the first attempt at creating a methodology for 3D Reconstruction:

“Documentation of the evaluative, analytical, deductive, interpretative and creative decisions made in the course of computer-based visualisation should be disseminated in such a way that the relationship between research sources, implicit knowledge, explicit reasoning, and visualisation-based outcomes can be understood (Denard 2009, pp.8-9).”

Given that a major critique of 3D Reconstruction concerns accuracy (Miller and Richards 1995; Richards 1998; Devlin et al. 2003; Johnson 2009), paradata is our way of defending ourselves. While it is impossible to create the perfect model, demonstrating the process behind the reconstruction allows users to understand the interpretation given and draw their own conclusions.

Fig.2 – A highly speculative Roman Villa. Without knowledge of the process it is impossible to know how accurate each element is.

One of the elements we have mentioned previously is sources. While the process itself has to be methodical in order to gain accurate results, the sources provide the wireframe upon which the interpretation can take place. It is therefore essential that the sources are well researched and well documented. For this purpose I like the classification proposed by Dell’Unto et al. (2013), which defines different categories based on accuracy:

  • Reconstruction by Objectivity: sources based on in situ elements, like plans, 3D scans, archives.
  • Reconstruction by Testimony: illustrations, literary sources, notes.
  • Reconstruction by Deduction: elements that can be deduced from in situ remains, but that are not actually there.
  • Reconstruction by Comparison: based on other sites; this is actually quite an important one, as many features carry over between remains from the same region.
  • Reconstruction by Analogy of Styles: especially for decoration, looking at other stylistic elements that have been preserved can help make the whole model look more realistic.
  • Reconstruction by Hypothesis: an essential part of reconstruction, but the most inaccurate.

Of course, the further we go down this ladder, the more inaccurate the sources are. Yet it is by combining all the different sources that we get the finished model. Paradata can help with determining which sources were used for each part of the model, and therefore provide information on the model as a whole.
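As a concrete (and entirely hypothetical) illustration of what such paradata might look like in practice, here is a minimal Python sketch that tags each element of a model with one of the source categories above, so the weakest parts of a reconstruction can be identified later. The element names and sources are made-up examples:

```python
# Source categories after Dell'Unto et al.'s classification
CATEGORIES = [
    "objectivity", "testimony", "deduction",
    "comparison", "analogy", "hypothesis",
]

paradata = []

def record(element, category, source, notes=""):
    """Log one paradata entry: which source backs which model part."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown source category: {category}")
    paradata.append({"element": element, "category": category,
                     "source": source, "notes": notes})

record("outer wall", "objectivity", "site plan, 1:50")
record("roof pitch", "comparison", "similar villa in the region")
record("wall paintings", "hypothesis", "no surviving evidence")

# Which parts of the model rest on the weakest sources?
weakest = [r["element"] for r in paradata if r["category"] == "hypothesis"]
print(weakest)  # ['wall paintings']
```

Even a log this simple, published alongside the model, lets a viewer judge the accuracy of each element rather than the reconstruction as an undifferentiated whole.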

Fig.3 – Partly completed model of a Greek house, based on a plan, excavation reports and site comparison.

In conclusion, there are many sources that can be used when constructing a model, and although some are more precise than others, all of them contribute to the final result. If they are applied methodically and the process is recorded, we can provide an accurate and reliable reconstruction.

Over the next posts I will start looking at SketchUp for modelling, although the ideas will carry over to other software such as 3Ds Max.

REFERENCES:

Beacham, R. C. (2011). Concerning the Paradox of Paradata. Or, “I don’t want realism; I want magic!”. Virtual Archaeology Review Vol.2 No.4 pp.49-52.

Boon, P., Van Der Maaten, L., Paijmans, H., Postma, E. and Lange, G. (2009). Digital Support for Archaeology. Interdisciplinary Science Reviews 34:2-3 pp.189-205.

D’Andrea, A. and Fernie, K. (2013). CARARE 2.0: a metadata schema for 3D Cultural Objects. Digital Heritage International Congress Vol.2 pp.137-143.

Dawson, P., Levy, R. and Lyons, N. (2011). “Breaking the fourth wall”: 3D virtual worlds as tools for knowledge repatriation in archaeology. Journal of Social Archaeology 11(3) pp.387-402.

Dell’Unto, N., Leander, A. M., Ferdani, D., Dellepiane, M., Callieri, M., Lindgren, S. (2013). Digital reconstruction and visualisation in archaeology: case-study drawn from the work of the Swedish Pompeii Project. Digital Heritage International Congress pp.621-628.

Denard, H. (2009). The London Charter: for the computer-based visualisation of cultural heritage.

Devlin, K., Chalmers, A. and Brown, D. (2003). Predictive lighting and perception in archaeological representation. UNESCO World Heritage in the Digital Age.

Ducke, B. (2012). Natives of a connected world: free and open source software in archaeology. World Archaeology 44:4 pp.571-579.

Hafer, L. and Kirkpatrick, A. E. (2009). Assessing Open Source Software as a Scholarly Contribution. Communication of the ACM Vol.52 No.12 pp.126-129.

Hayashi, T. (2012). Source Code Publishing on World Wide Web. International Conference on Advanced Information Networking and Applications Workshops pp.35-40.

Johnson, D. S. (2009). Testing Geometric Authenticity: Standards, Methods, and Criteria for Evaluating the Accuracy and Completeness of Archaeometric Computer Reconstructions. Visual Resources 25:4 pp.333-344.

Marwick, B. (2016). Computational Reproducibility in Archaeological Research: Basic Principles and a Case Study of Their Implementation. Journal of Archaeological Method and Theory pp.1-27.

Miller, P. and Richards, J. (1995). The Good, the Bad, and the Downright Misleading: Archaeological Adoption of Computer Visualisation. In: Huggett, J. and Ryan, N. Computer Applications and Quantitative Methods in Archaeology. Oxford: Tempus Reparatum pp.19-22.

Richards, J. D. (1998). Recent Trends in Computer Applications in Archaeology. Journal of Archaeological Research Vol.6 No.4 pp.331-382.

Part 2 – 3D Reconstruction Literature

In this section I would like to go through some projects I have been reading about that I think are very useful for understanding 3D Reconstruction in Archaeology. Before delving into the practicalities of the technology, it is important to assess where the field stands right now.

An image from Champion et al. (2012) showing the reconstructed city of Palenque.

If you look through the literature, 3D Reconstruction is often scarcely documented and results are limited. The three major points of critique I have encountered concern accuracy, the lack of a human element, and use. Here is a brief overview:

  • Accuracy: studies lack background information on how the model was achieved, and create the false idea that the reconstruction is absolutely certain, while often it is simply one of many interpretations.
  • Lack of human element: based on Thomas (2004a; 2004b) and Tilley (2004), 3D Reconstruction is seen as a purely visual subject, alienated from human experience.
  • Use: 3D models are used simply to present sites, and are seen as add-ons. In reality, they provide great scope for interpretation.

The first paper I would like to mention is “Digital reconstruction and visualisation in archaeology” by Dell’Unto et al. (2013). On the subject of accuracy in 3D Reconstruction, Dell’Unto et al. propose the use of a series of levels of reconstruction: by identifying and recording the sources for each portion of the model, it is possible to assess the relative accuracy of each part. The first level of reconstruction is based on in situ elements, which are nearly certain, while the last level is dedicated to purely hypothetical reconstructions. This is a great approach, as it means the modeller is accountable for the model while still having the freedom to experiment, since the recording makes clear what they have done.

Levels of reconstruction by Dell’Unto et al. (2013).

On the topic of human experience, I can mention a few papers that implicitly refute the ideas proposed by Tilley and Thomas. While 3D modelling is indeed an exceptionally visual subject, it is not simply about looking at images. An entire current of thought deals with ‘presence’, the feeling of ‘being there’ a person gets when exploring a 3D environment. It seems that people engage with the models to the point that they ‘experience’ them, as if they were present on site. So while Tilley and Thomas reject Visualisation because they prefer to explore sites in first person, I would argue that 3D Reconstruction allows you to do just that. An author I have come across who deals with ‘presence’ is Ch’ng (2009; Ch’ng and Stone 2006; Ch’ng et al. 2011), although I think the forerunner of this field is Chalmers (2002; Chalmers and DeBattista 2009; Devlin and Chalmers 2001; Devlin et al. 2003; Gutierrez et al. 2006). He has made exceptional steps in recreating archaeological sites with near-perfect realism, in order to increase the sense of ‘presence’ people experience. His work on illumination is unmatched, and his articles are well worth a read.

Different lighting effects as studied by Chalmers and DeBattista (2009).

Additionally, the work of Dawson and Levy (Levy and Dawson 2006; Dawson et al. 2007, 2011, 2013) is an exceptional testimony of how people respond to 3D environments. They recreated a Thule whalebone house and then invited community members to explore it, recording their reactions and showing their emotional attachment.

Dawson and Levy are also prime examples of 3D Reconstruction being used for interpretation, as their analysis of hut building showed that there was significant reuse of bone structures. Many others have explored the utility of this type of technology, so it is hard to pinpoint individual papers of significance. Champion et al. (2012) use gaming software to educate users on the archaeology of the city of Palenque. This is by far one of the best studies I have encountered, and I believe they are the forerunners of ‘serious games’ for archaeology.

Another view of the ‘serious game’ by Champion et al. (2012).

Similarly, the work of Forte et al. (2012) shows much promise in the same area. Finally, the work of Murgatroyd et al. (2011) is not strictly related to reconstruction, but the simulations they have run on Byzantine army movement are very important for understanding the reach of scripting, which can be combined with 3D reconstruction.

I hope this has provided you with an overview of the potential applications of 3D software. Over the next couple of weeks I aim to outline the reconstruction process, in order to open up the path to scripting.

 

REFERENCES:

Ch’ng, E. and Stone, R. J. (2006). 3D Archaeological Reconstruction and Visualisation: An Artificial Life Model for Determining Vegetation Dispersal Patterns in Ancient Landscapes. Proceedings of the International Conference on Computer Graphics, Imaging and Visualisation.

Ch’ng, E. (2009). Experimental archaeology: is virtual time travel possible? Journal of Cultural Heritage 10 pp.458-470.

Ch’ng, E., Chapman, H., Gaffney, V., Murgatroyd, P., Gaffney, C. and Neubauer, W. (2011). From sites to landscapes: how computer technology is shaping archaeological practice. IEEE Computer Society 11 pp.40-46.

Chalmers, A. (2002). Very realistic graphics for visualising archaeological site reconstruction. Proceedings of the 18th Spring Conference on Computer Graphics pp. 7-12.

Chalmers, A. and DeBattista, K. (2009). Level of realism for serious games. 2009 Conference in Games and Virtual Worlds for Serious Applications pp.225-232.

Champion, E., Bishop, I. and Dave, B. (2012). The Palenque project: evaluating interaction in an online virtual archaeology site. Virtual Reality 16 pp.121-139.

Dawson, P., Levy, R., Gardner, D. and Walls M. (2007). Simulating the Behaviour of Light inside Arctic Dwellings: Implications for Assessing the Role of Vision in Task Performance. World Archaeology Vol.39 No.1 pp.17-35.

Dawson, P., Levy, R. and Lyons, N. (2011). “Breaking the fourth wall”: 3D virtual worlds as tools for knowledge repatriation in archaeology. Journal of Social Archaeology 11(3) pp.387-402.

Dawson, T., Vermehren, A., Miller, A., Oliver, I. and Kennedy, S. (2013). Digitally enhanced community rescue archaeology. Proceedings of First International Congress on Digital Heritage pp.29-36.

Devlin, K. and Chalmers, A. (2001). Realistic visualisation of the Pompeii frescoes. Proceedings of the 1st International Conference on Computer Graphics, Virtual Reality and Visualisation pp.43-48.

Devlin, K., Chalmers, A. and Brown, D. (2003). Predictive lighting and perception in archaeological representation. UNESCO World Heritage in the Digital Age.

Dell’Unto, N., Leander, A. M., Ferdani, D., Dellepiane, M., Callieri, M., Lindgren, S. (2013). Digital reconstruction and visualisation in archaeology: case-study drawn from the work of the Swedish Pompeii Project. Digital Heritage International Congress pp.621-628.

Forte, M., Lercari, N., Onsurez, L., Issavi, J. and Prather, E. (2012). The Fort Ross Virtual Warehouse Project: A Serious Game for Research and Education. 18th International Conference on Virtual Systems and Multimedia pp.315-322.

Gutierrez, D., Sundstedt, V., Gomez, F. and Chalmers, A. (2006). Dust and light: predictive virtual archaeology. Journal of Cultural Heritage 8 pp.209-214.

Levy, R. and Dawson, P. (2006). Reconstructing a Thule whalebone house using 3D imaging. IEEE MultiMedia. Vol.13 No.2 pp.78-83.

Murgatroyd, P., Craenen, B., Theodoropoulos, G., Gaffney, V. and Haldon, J. (2011). Modelling medieval military logistics: an agent-based simulation of a Byzantine army on the march. Computational and Mathematical Organization Theory Vol.18 No.4 pp.488-506.

Thomas, J. (2004a). Archaeology and Modernity. London: Routledge.

Thomas, J. (2004b). The Great Dark Book: Archaeology, Experience, and Interpretation. In: Earle, T. and Pebbles, C. S. A Companion to Archaeology. Oxford: Blackwell Publishing pp.21-36.

Tilley, C. (2004). The materiality of stone: exploration in landscape phenomenology. Oxford: Berg.

Part 1 – Visualisation


“Visualisation” is a term used quite frequently in recent archaeological publications. It refers to the reconstruction of archaeological evidence through the use of computer software, although it originates in the practice of recording sites using 2D drawings, which has been around for a few centuries. Although the meaning of the word and what it entails fluctuates somewhat, I’ve come to identify three types of technologies that fall within this category:

  • Photogrammetry
  • Laser Scanning
  • 3D Reconstruction

Photogrammetry is also referred to as Structure from Motion, and differs from the other methods in that the 3D models are based on photographs (Pedersini et al. 2000). As with laser scanning, the result is a high-density point cloud with photorealistic textures.

Laser scanning is a powerful tool widely used in large-scale recording. It uses laser measurements to calculate the positions of points on a site, and like Photogrammetry it produces a textured mesh, although laser-scanned models are generally much denser and therefore more accurate.

3D reconstruction is the method we will be primarily dealing with. It is less accurate than Photogrammetry and laser scanning, and the results are less realistic. It does however possess some distinct advantages. Reconstructed models are easily manipulated, and often represent elements of a site that have been lost (Fig.1). They can also be combined with gaming software to create interactive environments (I could cite many authors here, but just as a taster I would recommend reading Champion et al. 2012).

Fig.1 – 3D Reconstruction of the site of Ggantija, Gozo.

The three methods have very different aims, and as such it is important to know what you want to achieve before applying them:

  • For small and medium scale recording, Photogrammetry is excellent (Scopigno 2012). It is very cheap and fast, and possesses the accuracy and visual quality necessary for recording and presenting. It is ideal for cataloguing finds or small-scale excavations, although it can be used for larger features if necessary (see the current Must Farm excavation models by Donald Horne for more details: https://sketchfab.com/mustfarm). Because it produces fewer points than laser scanning, the models are easy to manage, and it requires little training.
  • For detailed models and large sites, Laser Scanning is the tool of choice. More expensive and computationally demanding than Photogrammetry, laser scanning creates precise models that are perfect for recording, presenting and some interpretation. A great example is the work of John Meneele (https://www.facebook.com/1manscan/), who has been analysing stone decay by comparing models taken in different years. I personally have little experience with laser scanning, but the results I have seen show a lot of promise.
  • 3D Reconstruction is mainly for presentation and interpretation. Although some arguments have been put forward on using 3D reconstructions for metadata recording, this is not where the technology shines (Reilly 1990; Barreau et al. 2013). 3D reconstructions can show a site “as it was” rather than “as it is”, leading to a more vivid understanding of archaeological contexts (Miller and Richards 1995; Lewin and Gross 1996). For the general public it is perfect, and it can be made highly interactive in order to further increase user comprehension of the archaeology. As for interpretation, the use of scripting allows a number of tools to be created in order to answer archaeological questions. One of the projects I have been working on was looking at analysing solar alignment at a Maltese site, and through a custom-written script I concluded the site is illuminated on the winter solstice (Fig.2). While Photogrammetry and Laser scanning shine with precision and photorealism, 3D Reconstruction truly dominates in presentation and interpretation.
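As a heavily simplified illustration of the kind of calculation behind such a solar-alignment script (not the actual code I used), the sunrise azimuth on the winter solstice can be estimated from latitude and solar declination alone, ignoring atmospheric refraction and horizon altitude. The doorway orientation below is a made-up example:

```python
import math

def sunrise_azimuth(latitude_deg, declination_deg):
    """Approximate sunrise azimuth in degrees east of north,
    ignoring refraction and horizon altitude."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    # Standard spherical-astronomy relation: cos(A) = sin(dec) / cos(lat)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

# Winter solstice: solar declination is about -23.44 degrees.
# Latitude ~35.9 N is roughly that of the Maltese islands.
azimuth = sunrise_azimuth(35.9, -23.44)
print(round(azimuth, 1))  # roughly 119 degrees, well south of east

# Hypothetical doorway orientation: if it faces within a few degrees
# of the solstice sunrise, the alignment is at least plausible.
doorway_azimuth = 120.0
print(abs(doorway_azimuth - azimuth) < 5.0)  # True
```

A real script would also test whether light actually reaches the interior, which is where the 3D geometry of the reconstruction, rather than a formula, does the work.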

 

Fig.2 – Overview of the script interface.

It is important to mention, however, that these methodologies are not mutually exclusive. Little work has been done on combining different techniques, but the results show much promise. A previous article on this blog discussed virtual museums, combining a 3D reconstructed environment with Photogrammetric models placed within it.

In conclusion, there is a lot of technology out there, and although research is slowly unveiling the advantages of each, there is still much to be discovered. With 3D Reconstruction we are barely scratching the surface: only in the last 10 years have we had custom-written scripts for archaeology. It may take a while, but once we uncover what is possible, archaeology will really reap the benefits.

 

In the next post I will be looking at previous work in 3D reconstruction, with a few examples of significant projects that have helped shape the methodology.

 

REFERENCES:

Barreau, J., Gaugne, R., Bernard, Y., Le Cloiree, G. and Gouranton, V. (2013). The West Digital Conservatory of Archaeological Heritage Project. Digital Heritage International Congress. Vol.1 pp.547-554.

Champion, E., Bishop, I. and Dave, B. (2012). The Palenque project: evaluating interaction in an online virtual archaeology site. Virtual Reality 16 pp.121-139.

Lewin, J. S. and Gross, M. D. (1996). Resolving Archaeological Site Data With 3D Computer Modeling: The Case of Ceren. Acadia pp. 255-266.

Miller, P. and Richards, J. (1995). The Good, the Bad, and the Downright Misleading: Archaeological Adoption of Computer Visualisation. In: Huggett, J. and Ryan, N. Computer Applications and Quantitative Methods in Archaeology. Oxford: Tempus Reparatum pp.19-22.

Pedersini, F., Sarti, A. and Tubaro, S. (2000). Automatic monitoring and 3d reconstruction applied to cultural heritage. Journal of Cultural Heritage 1 pp.301-313.

Reilly, P. (1990). Towards a virtual archaeology. In: Lockyear, K. and Rahtz, S. Computer Applications in Archaeology pp.133-139.

Scopigno, R. (2012). Sampled 3D models for Cultural Heritage: which uses beyond visualisation? Virtual Archaeology Review Vol.3 No.5 pp.109-115.

Blog Reopened

Dear all,

 

It has been a long long time since I last posted any new articles. It is safe to say that this blog fell into neglect and that I have been less than diligent in my duties as an open archaeology advocate.

The reasons for this have been many. On the one hand, I was overtaken by my job as a commercial archaeologist. Despite it being an exceptionally useful experience, the skills needed in the field were substantially different from those acquired at university, and as such my research into 3D modelling was put on hold. On the other hand, I also grew somewhat uninterested in Photogrammetry, which I guess is a bold statement for a blog called Archphotogrammetry.

I shall explain. I truly believe Photogrammetry is a fantastic tool. It is incredibly useful for recording features and recent work has shown much promise in interpretation of archaeological contexts. It also possesses the advantage of photorealism, which is an exceptionally practical advantage when it comes to presentation. My main critiques of Photogrammetry are inconsistency and lack of flexibility.

For anyone using Photogrammetry, you will surely have realised by now that even with the most rigorous methods there are occasions in which the models just won’t work. Generally it is possible to fix the problems by altering the settings or running alternative programs, but this is time consuming, and there is no guarantee of results. Additionally, I have found that the models often look complete, but on closer inspection they have “fuzzy” or incongruous parts. For most uses this is not a big deal, but I find it frustrating when a model has an error I am aware of but cannot remove.

Secondly, the nature of Photogrammetry makes the models hard to manipulate. The models are formed of dense clouds of points, with carefully placed textures. Manipulating a single model is exceptionally difficult: if you were to fill a hole, for example, it is difficult to then replicate a consistent texture over the spot. There is also the whole issue of scales and of capturing the entirety of the object. Essentially, the model itself is usually the final stage of the reconstruction process. There are of course exceptions, and I would point you to the following authors for more: Gabellone 2009; Itskovich and Tal 2010; Badiu, Buna and Comes 2015.

The reason I am speaking so harshly against Photogrammetry is to explain the scenario I find myself in. I feel I have learned a lot about Photogrammetry over the past years, but I also feel there is more to be gained from Visualisation. For this reason, in recent years and for my current MPhil project, I have been turning towards another methodology, which in reality is a much wider and unexplored field: 3D Reconstruction and Scripting.

I have essentially been using a combination of architectural and graphic design software to create models of the site (not photorealistic) and gaming software and coding to create a set of tools that can be used in archaeology.

As I find myself near the end of my MPhil, I figured I would restart this blog to share what I have learnt and the advantages I have found in using 3D Reconstruction. I am therefore going to start a series on this topic, which will be more structured, but still accessible. If this is successful, I will then return to wider topics in 3D modelling, and Photogrammetry in general.

I hope therefore that you will start following this blog again, and that you find the topic at hand interesting and stimulating. Please feel free to comment and let me know what you think.

 

Thanks for sticking with me,

 

Rob Barratt

Agisoft Photoscan

If you have been following this blog for some time, you will know that when it comes to Photogrammetric reconstructions I have always been a strong supporter of 123D Catch by Autodesk. I find that it is by far the easiest program to use, yet the results are still amazing.

Recently they have released a paid Pro version that provides all the same results but allows the program to be used commercially. I think that is utterly brilliant: it doesn’t halt research, but it still allows revenue for the developers if the user is making money from the software.

Having said all this, I’ve started investigating new software, to see if there is anything that can bring improvement to what I already have with 123D Catch. Up to now, the best solution I have found is Agisoft Photoscan, which I had already used in the past but not to its full extent.


Previously, I only managed to create low quality models, due to an error in the program, but I have now managed to make really good models of both objects and features. As such, here are the pros and cons of Photoscan compared to 123D Catch:

PROS

  • Quality: This is the best starting point. With medium and high settings the quality can be really good, and generally you can get many more points than 123D Catch. With lower settings however the results do fluctuate.
  • Control over process: If you are looking into more complex Photogrammetry, this point is very important. In 123D Catch you upload the images and get the result. That is it. In Photoscan you can go through all the stages (photo alignment, sparse cloud, dense cloud, mesh, texture) and change the settings to improve the finished product. You can even export the points as they are and alter them using other software. It allows much more flexibility.
  • More tools: You can select points, create masks, remove points based on certain parameters and more. Often these are not needed, but on occasion they can be just what you require.
  • Many photos: 123D Catch struggles when you upload more than 50 photographs, and the results suffer. In Photoscan this issue doesn’t exist, and you can easily make models with hundreds of images. This is perfect for making large scale models, as the more images you have, the more accurate it is.
  • No internet required: Often you find yourself in situations in which the internet is poor or totally non-existent. Photoscan doesn’t need a connection to an external server to process the model, so it works even when the computer is offline.

CONS

  • Paid software: Although it is well priced as software goes, for people doing non-commercial work it is always difficult to keep up with the expense of programs.
  • Memory: 123D Catch uses an external server, which means it does not use your own computing power. Photoscan does: if you are trying to get high-quality models it will put a serious strain on your computer, and it will often crash due to insufficient memory.
  • Time: I can get a model done in 123D Catch in 5 minutes if the internet is good; the worst I have ever had is probably an hour. With Photoscan I have had to wait entire days for models to complete, and sometimes many hours only to get an insufficient-memory message.
  • Simplicity: If you are just starting out with Photogrammetry, 123D Catch is still the easiest of all the programs I have used.
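To put the memory problem into numbers, here is a rough back-of-envelope sketch. The 32 bytes per point (position, colour, normal) is my own assumption, not a figure from Agisoft, and the real usage is far higher once depth maps and meshing are factored in:

```python
def dense_cloud_memory_gb(n_points, bytes_per_point=32):
    """Rough estimate of the RAM a dense point cloud needs just to sit
    in memory. bytes_per_point is an assumption (position + colour +
    normal), not a PhotoScan figure; intermediate depth maps and the
    meshing stage push real usage far beyond this."""
    return n_points * bytes_per_point / 1024 ** 3

# A high-quality run can easily produce a cloud of 100 million points:
print(round(dense_cloud_memory_gb(100_000_000), 1))  # ≈ 3.0 GB, before meshing
```

Even as a lower bound, that makes it easy to see why a laptop that handles 123D Catch happily can run out of memory partway through a Photoscan job.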

Overall, I think I would use 123D Catch for small-scale and quick jobs, while Photoscan would be useful for large-scale work, or when I am researching something specific and want more control over the results.
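To give a sense of what “control over the process” means in practice, Photoscan also exposes its stages through a built-in Python console. The sketch below is from memory of the 1.x API and only runs inside Photoscan itself, so treat it as pseudocode and check the names against your version’s reference; the filenames are hypothetical:

```python
# Pseudocode sketch of Photoscan's scripted pipeline; API names are from
# memory of the 1.x documentation and may differ in your version.
import PhotoScan  # only available inside Photoscan's own interpreter

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(["photo_001.jpg", "photo_002.jpg"])  # hypothetical files

# Every stage that 123D Catch hides is an explicit, tweakable call here:
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)      # feature matching
chunk.alignPhotos()                                     # cameras + sparse cloud
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)  # dense cloud
chunk.buildModel(surface=PhotoScan.Arbitrary)           # mesh
chunk.buildTexture()                                    # texture
doc.save("pit_model.psz")
```

The point is not the exact calls but that each stage can be re-run with different settings, or its output exported for other software, which is exactly the flexibility 123D Catch lacks.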


Photogrammetric Recording of Subvertical Pits


Up to now in my blog I have been trying to outline the uses of Photogrammetry in the two main areas of archaeology: recording and interpretation. Some posts were specifically about preserving as much data as possible about an archaeological feature or object by creating a virtual copy of it. Others were concerned with what can be done once that model has been made, to further our understanding.

This post is mostly about recording a specific type of feature, but it also opens up some possibilities for interpreting such features.

On some occasions during archaeological excavations we stumble across pits that present particular difficulty when planning. The issue is that the sides of the pit are not gradual or even vertical: they undercut the edge, giving the pit a bulging shape. During an excavation at Ham Hill, Somerset, one of the pits had this shape because of its use. It was probably used to store grain, and the smaller opening at the top would have meant better preservation.


The plans drawn of the pit were excellent, but even so it is difficult to convey the true shape of the feature using only 2D resources. I therefore created a model of it, taking photographs as I normally would for a regular feature, with the addition of a few more from within the feature itself, taken by lowering the camera into it. The results are as follows:

 

Not only can you view the feature from the top, but it is even possible to see it from the sides and rotate it that way, making it perfectly clear what the feature looked like, even now that it is gone.


In addition, the bulge is much clearer, and it is easier to draw conclusions about the pit’s use. As a permanent record it is excellent: not only do we not lose any information, but we can also gain more than we could see when limited to the simple top view.

It also opens up new possibilities. As yet I have not experimented much with Maya 3D, but I have had a brief overview of how the program works and what it is capable of. One of my colleagues once showed me how to reconstruct a pot from its profile, and then calculated the capacity of the finished pot. In theory it should be possible to import the finished model of the pit into Maya and use the same approach to calculate how much grain the pit could have held at any one time, which could help us understand the population density of the area, as well as a lot of other interesting questions.
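I haven’t tried the Maya route yet, but the underlying calculation is simple enough to sketch: measure the pit’s radius at a series of depths on the model’s section, then treat the pit as a solid of revolution. The function name and the example profile figures below are hypothetical, purely for illustration:

```python
import math

def pit_volume(profile):
    """Approximate the capacity of a roughly circular pit as a solid of
    revolution. `profile` is a list of (depth, radius) pairs in metres,
    read off the model's vertical section from mouth to base. Applies
    the trapezoidal rule to the disc areas pi * r**2."""
    volume = 0.0
    for (d0, r0), (d1, r1) in zip(profile, profile[1:]):
        area0 = math.pi * r0 ** 2
        area1 = math.pi * r1 ** 2
        volume += 0.5 * (area0 + area1) * (d1 - d0)
    return volume

# Made-up profile of a bulging ("beehive") storage pit: narrow mouth,
# wide belly, slightly narrower base. Figures are illustrative only.
profile = [(0.0, 0.3), (0.3, 0.6), (0.8, 0.9), (1.3, 0.85), (1.5, 0.8)]
print(round(pit_volume(profile), 2))  # capacity in cubic metres
```

With a volume in cubic metres and an estimate of grain density, converting capacity into stored tonnage is then a single multiplication.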

And the technology doesn’t stop here. This may be a very specific example, but the same ideas can be applied to many different kinds of features. Features with unusual bases can easily be recorded by modelling them, and stone structures can be copied perfectly in digital form rather than only drawn by hand. There are still a lot of applications to discover.


Bigger And Better: Photogrammetry Of Buildings


Photogrammetry is definitely the “new” thing in archaeology, slowly clawing its way into how we treat and record archaeological sites. As far as its potential goes, though, there is still a lot of research to be done, both to assess the uses of this technology and to find the limits we will have to deal with on a day-to-day basis.

One of the aspects that has always interested me is the question of scale. We’ve seen before that Photogrammetry deals well with anything ranging from an arrowhead to a small trench (4×6 m is the maximum I have managed so far, but there is much larger stuff out there). Smaller objects are hard to do cheaply, as the right lenses become essential, but larger subjects should be entirely possible with the right considerations in mind.

123D Catch’s website shows quite a few examples of Photogrammetric reconstruction of buildings, so I tried reconstructing my own, taking as a subject the front façade of Saint Paul’s Cathedral in London. Given that this was mostly for experimental purposes, I just attempted a few things and went through the results looking for patterns and suggestions.

The results can be viewed here: 

Saint Paul Cathedral (click to view in 3D)

 


As we can see, the results are of course not marvellous, but I am less interested in the results than in the actual process. I took 31 photographs of the building, standing in a single spot, taking as many pictures as necessary to include all parts of the façade, and then moving to a slightly different angle. I tried to get as much of the building as possible into a single shot, but the size of the building and the short distance from which I was shooting meant that in some cases I had to take more than one.
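Out of curiosity, the geometry behind “how many shots does a sweep need?” can be sketched quite simply. The field-of-view and overlap figures below are assumptions for illustration, not measurements from my camera or requirements of 123D Catch:

```python
import math

def shots_for_facade(width_m, distance_m, hfov_deg, overlap=0.6):
    """Rough count of photographs needed to sweep a flat facade from a
    fixed distance: each frame covers 2 * d * tan(fov / 2) metres of
    wall, and consecutive frames should share `overlap` of that
    footprint so the software can match features between them."""
    footprint = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    if footprint >= width_m:
        return 1  # one frame already covers the whole width
    step = footprint * (1 - overlap)
    return math.ceil((width_m - footprint) / step) + 1

# Hypothetical numbers: a 50 m wide facade shot from 25 m away with a
# 65-degree lens and 60% overlap between neighbouring frames.
print(shots_for_facade(50, 25, 65))  # shots per sweep, per viewing angle
```

Multiply that per-sweep figure by the number of different angles you shoot from and the total climbs quickly, which matches my experience of needing 31 photographs for a single façade.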


The lighting was of course not something I could control, but the fact that it was late afternoon meant that the scene was bright enough to be visible, yet not so bright as to wash out the textures and cause problems with contrast. I then used 123D Catch to process the photographs, and to my surprise all of them were included in the final mesh.

What surprises me most is that, given the photographs I had, the results are actually as good as my most hopeful prediction. There is a clear issue with the tops of surfaces, e.g. the tops of ledges, which come out as slopes. This is entirely expected: Photogrammetry usually works with images taken from different heights, while in this case the height couldn’t change. It could, however, be solved by taking images from the surrounding buildings.

The other major problem is the columns, which are in no way column-shaped, and which seem to mess up the structure behind them as well. Given the complexity of the structure this is also expected. 123D Catch imagines the mesh as a single surface and tries to simplify whatever it finds as much as possible. The columns are not flat, so the solution 123D Catch has come up with, given the limited data, is to bring the murky background forward and treat it as if it were the space between the columns. This comes down to a simple lack of data. Next time the solution will be to concentrate on these trouble areas and take more numerous, carefully planned shots to aid 123D Catch. It requires more work and some careful assessment of the situation, but it is entirely achievable.

Apart from these problems, the results are very promising. More work needs to be carried out, but they show that it is possible to reconstruct structures of a certain size, pushing the limits of Photogrammetry once again.


Recreating Tower Of London Graffiti Using Photogrammetry

Last weekend I visited the Tower of London, which gave me a great opportunity to try out some of the Photogrammetry ideas I have had in the past few weeks.


Apart from testing 123D Catch on large monuments and entire palace façades, I decided (thanks to a suggestion from Helene Murphy) to try modelling some of the graffiti made by the prisoners held there. The main aim was to see if I could create models using photographs from dimly lit rooms, but also to see how the software would react to glass, as the inscriptions were covered by panes to protect them. I also wanted to retest the Radiance Scaling tool in Meshlab, to increase the crispness of the model and see whether it improved its accuracy.

I concentrated on three different pieces of graffiti, which can be viewed here (embedded thanks to the suggestion of alban):

 

Tower Of London – Graffiti 1 (click to view in 3D)


 

Tower Of London – Graffiti 2 (click to view in 3D)


 

Tower Of London – Graffiti 3 (click to view in 3D)


The models turned out better than expected. The glass is entirely invisible: it required some planning when taking the photos, but caused no problems in the modelling. This is particularly good, as it means it should be possible to apply the same concept to artefacts displayed behind glass in museums. The lighting conditions didn’t cause any of the issues that might have been expected, showing once more the versatility of Photogrammetry.

Running the Radiance Scaling shader in Meshlab also returned really interesting results. In all cases the models become much more aesthetically pleasing, while at the same time the curves and dips are emphasised, increasing the crispness of the model. Although this seems to me to be a forced emphasis that could reduce accuracy, the results at the moment suggest it may in some ways increase it instead. This, however, needs to be explored further.
