Part 4 – SketchUp Basics

In the last sections we looked at some of the theory behind 3D Reconstruction. Hopefully this has laid the foundations for the discussion of site presentation and interpretation that we will return to in later sections. It is, however, time to start talking about one of the most commonly used programs for reconstruction: Google SketchUp.


Fig.1 – An overview of Google SketchUp.

The first question is: why SketchUp? Of all the software available, I think it is the easiest to learn. Programs such as 3ds Max have many advantages that set them apart, yet for all its limitations SketchUp is perfect for those who have little experience with modelling software. The range of tools is enough to create fairly accurate models, and combining it with V-Ray produces good-quality 2D renders. Additionally, the interface is intuitive. The only elements I would criticise are rounded surfaces and textures, which are much harder to work with and may require additional software such as Photoshop.

For navigation, Google SketchUp uses a combination of panning, orbiting and zooming tools. These move you around the scene without interfering with the model itself. Similarly, the move, rotate and scale tools allow you to manipulate an object within the scene. The main feature, though, is the push/pull tool, which allows simple surfaces to become three-dimensional elements.
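Conceptually, the move, rotate and scale tools each apply a simple transformation to an object's vertices. The following Python sketch (purely illustrative, not SketchUp's own API, which is Ruby-based) shows the idea in two dimensions:

```python
import math

# Conceptual sketch: each manipulation tool is a transform over vertices.
def move(points, dx, dy):
    """Translate every vertex by (dx, dy) -- the 'move' tool."""
    return [(x + dx, y + dy) for x, y in points]

def rotate(points, angle_deg):
    """Rotate vertices around the origin -- the 'rotate' tool."""
    a = math.radians(angle_deg)
    return [(x * math.cos(a) - y * math.sin(a),
             x * math.sin(a) + y * math.cos(a)) for x, y in points]

def scale(points, factor):
    """Scale vertices relative to the origin -- the 'scale' tool."""
    return [(x * factor, y * factor) for x, y in points]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(scale(move(square, 2, 0), 2.0))  # moved, then doubled in size
```

In SketchUp you apply these interactively with the mouse, but the underlying geometry changes in exactly this way.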

When modelling a site, I tend to start from a plan; it is much easier to work from the ground surface upwards. You can easily drag and drop a plan into the scene. If you want the model to be to scale, the plan itself can be adjusted using the measuring and scaling tools. Once the plan is in place it is possible to trace over all the major features, to start creating a floor plan of the site. The ‘pencil’ can be used to trace over most lines, but the rectangle, circle and arc tools are also essential for some portions of the plans. Curved surfaces cause some issues with the software, so I tend to use smaller straight lines whenever possible.


Fig.2 – A very simple pencil drawing over a two-dimensional plan of a stone circle.
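The advice above about replacing curves with short straight lines can be illustrated numerically. This hypothetical Python snippet approximates an arc as a polyline of straight segments; the more segments, the closer the fit:

```python
import math

# Hedged sketch: trace an arc as short straight segments, as suggested
# for curved features that SketchUp handles poorly.
def arc_points(cx, cy, radius, start_deg, end_deg, segments):
    """Return the endpoints of `segments` straight lines tracing the arc."""
    pts = []
    for i in range(segments + 1):
        a = math.radians(start_deg + (end_deg - start_deg) * i / segments)
        pts.append((cx + radius * math.cos(a), cy + radius * math.sin(a)))
    return pts

# A quarter circle of radius 5, traced with 8 straight segments:
outline = arc_points(0, 0, 5, 0, 90, 8)
print(len(outline))  # 9 endpoints define 8 segments
```

Eight segments are already visually close to a smooth quarter circle at this scale, and the straight edges push/pull far more reliably than true curves.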

Once the plan is sketched out, it is time to push and pull surfaces. With any luck the surfaces will have closed automatically, as long as every line finishes at the same point it started. In some cases the line tends to snap to an axis, so beware if it changes colour before you click. The push/pull tool lets you extend the selected surface outwards or inwards, and it can be used to give objects height or depth. In the bottom corner SketchUp displays the distance the surface is being extended, so it is possible to work to precise measurements. Most of the reconstruction process involves drawing surfaces upon other surfaces and pushing/pulling them to create the details of the model.


Fig.3 – One of the stones after using the push/pull tool.
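As a rough illustration of what push/pull does to the geometry, here is a minimal Python sketch (purely conceptual, not SketchUp code) that extrudes a closed 2D face by a given distance into a prism:

```python
# Conceptual sketch of push/pull: extruding a closed 2D face produces a
# bottom face, a lifted top face, and one rectangular wall per edge.
def push_pull(face, distance):
    """face: list of (x, y) vertices of a closed surface."""
    bottom = [(x, y, 0.0) for x, y in face]
    top = [(x, y, distance) for x, y in face]
    n = len(face)
    sides = [(bottom[i], bottom[(i + 1) % n],
              top[(i + 1) % n], top[i]) for i in range(n)]
    return bottom, top, sides

bottom, top, sides = push_pull([(0, 0), (2, 0), (2, 1), (0, 1)], 3.0)
print(top)         # every vertex lifted to height 3.0
print(len(sides))  # one side wall per edge of the face
```

The distance argument corresponds to the value SketchUp shows in the bottom corner while you drag, which is why precise heights are easy to enter.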

One of the functions I find most useful, however, is the ‘component’ tool, which divides different elements into compact objects that don’t interfere with one another. Pulling one surface over another can often cause the two to become conjoined, making it difficult to modify that specific part after the initial modelling. By selecting a single entity in the model (for example, a table) and making it into a component you can then move it freely without worrying about the rest of the model, as well as copy it without having to remodel it. Ideally, every distinct part of the reconstruction should be its own component, part of a hierarchy: larger components are themselves composed of smaller components, so it is possible to operate at many different levels. Using components efficiently also simplifies the transition between SketchUp and Unity3D. Bear in mind, however, that copied components maintain a connection with the original, and any change applied to a single ‘instance’ of a component will also be applied to all the other copies.


Fig.4 – A rough component of a megalithic stone, made using lines and push/pulling.
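The relationship between a component and its copies can be sketched in code. In this hypothetical Python example (the class names are illustrative, not SketchUp's API), many placed instances share one definition, so a change to the definition shows up in every copy at once:

```python
# Illustrative sketch of the component idea: instances share a definition.
class ComponentDefinition:
    def __init__(self, name, geometry):
        self.name = name
        self.geometry = geometry      # one geometry list, shared by all copies

class ComponentInstance:
    def __init__(self, definition, position):
        self.definition = definition  # reference to the shared definition
        self.position = position      # each copy is placed independently

# Five copies of one megalith definition, placed in a row:
stone = ComponentDefinition("megalith", ["face1", "face2"])
circle = [ComponentInstance(stone, (i, 0)) for i in range(5)]

stone.geometry.append("face3")        # edit the definition once...
print(all(len(inst.definition.geometry) == 3 for inst in circle))  # True
```

This shared-definition behaviour is exactly why editing one ‘instance’ in SketchUp propagates to every other copy, and why it is so cheap to scatter many identical stones around a model.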

In the next part I will continue the discussion regarding Google SketchUp modelling to incorporate detail building, rendering and materials.

Blog Reopened

Dear all,


It has been a long, long time since I last posted any new articles. It is safe to say that this blog fell into neglect, and that I have been less than diligent in my duties as an open archaeology advocate.

The reasons for this have been many. On the one hand, I was overtaken by my job as a commercial archaeologist. Although it was an exceptionally useful experience, the skills needed in the field were substantially different from those acquired at university, and as such my research into 3D modelling was put on hold. On the other hand, I also grew somewhat uninterested in Photogrammetry, which I suppose is a bold statement for a blog called Archphotogrammetry.

I shall explain. I truly believe Photogrammetry is a fantastic tool. It is incredibly useful for recording features, and recent work has shown much promise in the interpretation of archaeological contexts. It also possesses the advantage of photorealism, which is exceptionally practical when it comes to presentation. My main criticisms of Photogrammetry are its inconsistency and lack of flexibility.

Anyone using Photogrammetry will surely have realised by now that even with the most rigorous methods there are occasions when the models just won’t work. It is generally possible to fix the problems by altering the settings or running alternative programs, but this is time consuming, and there is no guarantee of results. Additionally, I have often found that models look complete, but on closer inspection have “fuzzy” or incongruous parts. For most uses this is not a big deal, but I find it frustrating when a model has an error that I am aware of but cannot remove.

Secondly, the nature of Photogrammetry makes the models hard to manipulate. The models are formed of dense clouds of points, with carefully placed textures, and manipulating such a model is exceptionally difficult. If you were to fill a hole, for example, it is difficult to then replicate a consistent texture over the spot. There is also a whole set of issues with scale and with capturing the entirety of the object. Essentially, the model itself is usually the final stage of the reconstruction process. There are of course exceptions, and I would point you to the following authors for more: Gabellone 2009; Itskovich and Tal 2010; Badiu, Buna and Comes 2015.

The reason I am speaking so harshly against Photogrammetry is to explain the scenario I find myself in. I feel I have learned a lot about Photogrammetry over the past years, but I also feel there is more to be gained from visualisation. For this reason, in recent years and for my current MPhil project, I have been turning towards another methodology, which in reality is a much wider and largely unexplored field: 3D Reconstruction and Scripting.

I have essentially been using a combination of architectural and graphic design software to create models of sites (not photorealistic), and gaming software and coding to create a set of tools for use in archaeology.

As I find myself near the end of my MPhil, I figured I would restart this blog to share what I have learnt and the advantages I have found in using 3D Reconstruction. I am therefore going to start a series on this topic, which will be more structured, but still accessible. If this is successful, I will then return to wider topics in 3D modelling and Photogrammetry in general.

I hope therefore that you will start following this blog again, and that you find the topic at hand interesting and stimulating. Please feel free to comment and let me know what you think.


Thanks for sticking with me,


Rob Barratt

Agisoft Photoscan

If you have been following this blog for some time, you will know that when it comes to Photogrammetric reconstructions I have always been a strong supporter of 123D Catch by Autodesk. I find that it is by far the easiest program to use, yet the results are still amazing.

Recently Autodesk released a paid Pro version that provides all the same results but allows the program to be used commercially. I think that is utterly brilliant, as it means research isn’t halted, while still providing revenue for the developers when users make money from the software.

Having said all this, I’ve started investigating new software, to see if anything can improve on what I already have with 123D Catch. So far, the best solution I have found is Agisoft Photoscan, which I had used in the past but not to its full extent.


Previously I only managed to create low-quality models, due to an error in the program, but I have now managed to make really good models of both objects and features. Here, then, are the pros and cons of Photoscan compared to 123D Catch:


Pros:

  • Quality: This is the best starting point. With medium and high settings the quality can be really good, and you can generally get many more points than in 123D Catch. With lower settings, however, the results do fluctuate.
  • Control over the process: If you are looking into more complex Photogrammetry this point is very important. In 123D Catch you upload the images and get the result; that is it. In Photoscan you can go through all the stages (photo alignment, sparse cloud, dense cloud, mesh, texture) and change the settings to improve the finished product. You can even export the points as they are and alter them in other software. It allows much more flexibility.
  • More tools: You can select points, create masks, remove points based on certain parameters and more. Often these are not needed, but on occasion they can be just what you require.
  • Many photos: 123D Catch struggles when you upload more than 50 photographs, and the results suffer. In Photoscan this issue doesn’t exist, and you can easily make models with hundreds of images. This is perfect for making large-scale models, as the more images you have, the more accurate the result.
  • No internet required: You often find yourself in situations in which the internet is poor or totally non-existent. Photoscan doesn’t need a connection to an external server to process the model, so it still works even when the computer is offline.


Cons:

  • Paid software: Although it is well priced as software goes, for people doing non-commercial work it is always difficult to keep up with the expense of programs.
  • Memory: 123D Catch uses an external server, which means it does not use your own computing power. If you are trying to get high quality models, Photoscan will put a serious strain on your computer, and often it will even crash due to insufficient memory.
  • Time: I can get a model done in 123D Catch in 5 minutes if the internet is good. The worst I have ever had is probably an hour. With Photoscan I have had to wait actual days for models to be complete, and sometimes I waited many hours before getting an insufficient memory message.
  • Simplicity: If you are just starting out with Photogrammetry 123D Catch is still the easiest of all programs I have used.

Overall, I think I would use 123D Catch for small-scale and quick jobs, while Photoscan would be useful for large-scale work, or when I am researching something specific and want more control over the results.