Initial Uses of 3D Printing in Archaeology


3D printing is the new big thing, no doubt about it. There is enormous potential to be unleashed with this technology, and we are finally breaking through the last barrier that keeps us from 3D printing every day: cost. I wrote an article on the subject a month ago, and already the price of a basic 3D printer has halved, from £1,500 to £700, and it is bound to drop even further once the key patents expire, which should happen next year. Soon every household will be able to print out designs of any object they desire, downloaded from the internet, and with scientists at work on printing food and much else besides, the possibilities are endless.

Given this boom in interest and popularity, and the level of detail of which 3D printers are capable, it would be foolish to think that the archaeological world can avoid being swept in. From exact replicas of artefacts to miniature sites for display, we are soon going to be treated to new ideas in archaeology.


Some of these ideas have already started producing some results, and one of the most interesting articles I have found is this one:

I won’t go into detail on the background, as you can read the article yourself, but the main story is that a group of archaeologists has managed to 3D print the skeletons of mummies from X-ray images, leaving the actual bones undisturbed within their bandages.


The real thing to notice here is the beautiful detail achieved by the archaeologists involved. The skeletons are perfectly replicated, leaving little to interpretation and preserving something that could easily be damaged if unravelled. I’m assuming the best approach in this case would be to obtain the 3D model from a CT scan rather than from a series of simple X-rays, as the latter would be too inconsistent to work with. This does, however, create the problem of getting access to that kind of equipment for archaeological use.


This experiment, however, is important for one main reason: it is something we could not do before. Often with new technology the problem is that people see it as technology for technology’s sake, something without an actual practical use that we do simply because we can. Recreating the skeletons of mummies without damaging the actual bones relies entirely on 3D printing; there is no traditional approach that could achieve it. It shows that the potential is there and that it can bring real innovation.

Working on the Sphinx in 123D Catch

The other day I was talking with one of the guys at work, and while chatting about photogrammetry an idea came to mind. There are millions of photographs online taken by tourists visiting archaeological sites, and famous monuments like the Parthenon or the Pyramids are surely photographed many times a day. There is therefore an enormous bank of images of archaeology available at our fingertips, and if it is possible to reconstruct features and structures from as few as 20 images, surely there must be a way to reconstruct these monuments through photogrammetry without even needing to be there.

As a result of this pondering I decided to test the idea by choosing a structure that would be simple enough to reconstruct, yet famous enough that plenty of images could be found. A few months ago I had started working on a reconstruction of Stonehenge from a series of photographs found online, with scarce (yet useful) results. This time I decided to try something solid, like the Great Sphinx.

I then proceeded to find all the images I could, collecting approximately 200 of them from all angles. Surprisingly, the right side of the monument had far fewer pictures than the rest of it. I then ran the photos in different combinations in an attempt to reconstruct as accurate a model as possible.

The following is the closest result I have had yet:


Agreed, much more work is needed. But honestly I am surprised that I managed to get this far! The level of detail is quite impressive in places, and if half of it has worked, the other half should also be possible. The trick is finding the right combination of images.

So far I have tried:

  • Running only the high quality images.
  • Running only the images with camera details.
  • Running 20 photos selected based on the angles.
  • Creating the face first and then attaching the rest of the body.
  • Running only images with no camera details (to see if any results appeared, which they did).
  • Increasing the contrast in the images.
  • Sharpening the images.
  • Manually stitching the remaining images in.
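The contrast and sharpening steps above are easy to script so the whole photo set can be treated consistently before each run. Here is a minimal sketch in Python using Pillow; the folder names and contrast factor are my own assumptions, not part of any particular workflow:

```python
from pathlib import Path

from PIL import Image, ImageEnhance, ImageFilter


def preprocess(src_dir: str, dst_dir: str, contrast: float = 1.5) -> None:
    """Boost contrast and sharpen every JPEG in src_dir, saving copies to dst_dir."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.jpg")):
        img = Image.open(path)
        img = ImageEnhance.Contrast(img).enhance(contrast)  # factor > 1.0 increases contrast
        img = img.filter(ImageFilter.SHARPEN)               # mild built-in sharpening kernel
        img.save(out / path.name)


# Hypothetical usage: preprocess("sphinx_photos", "sphinx_photos_enhanced")
```

The originals are left untouched, so the enhanced set can be compared against the raw one in separate runs.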

All of these have produced results, but all poorer than the one above. I have also tried running the images through Agisoft PhotoScan, with similarly poor results.

The model above came about by pure chance, from uploading 100 of the images, so I’m hoping that by going back through the images I selected I may be able to find a pattern.
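One way to hunt for that pattern more systematically than by chance is to generate several random subsets of the photo list, run each through the photogrammetry tool, and record which subset produced the best model. A hypothetical sketch in Python (the filenames, subset size, and trial count are all assumptions for illustration):

```python
import random


def sample_subsets(filenames, subset_size=100, n_trials=5, seed=42):
    """Return n_trials random subsets of the photo list for batch testing.

    Each subset can be fed to the photogrammetry tool as one run; noting
    which subset yields the best model may reveal a selection pattern.
    """
    rng = random.Random(seed)  # fixed seed so the experiment is repeatable
    return [sorted(rng.sample(filenames, subset_size)) for _ in range(n_trials)]


# Hypothetical usage with a 200-photo collection:
# photos = [f"sphinx_{i:03d}.jpg" for i in range(200)]
# for trial, subset in enumerate(sample_subsets(photos)):
#     print(trial, len(subset))
```

Keeping the seed fixed means a promising subset can be regenerated and refined later rather than lost to chance.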

I shall post any updates here, so stay tuned!