Ham Hill Iron Age Skeletons Turn Digital


Three of the skeletons found at Ham Hill, Somerset, during the 2013 excavation are now available to view online at the following links:

https://sketchfab.com/show/70d864c4736435710bc65b6f21d81c03

https://sketchfab.com/show/821565c7ce0b98e1b7764c73a9f07492

https://sketchfab.com/show/fa694aff0fb5949e2f396a5fb2da37b0

The skeletons were discovered during this year’s excavation, carried out by the Cambridge Archaeological Unit and Cardiff University at the site of an important Iron Age hill fort. They are only some of the many human remains found: some were carefully placed within a number of pits, while others were located symbolically within an enclosure ditch, often with body parts missing.


The models themselves were made using photogrammetry, specifically 123D Catch, which produced quite good quality in very little time. The aim was to preserve a record of the skeletons in situ for further interpretation once they had been removed from the location in which they were discovered.

Given the complexity of the subject, the results seem very promising. Time was a big factor when making the models, as the skeletons had to be removed before damage occurred. Some of them were also in tight spots, which made it hard to access some of the standard angles, but overall this was counterbalanced by taking a larger number of photographs than usual (around 30 per skeleton). The lighting conditions also proved to be ideal, with an overcast sky but sufficient sunlight coming through to really emphasise the surfaces.
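To give an idea of how that photograph count breaks down, here is a minimal sketch of a capture plan as two rings of overlapping shots around a burial; the per-ring counts are my own illustrative assumptions, not the exact plan used on site.

```python
# Minimal sketch of a photogrammetry capture plan: two rings of shots around a subject.
# The per-ring counts are illustrative assumptions, not the exact plan used at Ham Hill.

def capture_plan(low_ring_shots=18, high_ring_shots=12):
    """Return a list of (ring, azimuth_in_degrees) camera positions."""
    plan = []
    for ring, count in (("low oblique", low_ring_shots), ("high oblique", high_ring_shots)):
        step = 360.0 / count
        plan.extend((ring, round(i * step, 1)) for i in range(count))
    return plan

if __name__ == "__main__":
    plan = capture_plan()
    print(f"{len(plan)} photographs in total")  # 30, in line with the ~30 used per skeleton
    for ring, azimuth in plan:
        print(f"{ring:>12} ring, azimuth {azimuth:>5} degrees")
```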


For further information on the skeletons I suggest reading the article at http://www.bbc.co.uk/news/uk-england-somerset-23928612


Using the iPhone Camera for Photogrammetry

I mentioned before that I recently received an iPhone 4S, and having been a strong supporter of Windows against Apple, I am slowly being converted. Apart from being able to carry my models around and show fellow archaeologists without risking the life of my laptop, I have started exploring the advantages of having a pretty good camera on me at all times.

By using the 123D Catch app it is possible to instantly create amazing models wherever you are, but how accurate are 3D models made using the iPhone camera? I don’t have the app itself due to a lack of 3G, but I have been going around site over the last week or so, taking a number of photographs and then processing them once I got home.

Once again I experimented with larger and smaller objects and features, comparing the results with those from regular SLR cameras. I can’t actually upload the images due to site restrictions, but I created a model of a toy car as an example. I followed the usual methods of recording, so as not to alter the results in any way.


These are some of the points I have found:

Image stitching: Comparing the number of images that stitched in normal models with those taken on the iPhone camera, there is a bit of a difference between the two. With similarly coloured images in particular, only some of the photographs are stitched together. This, however, happened only on a few occasions and so doesn’t constitute a major flaw.

Point cloud: The number of points within the models made with the iPhone seems to be equal to, if not greater than, that in the models from normal photographs. I believe this is because the iPhone adjusts the colours and lighting automatically and digitally, which makes the photographs more consistent. On the other hand, this also has the negative effect of artificially altering the images, changing the contrast and colour balance, which affects the accuracy of the model.


Textures: The textures in the iPhone models seem to be extremely good, probably due to the digital adjustment mentioned above. In this case I wouldn’t call it a problem: the results are quite bright and distinct, which is a good thing when analysing the models.

General look: This is the point I have the greatest issues with. The number of keypoints the program finds made me expect extremely crisp models, but to me they look much murkier than they should. The digital altering of the images, and the fact that each file is under 2 MB, make the models less accurate, and the results suffer noticeably.
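To put rough numbers on that difference, a short script along these lines can compare the average resolution and file size of the two photo sets; the folder names and the use of the Pillow library are my own assumptions, not part of the original workflow.

```python
# Compare average resolution and file size of two photo sets (e.g. iPhone vs SLR).
# Folder names are hypothetical; requires the Pillow library (pip install pillow).
import os
from PIL import Image

def summarise(folder):
    widths, heights, sizes = [], [], []
    for name in os.listdir(folder):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        path = os.path.join(folder, name)
        with Image.open(path) as img:
            widths.append(img.width)
            heights.append(img.height)
        sizes.append(os.path.getsize(path) / (1024 * 1024))  # file size in MB
    n = len(sizes)
    return sum(widths) / n, sum(heights) / n, sum(sizes) / n

for label, folder in (("iPhone", "photos_iphone"), ("SLR", "photos_slr")):
    width, height, megabytes = summarise(folder)
    print(f"{label}: {width:.0f} x {height:.0f} px, {megabytes:.2f} MB on average")
```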


Overall, though, I am happy with this method. If the models were of extreme importance I wouldn’t even consider using the iPhone camera, but for simple, less important models it is perfect. The practicality of being able to capture images in a matter of minutes and have them upload directly to my Dropbox is great, and on more than one occasion I’ve been caught without my camera, so it is a great alternative.

Some Issues with Photogrammetry

This is based on some of the work I did for my dissertation. I realised that as it stands it isn’t likely to be published, so I thought I should at least share some of the concepts and ideas that I used for it.

Creating photogrammetric models for archaeology can be a simple process, but there are some cases in which problems may arise, due to the shape of the objects or the type of surface. We’ve already seen how large features can require certain considerations when photographing, to simplify the work of 123D Catch.

But there are other issues as well:

  • Curved surfaces: If, for example, we are photographing a pot fragment from the inside and the outside, the fact that the surfaces are concave and convex rather than flat means we may have some difficulties. In general, convex surfaces are easier to manage than concave ones: provided the angle between the two spirals of photos is big enough, the pattern of points gives the program sufficient information to deal with the surface. If the surface is concave, the same does not apply, and the outcome depends on the size of the object. A large surface will generally provide sufficient detail for the program to recognise it and deal with it accordingly, but a smaller one often does not give the same results, and the only course of action is to adjust the illumination to create as much contrast on the surface as possible, although even this does not always work. Similarly, if the object has elements which overlap in the photos, these can present problems, as the program often assumes they are on the same level. In this case it may be necessary to increase the number of photos used, as well as manually unstitching and restitching any photos that have not been properly placed.


  • Translucent surfaces: These can cause problems with the camera, as they alter the contrast depending on the angle and lighting. In this case, it is important to make sure that the lighting is uniform and the object fixed, hence making it necessary to circle the object rather than rotate it.
  • Uniform texture: If the colour of the object is too similar throughout, and so does not provide enough difference and contrast for keypoints to be selected, adjusting the light to bring out as much contrast as possible may assist in the creation of the model. One particular case of this is objects that have similar sides, like the one in the image below, which looked very similar from different angles. That said, 123D Catch seems to cope quite well with this case, picking out the smallest details from the texture.

Image

There are of course many other cases in which problems arise when using 123D Catch, but those described here are some of the most common. The fact that these issues exist doesn’t mean the software is faulty or useless: these are relatively rare exceptions, and in general models can be produced easily without any of these problems.

Program Overview: Meshlab


When writing my third year dissertation a few months ago I analysed a basic method of creating photogrammetric models using 123D Catch, and when it came to discussing the later editing of the models the program I turned to was Meshlab.

I’d originally come across this program when looking at the Ducke, Score and Reeves 2011 article, as part of a more complex method of actually creating the models, but had concluded that it would be much more useful as a separate process. This is due to the vast variety of tools and filters that the program offers, which surpassed those of any other freeware 3D program I knew. I speak in the past tense because I’ve since changed my mind, at least in part.


But before talking about the problems, I’ll go through the advantages:

  • Easy interface: Simple operations are easy to do in Meshlab, and the navigation is efficient (the zoom is counterintuitive, but easy to adapt to). Loading models is simple, a lot of file types are supported, and switching between points and surfaces takes a single button click.
  • Nice lighting: Far from complex or complete, the lighting tool is somewhat primitive, but possibly for the best. Programs like Blender or Sketchup make me go insane when I just want to emphasise a certain area, while Meshlab has a light you can rotate to show certain elements. It also makes the model’s edges come to life, increasing the contrast and bringing out things that are hard to see otherwise. When I made a photogrammetric model of a Zeus head statue, some of the other archaeologists with me suggested, on looking at the Meshlab light effect, that it may well be a Neptune head instead.
  • Alignment tool: I think this may be due to my ignorance of Blender, but I still prefer the way Meshlab glues together two models, by selecting 4 identical points between them. It adapts the size and orientation automatically, with good results.
  • Great filters: I wrote an article about using the APSS filter to bring out inscriptions, and there are many more available, with references to documentation about them. Some are a bit useless for what I’m trying to achieve, but still not bad at all.


  • Free: Meshlab is entirely open source, which is great for making photogrammetric models. Compared to commercially available programs, it still does what you need it to do, without having to pay.

There are however some problems:

  • No undo button: Probably not the biggest issue, but by far the most annoying. The number of times I’ve had to restart a project from scratch because of a minor mistake makes this a big flaw. Unless you save every time you do anything, you’ll find yourself in trouble.
  • Saving glitches: Quite a few times I’ve saved the final product, closed the program and opened it again to find it had not saved the textures. This is difficult to work around when dealing with photogrammetric models, where the texture is essential.
  • No image files: Meshlab seems to hate all image files, which makes some scenarios difficult. For example, it is not possible to open a model and an image in the same context. I found this frustrating when gluing together features from the same trench, where I wanted to use the plan of the trench as a guide to their positions.
  • No real rendering: Blender is great for creating good images of the objects you’ve made, while with Meshlab all you can do is use the Windows Snipping Tool. Presentation-wise it is deficient.
  • Inefficient tools: In some tools Meshlab excels, in others it is lacking. The measuring tool exists, but to set a reference distance you have to do a series of calculations yourself (see the sketch after this list). The rotate tool is odd, and the transpose tool only seems to apply to a single layer.
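The calculation behind that reference distance is just a scale factor, so it is easy to work out before entering it in Meshlab’s scaling transform. A minimal sketch, with hypothetical measurements:

```python
# Working out the uniform scale factor to type into Meshlab's scaling transform.
# Values are hypothetical: a 0.50 m ranging-rod segment measured as 3.42 model units.
real_distance = 0.50        # known real-world distance, in metres
measured_distance = 3.42    # the same distance measured on the model with the measuring tool

scale_factor = real_distance / measured_distance
print(f"Apply a uniform scale of {scale_factor:.4f} to bring the model to metres")
# After scaling, further measurements taken on the model are in real-world metres.
```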

Overall, Meshlab can get the job done, and in some things it works better than other programs such as Blender. However, the issues that come from it being freeware make it harder to use. I would still recommend it, but when it can be avoided that’s probably for the best.


References:

Ducke, B., Score, D. and Reeves, J. (2011) Multiview 3D Reconstruction of the Archaeological Site at Weymouth from Image Series. Computers & Graphics, 35(2), pp. 375-382.

New Things I’ve Learnt for Google Sketchup


If you read my blog yesterday, you’ll know I posted an article about creating a virtual museum using 123D Catch and Google Sketchup Pro. Apart from the large-scale project, this has also given me a chance to play around more with these programs, especially Sketchup. As a result I’ve learnt a few more skills I’d like to share with you.

Sketchup is a brilliant program, and in my opinion the easiest and most efficient 3D modelling software for archaeology. For other uses it can be a bit simplistic, especially when realism is an issue, but for creating models of sites or structures for display it is perfectly capable. I currently have Sketchup Pro, as well as V-Ray for rendering images, but there is a free version of Sketchup that has most of the functions of the Pro version and is sufficient for most models. The tips in this article are based on the Pro and V-Ray combination, although some of them should still be available in the free version.

Materials: One of the things I was trying to achieve in the museum was creating little tags for objects, on which I could display information. Because I didn’t really need to write the actual text, to see if it would work I got an image of text from the internet and then loaded it into the paint tool. I’d done this before, but I was having trouble making the text fit perfectly. After trying different approaches I noticed there was a “position” material tool when I right-clicked the surface. This opened up a nice interface which allowed me to position the image successfully. It saved me a lot of time, and it has good potential for other things too. One idea was to create a large circular wall around a site onto which I could paint a landscape, so that when rendered it gives the impression of being placed in the real world. The positioning tool would allow me to manipulate this texture to make it continuous, without having to play around with the scale.


Lighting: I finally used the lighting tool with V-Ray, which was harder to understand than I thought. This was the first time I had created an entirely internal space, and lighting was an issue. By creating a long thin rectangle and placing the light on one side I created a convincing neon light. The trick here was increasing the intensity to 150, rather than the measly 30 it starts at, which makes it seem like it is not working. I am honestly impressed with this tool; it adds a lot to the realism, and I’m even thinking of adding a few lights under objects to give them a lit-up effect.


Walk tool: I usually pan, zoom and use the orbit tool when I’m editing and when I’m showing people my work, but the walk tool is a much nicer way to present a model. It allows the user to feel like they are present on the site rather than an external viewer, and it’s much less shaky than I expected. You can even make it stop automatically when you encounter a surface, which is great, considering this is occasionally an issue with the orbit, pan and zoom tools.

OBJ importing: I didn’t realise that Sketchup could import OBJ files, which is great. Many 3D programs have difficulty with the material files associated with 123D Catch OBJ files, yet Sketchup seems to have none (a quick way to check what those files point to is sketched below). It is hard to alter the files themselves, yet it’s a good way to put them in context. A feature could be placed within the reconstructed model of the site, and if all of the features were recorded they could be glued together to make the entire site.
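When a program does struggle with those material files, the cause is usually just the texture path referenced by the OBJ/MTL pair. A minimal check like the one below (the file name model.obj is hypothetical) lists what the export points to and whether the texture is actually where the MTL says it is.

```python
# List the material libraries and textures a 123D Catch OBJ export points to,
# and flag any that are missing. "model.obj" is a hypothetical file name;
# run the script from the folder containing the export.
import os

obj_path = "model.obj"

with open(obj_path) as f:
    mtl_files = [line.split(None, 1)[1].strip()
                 for line in f if line.startswith("mtllib")]

for mtl in mtl_files:
    print("Material library:", mtl, "(found)" if os.path.exists(mtl) else "(MISSING)")
    if not os.path.exists(mtl):
        continue
    with open(mtl) as f:
        textures = [line.split(None, 1)[1].strip()
                    for line in f if line.lower().startswith("map_kd")]
    for tex in textures:
        print("  Texture:", tex, "(found)" if os.path.exists(tex) else "(MISSING)")
```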


Glass: I made a few glass cabinets for the virtual museum and then tried making them transparent. I painted them light blue and set the opacity to 18, making them only slightly visible. When rendering the images this created a big problem: for some reason the outer side of the glass was transparent, while the inner was reflective, creating an odd mirror effect. I had to Google this one, and the solution seems to be to give the surface at least a bit of thickness. By creating a 3D pane of glass, rather than just a 2D surface, the problem is solved, with realistic results.


Some of these tips are probably obvious, but being self-taught in the program means that I have gaps in some areas. The good thing about Sketchup, though, is that it requires little prior knowledge, just a keen interest.

Virtual Museums: Combining 3D Modelling, Photogrammetry and Gaming Software

I wrote the post below last night, but since then I’ve managed to create at least part of what is described in the text, which is shown in the video above. Keep in mind, therefore, that the rest of the post may differ slightly from what is in the video.

One of the more popular posts I’ve published seems to be the one about public engagement at Caerau, South Wales, in which I created an online gallery of the clay “Celtic” heads schoolchildren made. The main concept I was exploring in that text was the idea that we could create digital galleries in which to display artefacts.

When I wrote the word gallery I imagined the computer definition of gallery, as in a collection of images (or in this case models) within a single folder. However I have since found this: http://3dstellwerk.com/frontend/index.php?uid=8&gid=18&owner=Galerie+Queen+Anne&title=1965%2C85%C2%B0C

This is an example of what the website http://3dstellwerk.com offers: an opportunity for artists to create a virtual space in which to display their work. It allows users to “walk” through the gallery and view the 2D artwork as if it were an actual exhibition. Although the navigation may require a little improvement, it is a brilliant idea for making art more accessible to people.

Virtual Museum

This idea, however, could easily be adapted for archaeology using photogrammetry. By making models of a selection of artefacts with 123D Catch, we can place them within a virtual space created in our 3D software of choice, and then animate it using gaming software such as Unity 3D, which allows user interaction. A large-scale project could even allow the objects to be clicked to display additional information, or have audio to go with each artefact. Video clips could also be incorporated within the virtual space.

Virtual Museum 2

On an even larger scale this could mean we can create online museums available to all and with specific goals in mind. As we are talking of digital copies of objects, it would be possible to group in a single virtual space a number of significant objects without having to physically remove them from their original location.

The only problem we may encounter with this idea is file size: each photogrammetric model is relatively small and manageable on its own, yet a decent-sized virtual museum adds up to a large amount of data. Still, even if the technology at present is not quite capable of dealing with the bulk, the rate at which it is improving should make such ideas doable in the near future.

Virtual Museum 3

The Winged Victory of Samothrace: Analysis of the Images

This is a continuation of my blog from yesterday, so I suggest you read that first. One of the things I’ve been working on over the past month is creating photogrammetric models of monuments using nothing but tourists’ photographs. After many attempts, my latest test, using the Winged Victory of Samothrace as a subject, seemed to have sufficiently good results, so much so that I decided to analyse the image data to pinpoint what kind of photographs actually stitch together in 123D Catch, and which ones cause problems. This way we can choose a limited number of good photographs, rather than a vast number of mixed ones.


In order to understand the patterns behind the stitching mechanism, I created an Excel file listing every image with certain details: width and height, whether the background is consistent with the majority of the photographs, whether the colour is consistent, whether the lighting is the same, the file size, and the position of the object. The last is based on the idea that to make 3D models we need series of shots from eight positions, each at a 45-degree angle from the next. Thinking of it like a compass, with north corresponding to the back of the object, position 1 is south, 2 is SW, 3 is W, and so on round to 8 at SE.

The first thing I noticed, which ties in with what I said yesterday, is the lack of photographs in positions 4 and 6 (NW and NE), which of course meant that all the images from the back (position 5) also had trouble stitching. The first problem for the model is therefore the need for enough images from all angles, without which missing parts are inevitable. This is made harder by the fact that these are typically positions that are not photographed.

Having concluded that the images from positions 4, 5 and 6 failed to stitch for this reason, I removed them from the list so the data for the others would be more accurate.

I then averaged the height, width and file size of the stitched images and of the unstitched images, and compared them. The former had an average height of 1205.31, a width of 906.57 and a file size of 526.26, while the latter had a height of 929.07, a width of 668.57 and a file size of 452.57. The differences here are enough to suggest that large, good-quality images have a higher chance of being stitched. This may seem obvious, but some images that did not stitch are larger and of higher quality than some that did, so this can’t be the only factor. Also, the difference isn’t large enough to suggest it is even a key factor.
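The same comparison is easy to reproduce outside Excel. A minimal sketch using pandas, with hypothetical file and column names standing in for the spreadsheet described above:

```python
# Average height, width and file size of stitched vs unstitched images.
# The file name and column names ("height", "width", "file_size", "stitched")
# are assumptions standing in for the spreadsheet described above; requires pandas.
import pandas as pd

df = pd.read_excel("winged_victory_images.xlsx")
means = df.groupby("stitched")[["height", "width", "file_size"]].mean()
print(means.round(2))
```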


The next step was comparing the percentage of stitched images that had an abnormal background, colour or lighting with that of the unstitched ones, which gave even more limited results. In the former, the background was not consistent in 15% of the images, the colour in 63% and the lighting in 47%, while in the latter the background figure was 0%, colour 50% and lighting 50%. Oddly, these results suggest that photographs have a higher chance of being included if they differ from the rest, which goes against all the knowledge we have of 123D Catch so far. I therefore suspect that the program allows a good deal of tolerance for these elements, and that I may have been too harsh in defining the differences.
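The percentage comparison can be pulled out the same way, again assuming yes/no flags recorded in hypothetical columns:

```python
# Share of images flagged as inconsistent, split by whether they stitched.
# Assumes boolean columns "odd_background", "odd_colour" and "odd_lighting"
# in the same hypothetical spreadsheet; requires pandas.
import pandas as pd

df = pd.read_excel("winged_victory_images.xlsx")
flags = ["odd_background", "odd_colour", "odd_lighting"]
percentages = df.groupby("stitched")[flags].mean() * 100
print(percentages.round(0))
```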

Having concluded little with this methodology I decided to look at each individual unstitched image to see what elements could be responsible. This proved to be much more successful. I managed to outline three categories of unstitched images which explain every single one of the photographs, without any overlap with the stitched ones:

  • Distant shots: Some photographs were taken at a further distance from the statue than others, and while a certain degree of variation is acceptable, in these it was too extreme.


  • Excessive shadows: Although as we’ve seen before the lighting didn’t appear to be a factor, some of the images had an extreme contrast, with very unnatural light. They were practically at the edge of the scale, and while some variation is acceptable, these were well beyond that.


  • Background: this is an interesting case in which the background is not different from the rest of the photographs, but has a very similar colour to the object itself. Because of the similarity it is difficult for the program to recognise any edges, which makes it impossible for the image to be stitched correctly.

Therefore creating 3D models from tourists’ photographs is entirely possible, as long as we have sufficient angles and photographs taken from a similar distance, without harsh lighting and with a contrasting background.

The Winged Victory of Samothrace Reconstructed Digitally Using Tourists’ Photographs.


If you’ve been following my latest attempts to recreate famous monuments through photogrammetry using nothing but tourists’ photographs, I finally have something to show for your patience. Before you get your hopes up, it is still not perfect, but it’s a step forward.

The idea behind this is that 123D Catch uses photographs to create 3D models, and while this generally means the user takes their own photographs, it doesn’t necessarily have to be so. The internet is full of images, and while most of them seem to be of cats, there are many of famous monuments and archaeological finds. Therefore there should be a way to use all these photographs found online to digitally recreate the monument in question. The problem is that, although in theory there should be no issue, there are still a great number of elements that affect the final mesh, including lighting, background and editing. While the images a single user takes in a short span of time remain consistent because little changes between them, a photograph taken in 2012 is different from one taken in 2013 by a different camera, making it hard for the program to recognise similarities. In addition, tourists take photographs without this process in mind, so monuments are often photographed from only a limited range of angles, making it hard to achieve 360-degree models.

In order to better understand the issue I started working with a series of monuments, including the Sphinx, Mount Rushmore (see previous articles), Stonehenge and the Statue of Liberty. These, however, are extremely large monuments, which makes it somewhat more difficult for the program to work (although in theory all that should happen is a loss of detail, without affecting the stitching of the images). I therefore decided to apply my usual tactic of working from the small to the large, choosing a much smaller object that would still prove the idea. In this case I chose the Winged Victory of Samothrace.


The reasoning behind the choice is that, unlike with other statues, there is more chance of images being taken from the sides, due to its shape. It is also on a pedestal, which means the background should remain consistent between shots. It also offers good contrast, because the shadows are amplified by the colour of the stone. I was aware, however, that the back would probably not work due to the lack of joining images, but figured that producing the front and sides would be sufficient progress.

The results can be seen here: https://sketchfab.com/show/604f938466ad41b8b9299ee692c5d9a3


As you can see, the front and most of the sides have come out with enough detail to call it a success, not least because 90% of the images taken from these angles were stitched together. The back, as predicted, didn’t appear, and there are some expected issues with the top as well. What is even more surprising is that some of the images had monochromatic backgrounds, very different from the bulk, and these still stitched in, suggesting that background is not the key factor with these models. The lighting is relatively consistent, so it could be that this is the main factor. As for image size and resolution, there doesn’t seem to be much of a pattern.

Overall I was very pleased with the results, and hopefully it’ll lead to a full 360-degree model as soon as I pinpoint an object with enough back and side images. Still, this does show that it is possible to create models from tourists’ photographs, which would be great for reconstructing objects or monuments that have unfortunately been destroyed.


Using Photogrammetry with Archaeological Archives: Must Farm


About a year ago I volunteered at the Cambridge Archaeological Unit for three weeks, during which I had some time to carry out experiments with photogrammetry with the help of some of the people there. One of the projects involved using their photograph archive to create 3D models with 123D Catch, to see if it was possible to create models from photographs not taken with this purpose in mind.

Looking through the archive, one site in particular caught my eye, as the photographs were perfect for this use: Must Farm, Cambridgeshire. The site itself is extremely interesting, and has won a Site of the Year award for its level of preservation. It is mostly a Bronze Age site, and over the years it was excavated it revealed a series of intact wooden boats, as well as a living structure which had collapsed into the river, waterlogging the timber frames and everything within, including weirs and pots with food residue. For more information visit the official website http://www.mustfarm.com/.


The photographs I found were from the 2006 excavation, and they consisted of series of shots of the same objects from similar angles. The number of images per feature was around eight, depending on what it was. The most common subjects were waterlogged wood beams, pottery spreads, sections and weirs.

Generally, the number of images and the fact that they were all taken from a very similar angle would make building a model impossible. But through various tests I have found that there is an exception to this rule when the object is particularly flat. If you are photographing a wall, there is no need to go around it to create a model; all you need to do is take the images from the front and change the angle slightly between shots. The same idea is at the root of stereophotography, in which two images at slightly different angles give our eyes the illusion of 3D, and modern 3D films use a similar principle, with incredible results.


Running the images through 123D Catch proved this theory: out of the 40 or so potential models, around 70% came out extremely well. The models had all the detail of a 3D model made from purpose-taken photographs. Some details could have been better, such as the models of timber sticking out of the ground, for which only the front is available, as would be expected, and the sides of protruding objects, which are blurry, but overall the results are amazing for what they are.

Most of the models are now available at http://robbarratt.sketchfab.me/folders/must-farm


If we consider the number of archaeological photographs taken at every site, surely amongst them there is enough to create at least some fantastic models.