Using the iPhone Camera for Photogrammetry

As I mentioned before, I recently received an iPhone 4S, and having long been a strong supporter of Windows over Apple, I am slowly being converted. Apart from the great advantage of being able to carry my models around and show fellow archaeologists without risking the life of my laptop, I have started exploring the benefits of having a pretty good camera on me at all times.

Using the 123D Catch app, it is possible to instantly create amazing models wherever you are, but how accurate are 3D models made with the iPhone camera? I don’t use the app itself due to a lack of 3G, but I have been going around site over the last week or so taking a number of photographs, and then processing them once I got home.

Once again I experimented with larger and smaller objects and features, comparing the results with those obtained with regular SLR cameras. I can’t actually upload those images due to site restrictions, but I created a model of a toy car as an example. I followed the usual recording methods, so as not to alter the results in any way.

Image

These are some of the points I have found:

Image stitching: Comparing the number of images that stitched in normal models with those made using the iPhone’s camera, there is a bit of a difference between the two. With similarly coloured images in particular, only some of the images are stitched together. This, however, happened only on a few occasions, and as such doesn’t constitute a major flaw.

Point cloud: The number of points within the models made with the iPhone seems to be equal to, if not greater than, that in the models from normal photographs. I believe this is because the iPhone adjusts the colours and lighting automatically and digitally, which makes the photographs more consistent. On the other hand, this also has the negative effect of artificially altering the images, playing with the contrast and colour balance, which affects the accuracy of the model.

Image

Textures: The textures in the iPhone models seem to be extremely good, probably thanks to the digital adjustment mentioned above. In this case I wouldn’t call it a problem: the results are bright and distinct, which is a good thing when analysing the models.

General look: This is the point I have the greatest issues with. The number of keypoints the program finds made me expect extremely crisp models, but they look much murkier than they should. The digital altering of the images, and the fact that the images are under 2 MB in size, make the model much less accurate, and the results suffer greatly. A quick pre-flight check of image size is sketched after the image below.

Image
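Since undersized images seem to be part of the problem, a quick way to screen a folder of phone shots before processing is to check dimensions and file sizes in bulk. This is a minimal sketch, assuming Pillow is installed and using a hypothetical folder name; the 2 MB threshold simply mirrors the observation above.

```python
# Minimal pre-flight check for a folder of phone photos (assumes Pillow:
# pip install Pillow). Flags images whose file size falls below ~2 MB,
# since undersized images seemed to hurt model accuracy in my tests.
import os
from PIL import Image

PHOTO_DIR = "site_photos"          # hypothetical folder of iPhone shots
MIN_BYTES = 2 * 1024 * 1024        # rough 2 MB threshold

for name in sorted(os.listdir(PHOTO_DIR)):
    if not name.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    path = os.path.join(PHOTO_DIR, name)
    size = os.path.getsize(path)
    with Image.open(path) as im:
        w, h = im.size
    flag = "LOW" if size < MIN_BYTES else "ok"
    print(f"{name}: {w}x{h}, {size / 1_048_576:.2f} MB [{flag}]")
```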

Overall though I am happy with this method. If the models were of critical importance I wouldn’t even consider using the iPhone camera, but for simple, less important models it is perfect. Being able to capture images in a matter of minutes and have them upload directly to my Dropbox is great, and since I’ve been caught without my camera on more than one occasion, it makes a great alternative.


Photogrammetry in Archaeology: Examples

I decided to make a video to show some of the work I’ve been doing with 123D Catch and archaeological features. There’s not much to say about this one, as the video is self-explanatory, so thanks for watching.

Some Issues with Photogrammetry

This is based on some of the work I did for my dissertation. I realised that as it stands it isn’t likely to be published, so I thought I should at least share some of the concepts and ideas that I used for it.

Creating photogrammetric models for archaeology can be a simple process, but there are some cases in which problems may arise, due to the shape of the objects or the type of surface. We’ve already seen how large features can require certain considerations when photographing, to simplify the work of 123D Catch.

But there are other issues as well:

  • Curved surfaces: If, for example, we are photographing a pot fragment from the inside and the outside, the fact that the surfaces are concave and convex rather than flat means we may have some difficulties. In general, convex surfaces are easier to manage than concave ones: provided the angle between the two spirals of photos is wide enough, the pattern of points gives the program sufficient information to deal with the surface. If the surface is concave, however, the same does not apply, and the outcome of the rendering depends on the size of the object. A large surface will generally provide sufficient detail for the program to recognise it and deal with it accordingly, but a smaller sample often does not give the same results, and the only course of action is to adjust the illumination to create as much contrast on the surface as possible, although even this does not always work. Similarly, if the object has elements which overlap in the photos, these can present problems, as the program often assumes they are on the same level. In this case it may be necessary to increase the number of photos input to help clarify the geometry, as well as manually unstitching and restitching any photos that have not been properly placed.

Image

  • Translucent surfaces: These can cause problems for the camera, as they alter the contrast depending on the angle and lighting. In this case it is important to make sure that the lighting is uniform and the object fixed, which makes it necessary to circle the object rather than rotate it.
  • Uniform texture: If the colour of the object is too similar throughout, not providing enough difference and contrast for the selection of keypoints, adjusting the light to bring out as much contrast as possible may assist in the creation of the model (a rough way of quantifying this is sketched after the image below). One particular case of this is objects with similar sides, like the one in the image below, which looked very similar from different angles. That said, 123D Catch seems to cope quite well with this case, allowing the smallest of details to be picked out from the texture.

Image
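As mentioned in the point on uniform textures above, a crude way to quantify how much tonal variation a photo offers the keypoint detector is the standard deviation of its greyscale values. A minimal sketch with Pillow; the filename and the threshold are mine, picked for illustration only:

```python
# Rough contrast check before uploading: the standard deviation of the
# greyscale pixel values is a crude proxy for how much tonal variation
# a photo offers the keypoint detector. The threshold here is a guess.
from PIL import Image, ImageStat

def contrast_score(path):
    grey = Image.open(path).convert("L")   # collapse to greyscale
    return ImageStat.Stat(grey).stddev[0]  # std dev of pixel values

score = contrast_score("uniform_pot.jpg")  # hypothetical filename
print(f"contrast ~{score:.1f}: " + ("try relighting" if score < 30 else "probably fine"))
```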

There are of course many other cases in which problems arise when using 123D Catch, but those described here are some of the most common. The fact that these issues exist doesn’t mean the software is deeply flawed or useless: we are dealing with some rare exceptions, and in general models can be produced easily without any of these problems.

Viewing Photogrammetric Models on iPhone/iPad

Image

Until very recently I looked at the iPhone and the iPad with a pinch of scepticism, as I believed them to be simply less powerful laptops, mainly used for games and the occasional note-taking. I’ve always been a Windows user, but last month I was given an old iPhone and I’m becoming more and more convinced of the effectiveness of Apple products, especially with regard to 3D modelling and photogrammetry.

The first thing I tried was the 123D Catch app, which I am very pleased with. Unfortunately I don’t have 3G, so the great advantage of being able to create models wherever I am is lost on me. Still, regardless of my personal use, it is truly a plus. Also, the camera itself is good enough to capture the necessary level of detail.

The one thing that got me thinking, though, was the possibility of carrying my collection of models with me, so I have something to show when talking to people about photogrammetry. Many times in the last two months I’ve had to bring my laptop on site to show some results, and every time I risked it getting broken. As my phone is much easier to protect, I realised that getting my models onto it could save me the cost of a new laptop.

Therefore I started looking through all the different apps available, both free and commercial. Of all the ones I found, the one I’m most pleased with is MeshLab for iOS, which is derived from the MeshLab I use on PC.

Image

Yesterday I went through the main flaws of the PC version, but the app itself is the best there is. It allows you to open OBJ files with textures, received by mail or Dropbox as a .zip archive, and view them in a typical MeshLab environment. The texture support is the deciding factor, as texture is what gives me a lot of problems in other apps. The navigation tools are easy and intuitive, and you can change the lighting with a single tap, highlighting certain areas. Finally, it doesn’t require an internet connection, which is ideal for my iPhone.
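For anyone wanting to try this, packaging a model is straightforward. A minimal Python sketch, with hypothetical filenames, that bundles the OBJ, its MTL material file and the texture into one .zip ready to mail or drop into Dropbox:

```python
# Bundle a photogrammetric model for MeshLab for iOS: the OBJ, its MTL
# and the texture image go together in a single .zip archive.
# Filenames here are hypothetical.
import zipfile

files = ["toy_car.obj", "toy_car.mtl", "toy_car_tex.jpg"]

with zipfile.ZipFile("toy_car.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for f in files:
        zf.write(f)  # files sit flat inside the archive, side by side
```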

The only disadvantages I can see are that it does crash when opening large files, which is rarely a problem but annoying in some cases, and that the contrast is too high. The shadows it creates make the models seem less natural than they should be, and there is no way to remove them. Although not a major issue, it does make the models lose a little something; I’m guessing future updates will improve this. Finally, there is no way to sort files into folders, which could become difficult if you have many different models.

I shall continue investigating apps and see what I can find. By the looks of it, there is a lot of potential in 3D modelling for archaeology awaiting me.

Virtual Museums: Combining 3D Modelling, Photogrammetry and Gaming Software

I wrote the post below last night, but since then I’ve managed to create at least part of what is described in the text, which is shown in the video above. Keep in mind, therefore, that the rest of the post may differ slightly from what is in the video.

One of the more popular posts I’ve published seems to be the one about public engagement at Caerau, South Wales, in which I created an online gallery of the clay “Celtic” heads that school children had made. The main concept I was exploring in that post was the idea that we could create digital galleries in which to display artefacts.

When I wrote the word gallery I had in mind the computing definition: a collection of images (or in this case models) within a single folder. However, I have since found this: http://3dstellwerk.com/frontend/index.php?uid=8&gid=18&owner=Galerie+Queen+Anne&title=1965%2C85%C2%B0C

This is an example of what the website http://3dstellwerk.com offers: an opportunity for artists to create a virtual space in which to display their work. It allows users to “walk” through the gallery and view the 2D artwork as if it were an actual exhibition. Although the navigation may require a little improvement, it is a brilliant idea for making art more accessible to people.

Virtual Museum

This idea could easily be adapted for archaeology using photogrammetry. By making models of a selection of artefacts with 123D Catch, we can place them within a virtual space created in our 3D software of choice, and then animate it using gaming software such as Unity 3D, which would allow user interaction. A large-scale project could even allow objects to be clicked to display additional information, or add audio to accompany each artefact. Video clips could also be incorporated within the virtual space.

Virtual Museum 2

On an even larger scale, this could mean creating online museums available to all, with specific goals in mind. As we are dealing with digital copies of objects, it would be possible to group in a single virtual space a number of significant objects without having to physically remove them from their original locations.

The only problem we may encounter with this idea is file size: each photogrammetric model is relatively small and manageable on its own, yet a decent-sized virtual museum is going to require a large amount of data. Still, even if the technology at present is not quite capable of dealing with the bulk, the rate at which it is improving should make such ideas doable in the near future.
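As a back-of-envelope check on the file-size worry, a short script can tally what a folder of exported models actually weighs. A sketch with a hypothetical folder name, counting OBJ files as one model each:

```python
# Tally the size of a folder of photogrammetric models to estimate what
# a virtual museum of N exhibits would weigh. Folder name is hypothetical.
import os

MODEL_DIR = "museum_models"

total_bytes = 0
obj_count = 0
for root, _dirs, names in os.walk(MODEL_DIR):
    for n in names:
        total_bytes += os.path.getsize(os.path.join(root, n))
        if n.lower().endswith(".obj"):
            obj_count += 1

print(f"{obj_count} models, {total_bytes / 1_048_576:.1f} MB in total")
if obj_count:
    print(f"~{total_bytes / obj_count / 1_048_576:.1f} MB per exhibit (mesh + texture)")
```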

Virtual Museum 3

The Winged Victory of Samothrace: Analysis of the Images

This is a continuation of yesterday’s post, so I suggest you read that first. One of the things I’ve been working on this past month is creating photogrammetric models of monuments using nothing but tourists’ photographs. After many attempts, my latest test, using the Winged Victory of Samothrace as a subject, had sufficiently good results that I decided to analyse the image data to pinpoint what kind of photographs actually stitch together in 123D Catch, and which ones cause problems. This way we can choose a limited number of good photographs, rather than a vast number of mixed ones.

Image

In order to understand the patterns behind the stitching mechanism, I created an Excel file in which I recorded certain details for every image: width and height, whether the background is consistent with the majority of the photographs, whether the colour is consistent, whether the lighting is the same, the file size, and the position of the object. The last is based on the idea that to make 3D models we need series of shots from 8 positions, each at a 45-degree angle from the next. Thinking of it like a compass, with north corresponding to the back of the object, position 1 is south, 2 is SW, 3 is W, and so on round to 8 at SE.
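To make the scheme concrete, here is how I would code it; a minimal Python sketch, with the bearing-to-position arithmetic inferred from the description above (0 degrees pointing at the back of the object):

```python
# Coding the shooting positions: north (0 degrees) is the back of the
# statue, position 1 is due south, and positions step 45 degrees
# clockwise up to 8 (SE). Given a rough compass bearing for a photo,
# this assigns it to one of the eight positions.
LABELS = {1: "S", 2: "SW", 3: "W", 4: "NW", 5: "N", 6: "NE", 7: "E", 8: "SE"}

def position(bearing_deg):
    """Map a compass bearing (0 = back of object) to positions 1-8."""
    return round((bearing_deg - 180) / 45) % 8 + 1

for b in (180, 225, 0, 135):
    p = position(b)
    print(f"bearing {b:3d} deg -> position {p} ({LABELS[p]})")
```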

The first thing I noticed, which ties in with what I said yesterday, is the lack of photographs in positions 4 and 6 (NW and NE), which of course meant that all the images from the back (position 5) also had trouble stitching. The first problem for the model, therefore, is the need for enough images from all angles, without which missing parts are inevitable. This is made harder by the fact that these are typically positions that are not photographed.

Having concluded that images from positions 4, 5 and 6 had this reason for not stitching, I removed them from the list so the data for the others would be more accurate.

I then averaged the height, width and file size of the stitched images and of the unstitched images and compared them. The former had an average height of 1205.31, a width of 906.57 and a file size of 526.26; the latter a height of 929.07, a width of 668.57 and a file size of 452.57. The differences are enough to suggest that large, good-quality images have a higher chance of being stitched. This may seem obvious, but some images that did not stitch are in fact larger and of higher quality than some that did, so this can’t be the only factor; nor is the difference large enough to suggest it is even a key one.
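For anyone wishing to reproduce the comparison, a minimal pandas sketch, assuming the Excel sheet has been exported to CSV; the column names (height, width, file_size, and a boolean stitched flag) are mine, not the ones in the original file:

```python
# Recompute the per-group averages from the spreadsheet, exported to CSV.
# Column names are my own naming assumption, not from the original file.
import pandas as pd

df = pd.read_csv("samothrace_images.csv")          # hypothetical export
means = df.groupby("stitched")[["height", "width", "file_size"]].mean()
print(means.round(2))
```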

Image

Image

The next step was comparing the percentage of stitched images that had an abnormal background, colour or lighting with that of the unstitched ones, which gave even more limited results. Among the former, the background was inconsistent in 15% of the images, the colour in 63% and the lighting in 47%; among the latter, the figures were 0% for background, 50% for colour and 50% for lighting. Oddly, these results suggest photographs have a higher chance of being included if they differ from the rest, which goes against everything we know so far about 123D Catch. I therefore suspect that the program allows a good deal of tolerance in these elements, and that I may have been too harsh in judging the differences.
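The categorical comparison can be recomputed the same way, assuming boolean columns for the three consistency checks (again, my column names):

```python
# Share of images in each group whose background, colour or lighting
# differs from the majority. Assumes boolean columns alongside "stitched".
import pandas as pd

df = pd.read_csv("samothrace_images.csv")          # hypothetical export
flags = ["bg_abnormal", "colour_abnormal", "light_abnormal"]
pct = df.groupby("stitched")[flags].mean() * 100   # mean of booleans = share
print(pct.round(0))
```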

Having concluded little with this methodology, I decided to look at each individual unstitched image to see what elements could be responsible. This proved much more successful. I identified three categories of unstitched images which between them explain every single one of the photographs, without any overlap with the stitched ones:

  • Distant shots: Some photographs were taken at a greater distance from the statue than the others, and while a certain degree of variation is acceptable, in these it was too extreme.

Image

  • Excessive shadows: Although, as we’ve seen, lighting didn’t appear to be a factor overall, some of the images had extreme contrast, with very unnatural light. They were practically at the edge of the scale, and while some variation is acceptable, these were well beyond it.

Image

  • Background: This is an interesting case in which the background is not different from the rest of the set, but has a very similar colour to the object itself. Because of this similarity it is difficult for the program to recognise any edges, which makes it impossible for the image to be stitched correctly.

Image

Creating 3D models from tourists’ photographs is therefore entirely possible, as long as we have sufficient angles and photographs taken from a similar distance, without harsh lighting and with a contrasting background.

The Winged Victory of Samothrace Reconstructed Digitally Using Tourists’ Photographs

Image

If you’ve been following my attempts to recreate famous monuments through photogrammetry using nothing but tourists’ photographs, I finally have something to show for your patience. Before you get your hopes up, it is still not perfect, but it’s a step forward.

The idea behind this is that 123D Catch uses photographs to create 3D models, and while generally the user takes their own photographs, it doesn’t necessarily have to be so. The internet is full of images, and while most of them seem to be of cats, there are many of famous monuments and archaeological finds. There should therefore be a way to use all these photographs found online to digitally recreate the monument in question. The problem is that, although in theory there should be no issue, a great number of elements affect the final mesh, including lighting, background and editing. While the images a single user takes in a short span of time remain consistent due to minimal changes taking place, a photograph taken in 2012 differs from one taken in 2013 with a different camera, making it hard for the program to recognise similarities. In addition, tourists take photographs without this process in mind, so monuments are often photographed from limited angles, making 360-degree models hard to achieve.

In order to better understand the issue I started working with a series of monuments, including the Sphinx, Mount Rushmore (see previous articles), Stonehenge and the Statue of Liberty. These however are extremely large monuments, which makes it somewhat more difficult for the program to work (although in theory all that should happen is a loss of detail, without the stitching of the images being affected). I therefore decided to apply my usual tactic of working from the small to the large, choosing a much smaller object that would still prove the idea. In this case I chose the Winged Victory of Samothrace.

Image

The reasoning behind the choice is that, unlike other statues, there is more chance of images of the sides having been taken, due to its shape. It is also on a pedestal, which means the background should remain consistent between shots. It also offers good contrast, as the shadows are amplified by the colour of the stone. I was aware that the back would probably not work due to the lack of joining images, but figured that achieving the front and sides would be sufficient progress.

The results can be seen here: https://sketchfab.com/show/604f938466ad41b8b9299ee692c5d9a3

Image

As you can see, the front and most of the sides have come out with enough detail to call it a success, helped by the fact that 90% of the images taken from these angles were stitched together. The back, as predicted, didn’t appear, and there are some expected issues with the top as well. What is even more surprising, however, is that some of the images had monochromatic backgrounds, very different from the bulk, and still stitched in, suggesting that background is not the key factor in these models. The lighting is relatively consistent, so that may be the main factor. As for image size and resolution, there doesn’t seem to be much of a pattern.

Overall I am very pleased with the results, and hopefully this will lead to a full 360-degree model as soon as I pinpoint an object with enough back and side images. Still, it does show that it is possible to create models from tourists’ photographs, which would be great for reconstructing objects or monuments that have unfortunately been destroyed.

Image

Glastonbury Ware Pot – Ham Hill

This is one of the first attempts I made with photogrammetry, and probably one of the ones I am happiest with. It is a beautiful pot found during the 2011 excavation, glued back together to show how it would have looked prior to its destruction. I made the model from around 60 images with 123D Catch, in good natural lighting that brought out the contrasts well, especially in the decoration.
The thing I am extremely happy with is that I was able to capture both sides, something I had been struggling to do before, and which was probably helped by the large number of images.
The animation itself was made using the tool in 123D Catch, which is ideal for displaying the model, although it is hard to create a non-wavy video, as this one shows. Still, in the absence of more suitable programs, or of updated browsers for Sketchfab, it is an ideal tool, as the result can be uploaded to YouTube and shared with anyone interested.
The model can also be viewed at the following link: https://sketchfab.com/show/8ABDov7xS8kV8mfbGMuQkFOmjE3


Using Photogrammetry with Archaeological Archives: Must Farm

Image

About a year ago I volunteered at the Cambridge Archaeological Unit for three weeks, during which I had time to carry out some experiments with photogrammetry with the help of some of the people there. One of the projects involved using their photographic archive to create 3D models with 123D Catch, to see whether it was possible to build models from photographs not taken with this purpose in mind.

Looking through the archive, one site in particular caught my eye, as its photographs were perfect for this use: Must Farm, Cambridgeshire. The site itself is extremely interesting, and has won a Site of the Year award for its level of preservation. It is mostly a Bronze Age site, and over the years of excavation it revealed a series of intact wooden boats, as well as a dwelling which had collapsed into the river, waterlogging the timber frames and everything within, including weirs and pots with food residue. For more information visit the official website http://www.mustfarm.com/.

Image

The photographs I found were from the 2006 excavation, and consisted of series of shots of the same objects from similar angles. The number of images per feature was around 8, depending on what it was. The most common subjects were waterlogged wood beams, pottery spreads, sections and weirs.

Generally, this number of images, all taken from a very similar angle, would make creating a model impossible. But through various tests I have found an exception to this rule: particularly flat objects. If you are photographing a wall, there is no need to go around it to create a model; all you need is to take the images from the front and then change the angle slightly. The same idea is at the root of stereophotography, in which two images at slightly different angles give our eyes the illusion of 3D, and modern 3D films use a similar principle to incredible effect.
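123D Catch’s internals are proprietary, so purely as an analogy to its stitching step, here is a toy illustration of the principle: even between two shots taken at slightly different angles, a feature detector finds plenty of matching keypoints. A sketch using OpenCV (pip install opencv-python), with hypothetical filenames:

```python
# Illustration of why two near-identical angles can suffice for flat
# subjects: a small shift still yields many matched keypoints between
# the shots. This is only an analogy to 123D Catch's stitching step.
import cv2

img1 = cv2.imread("beam_front_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("beam_front_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)       # fast, patent-free detector
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
print(f"{len(matches)} keypoint matches between the two angles")
```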

Image

Running the images through 123D Catch proved the theory: of the 40 or so potential models, around 70% came out extremely well. The models had all the detail of ones made from photographs taken deliberately for photogrammetry. Some details could have been better, such as the models of timbers sticking out of the ground, for which only the front is available as would be expected, and the sides of protruding objects, which are blurry, but overall the results are amazing for what they are.

Most of the models are now available at http://robbarratt.sketchfab.me/folders/must-farm

Image

If we consider the number of archaeological photographs taken at every site, surely amongst them there is enough to create at least some fantastic models.

Reconstructing Mount Rushmore from Tourists’ Photographs

Image

One of my first posts here was about creating 3D photorealistic models from tourists’ photographs, the main aim being the possibility of recreating monuments that have existed since the invention of photography but have since been destroyed or damaged. It is also a way of testing the size limits of the technology, which previous work has pushed as far as 10 m by 10 m with relative accuracy using aerial photographs. In the earlier post I attempted to create a model of the Sphinx, using more than 200 images found online with a simple Google search, with some interesting results.

Although far from complete, I managed to create slightly less than half of the Sphinx, which allowed me to draw some conclusions. The main one was that the technology is indeed capable of making large-scale models from tourists’ photographs; the only issue is understanding what combination of images allows it. The images that had been stitched together (only 10% of the total) didn’t seem to follow a pattern, and a series of restrictions on the inputs produced no improvement.

Hence I moved on to another subject to understand what combination of photographs is needed. I chose around 50 photographs of the Statue of Liberty from all angles and attempted to run them, with only 5 producing a rather unsuccessful mesh. Given this small number it was hard to draw any conclusions, other than the suspicion that more photos can improve the quality of the model. One of the main issues, though, was actually finding photographs of the whole statue. While many images of the front were available, the sides and back were practically un-photographed, and the lower part of the body was entirely ignored from some angles.

Given this, I moved on to another monument: Mount Rushmore. I chose it because, as it only spans 180 degrees, it requires fewer angles to photograph. Also, the aforementioned work with aerial photography seemed to suggest that, given the depth of perspective of Mount Rushmore and the fact that parts of it don’t obscure other areas, all that would be needed is a slight variation from a single position (as in stereophotography). I selected 40 images from a Google search, most of which appear to have been taken from the same viewpoint.

Image

As with the Sphinx, the results are not amazing, but they do narrow down the process. The mesh looks great from around the theoretical viewing point, but the more you rotate it the more warped and distorted it becomes, especially the faces further away. The lack of images from other angles is clearly taking its toll, as the program is failing to create the sides, which come out stretched. Anything that can be seen from the viewing point appears accurate, but the rest is very approximate. The idea of a single angle with slight variations therefore has to be discarded.

Image

The good thing, though, is that only 7 images were left unstitched, while 4 were stitched incorrectly. The only explanation I can see is the great similarity of the stitched images, which have relatively consistent lighting and colours. This may therefore be the key to the entire process, and would explain why the Sphinx was harder to achieve, given the great differences in colour and light between its photographs. The images that were placed wrongly were taken from a greater distance than the others, which suggests that a relatively consistent distance may be another factor, and the main reason for the failure of the Statue of Liberty.
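If consistent lighting and colour really are the key, candidate photographs could be screened before uploading by comparing their mean brightness and discarding outliers. A rough sketch with Pillow; the folder name and cut-off are arbitrary:

```python
# Screen a photo set for lighting consistency: compute each image's
# mean greyscale brightness and flag those far from the set average.
import os
from PIL import Image, ImageStat

PHOTO_DIR = "rushmore_photos"              # hypothetical folder

means = {}
for name in os.listdir(PHOTO_DIR):
    if name.lower().endswith((".jpg", ".jpeg", ".png")):
        with Image.open(os.path.join(PHOTO_DIR, name)) as im:
            means[name] = ImageStat.Stat(im.convert("L")).mean[0]

overall = sum(means.values()) / len(means)
for name, m in sorted(means.items()):
    if abs(m - overall) > 40:              # arbitrary outlier cut-off
        print(f"{name}: brightness {m:.0f} vs set average {overall:.0f} -> consider dropping")
```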

Finally, the fact that this was done with only 40 images, against the 200 of the Sphinx, shows that it is not the number of images that determines the model but their quality and consistency.

The results, of course, are not acceptable for either of the aims of this work, and even though I am slowly narrowing down the ideas needed to make it succeed, it still requires many more tests. The next step is the creation of an Access database to try to narrow down the key elements that affect the results.
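I plan to use Access, but the record structure can be sketched just as well in SQLite for anyone wanting to follow along; the fields mirror the spreadsheet columns from the Winged Victory test:

```python
# SQLite stand-in for the planned Access database; fields mirror the
# Excel columns used in the Winged Victory stitching analysis.
import sqlite3

con = sqlite3.connect("stitching_tests.db")  # hypothetical database name
con.execute("""
    CREATE TABLE IF NOT EXISTS images (
        filename         TEXT PRIMARY KEY,
        width            INTEGER,
        height           INTEGER,
        file_size        INTEGER,  -- bytes
        position         INTEGER,  -- 1-8 compass scheme described earlier
        bg_consistent    INTEGER,  -- booleans stored as 0/1
        col_consistent   INTEGER,
        light_consistent INTEGER,
        stitched         INTEGER
    )
""")
con.commit()
con.close()
```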

Image