Photogrammetric Model Made With iPhone 4S

[Image: sheep 1]

I’ve experimented before with using my iPhone to create photogrammetric models (not through the app, just taking the photos and running them through the Windows version of 123D Catch), with interesting but not perfect results. The other day, however, I found myself with a nice complete in situ sheep skeleton and no camera, so I took the opportunity to test the technology once again.

I took 49 photos in very good uniform shade, going round the skeleton at first and then concentrating on tricky parts, like the head or the ribs. I then ran them through 123D Catch and found that almost all of them had been stitched. I think the lighting really did the trick, as it created a really nice contrast between the bones and the ground. The photos were taken just as the sun had set, so it was still very light, but with no glare.

[Images: sheep 5, sheep 4]

The skeleton itself looks extremely good compared to some of my earlier tests. It can be viewed here in rotatable 3D: https://sketchfab.com/show/b0ef1638d4714fcdab59c040cdb46923
I particularly like the relatively sharp edges that I really couldn’t achieve with the other models, and looking at the point cloud I found it to be quite accurate regardless of textures. It also coped excellently with the rib that pokes out of the ground and with the pelvis, both of which I was absolutely sure it would have problems with. Overall I’d say the model is nearly as good as some of the models I have made with a standard camera, and I think the potential is definitely there.
The only issue I have with using the iPhone camera is that it’s still an unreliable method. I tried replicating the results today, as the skeleton had been cleaned further, but the new model is blurrier, again probably due to slightly less ideal lighting conditions. Therefore I would still use my camera as much as possible, and save the iPhone for those situations in which I find myself unprepared.

[Image: sheep 2]

Using the iPhone Camera for Photogrammetry

I mentioned before that I recently received an iPhone 4S, and having been a strong supporter of Windows against Apple, I am slowly being converted. Apart from the great advantage of being able to carry my models around and show fellow archaeologists without risking the life of my laptop, I have started exploring the benefits of having a pretty good camera on me at all times.

By using the 123D Catch app it is possible to create models instantly wherever you are, but how accurate are 3D models made using the iPhone camera? I don’t have the app itself due to a lack of 3G, but I have been going around site for the last week or so, taking a number of photographs and then processing them once I got home.

Once again I experimented with larger and smaller objects and features, comparing the results with those produced using regular SLR cameras. I can’t actually upload those images due to site restrictions, so I created a model of a toy car as an example. I followed the usual recording methods, so as not to alter the results in any way.


These are some of the points I have found:

Image stitching: Comparing the number of images stitched in normal models with those taken on the iPhone’s camera, there is a bit of a difference between the two. Especially with similarly coloured images, only some of the photographs are stitched together. This happened only on a few occasions, however, and as such doesn’t constitute a major flaw.

Point cloud: The number of points in the models made with the iPhone seems to be equal to, if not greater than, that in models from normal photographs. I believe this is because the iPhone adjusts colours and lighting automatically and digitally, which makes the photographs more consistent. On the other hand, this also has the negative effect of artificially changing the images, playing with the contrast and colour balance, which affects the accuracy of the model.
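A crude way to make this point-count comparison is to count the vertex records in the exported models. This is my own sketch, not part of the workflow above; it assumes the models have been exported as Wavefront OBJ text, which 123D Catch and MeshLab can both produce:

```python
# Sketch (my addition): count geometric vertex records ("v x y z") in
# Wavefront OBJ text to compare the density of two exported models.

def count_obj_vertices(lines):
    """Count 'v ' vertex lines; 'vt' (texture) and 'vn' (normal) lines are skipped."""
    return sum(1 for line in lines if line.startswith("v "))

# Tiny inline example rather than a real export:
sample = [
    "v 0.0 0.0 0.0",
    "v 1.0 0.0 0.0",
    "vt 0.5 0.5",   # texture coordinate, not a geometric vertex
    "vn 0 0 1",     # normal, not a geometric vertex
]
print(count_obj_vertices(sample))  # → 2
```

In practice you would call it with `open("model.obj")` for each of the two exports and compare the totals.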


Textures: The textures in the iPhone models seem extremely good, probably due to the digital adjustment mentioned above. In this case I wouldn’t say it is a problem: the results are quite bright and distinct, which is a good thing when analysing the models.

General look: This is the point I have the greatest issue with. The number of keypoints the program finds made me expect extremely crisp models, but they look much murkier than they should. The digital altering of the images, and the fact that each image is under 2 MB, makes the model much less accurate, and the results suffer greatly.
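If file size is part of the problem, a folder of photos can at least be checked before processing. A minimal sketch of that check, assuming .jpg files and taking the 2 MB figure above as the threshold (both assumptions are mine):

```python
import os

# Assumed threshold: the rough 2 MB figure mentioned above.
LIMIT = 2 * 1024 * 1024  # bytes

def undersized_photos(folder):
    """Return the .jpg filenames in `folder` smaller than LIMIT bytes."""
    return [
        name for name in sorted(os.listdir(folder))
        if name.lower().endswith(".jpg")
        and os.path.getsize(os.path.join(folder, name)) < LIMIT
    ]
```

Anything the function returns could be re-shot, or at least noted as a likely source of murkiness, before running the batch through 123D Catch.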


Overall though I am happy with this method. If the models were of extreme importance I wouldn’t even consider using the iPhone camera, but for simple and less important models it is perfect. Being able to capture images in a matter of minutes and have them uploaded directly to my Dropbox is very practical, and on more than one occasion I’ve been caught without my camera, so it is a great alternative.

Viewing Photogrammetric Models on iPhone/iPad


Until very recently I looked at the iPhone and the iPad with a pinch of scepticism, as I believed them to be simply less powerful laptops, mainly used for games and the occasional note taking. I’ve always been a Windows user, but last month I was given an old iPhone and I’m becoming more and more convinced of the effectiveness of Apple products, especially with regards to 3D modelling and photogrammetry.

The first thing I tried was the 123D Catch app, which I am very pleased with. Unfortunately I don’t have 3G, so the great advantage of being able to create models wherever I am is lost on me. Still, regardless of personal use, it is truly a plus, and the camera itself is good enough to capture the necessary level of detail.

The one thing that got me thinking, though, was the possibility of carrying my collection of models with me, so I have something to show when talking to people about photogrammetry. Many times in the last two months I’ve had to bring my laptop on site to show some results, and every time I risked it getting broken. As my phone is much easier to protect, I realised that if I could get my models onto it, I could save myself the cost of a new laptop.

Therefore I started looking through all the different apps available, both free and commercial. Of all the ones I found, the one I’m most pleased with is MeshLab for iOS, which is derived from the MeshLab I use on PC.


Yesterday I went through the main flaws of the PC version, but the app is actually the best there is. It allows you to open OBJ files with textures via mail or Dropbox, by placing them in a .zip archive, and then displays them in a typical MeshLab environment. The texture support is the deciding factor, as textures are what give me a lot of problems in other apps. The navigation tools are easy and intuitive, and you can change the lighting with a single tap, highlighting certain areas. Finally, it doesn’t require an internet connection, which is ideal for my iPhone.
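Packing a model for the app is a one-step job. A sketch of how the .zip could be put together, bundling an OBJ with its material and texture files (the filenames here are hypothetical, not from an actual model):

```python
import os
import zipfile

def pack_model(zip_path, files):
    """Write each file into a flat .zip archive, ready to mail or put in Dropbox."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in files:
            # Store only the base name so the OBJ can find its .mtl and
            # texture next to it when unpacked by the app.
            zf.write(path, arcname=os.path.basename(path))

# Hypothetical example:
# pack_model("sheep.zip", ["sheep.obj", "sheep.mtl", "sheep_texture.jpg"])
```

Keeping the archive flat matters because the OBJ references its material file by name, so the three files need to sit side by side.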

The only disadvantages I can see are that it does crash when opening large files, which is rarely a problem but annoying in some cases, and that the contrast is too high. The shadows it creates make the models seem less natural than they should be, and there is no way to remove them. Although not a major issue, it does make the models lose a little. I’m guessing future updates will improve this. Finally, there is no way to sort files into folders, which could be awkward if you have many different models.

I shall continue investigating apps and see what I can find. By the looks of it there is a lot of potential for 3D modelling and archaeology awaiting me.