Emphasising Inscriptions Using 123D Catch

One of the most interesting projects I have been working on over the past few months has been trying to emphasise inscriptions on artefacts using Photogrammetry. The theory is that if the model is accurate enough, it should be possible for a program to detect the different bumps in a surface and exaggerate them in order to make them easier to identify.

Image

My first attempt was with a lead stopper (which I have posted about before), which appeared to have some form of writing inscribed on its surface. Having made a model using 123D Catch, I ran it through Meshlab and tested many different filters to see if any of them gave me useful results. One in particular seemed to do exactly what I wanted, changing the colour of the texture based on the form of the surface. This is the Colourise curvature (APSS) filter, with a very high MLS value (I went for 50000) and using Approximate mean as the curvature type. The results revealed some form of inscription, clearer than in the photographs, but still quite murky.
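
For anyone who prefers to script this step, here is a minimal sketch of the same filter applied through pymeshlab, MeshLab’s Python bindings. The filter and parameter names follow older pymeshlab releases and may have been renamed since, and the file names are placeholders, so treat it as a starting point rather than a definitive recipe.

```python
# Sketch: applying "Colorize curvature (APSS)" via pymeshlab.
# Filter/parameter names follow older pymeshlab releases and may
# differ in newer ones; file names are placeholders.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh('lead_stopper.obj')  # model exported from 123D Catch

# Very high MLS filter scale and approximate mean curvature,
# matching the settings described above.
ms.apply_filter('colorize_curvature_apss',
                filterscale=50000,
                curvaturetype='ApproxMean')

ms.save_current_mesh('lead_stopper_curvature.ply')
```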

Image

In addition to these scarce results, other tests with different artefacts pointed towards a lucky coincidence rather than an actual method.

One of the main issues I was having was that Meshlab kept crashing when using certain filters or shaders, which meant I could test only some of the possible options. So when I made a model of a series of modern graffiti at Cardiff Castle, the results were also disappointing.

The other day, though, I happened to update my Meshlab program, as well as bumping into this interesting article, which gave me the exact solution I was looking for: http://www.dayofarchaeology.com/enhancing-worn-inscriptions-and-the-day-of-archaeology-2012/

One of the shaders that I had tried using, but that crashed every time, was the Radiance Scaling tool, which does exactly what I was aiming to do. I ran the graffiti model through it and the results are amazing. The original model stripped of textures is a very unclear, blurry surface, but after Radiance Scaling the graffiti come to life. The exact mechanics behind this tool, and hence its accuracy, are something I am still working on, but at least visually the results are there.

Image Image

If this proves to be an effective method, it would suggest that Photogrammetry is not simply technology for technology’s sake, but that it can actually be used to interpret archaeology in ways not available before.

Edit: I’d also like to draw attention to the fact that this model was made entirely using my iPhone.


VisualSFM: Pros and Cons

Image

I’ve been working with Photogrammetry for some years now, and although I use a great variety of programs to edit the 3D models, from Meshlab to Blender, when it comes to the actual creation of the models I have only ever used 123D Catch. This is partially due to the fact that I now feel very comfortable using this program, having learnt what it requires and how to achieve the perfect model, but also due to the great quality and simplicity that 123D Catch offers.

Only recently have I ventured out of my comfort zone to explore other Photogrammetry programs, to see if any can compare with, or even replace, 123D Catch.

I had a spell with Agisoft Photoscan, with interesting but limited results, so now I have turned my attention to another piece of freeware called VisualSFM. At present I have only tried out a few things, so my opinions will certainly change in the future, but here are the pros and cons I have found up to now (compared to 123D Catch):

PROS

  • It’s multiplatform: only a few weeks ago a user commented on one of my posts asking me for alternatives to 123D Catch, as he couldn’t use it on his Mac. VisualSFM is not restricted to Windows and an iPad app, but supports all major systems, making it a great tool for all those who don’t have a PC but still want to try out Photogrammetry.
  • It allows control of the process: one of the good things about 123D Catch is that it is easy to use, but this comes at the expense of more expert users. Uploading images and getting results with a single click is great, but it’s difficult to understand what is actually happening in between. VisualSFM instead guides you through all the steps, so if anything goes wrong you can pinpoint the problem, or you can understand which photos work better for research purposes (see the sketch after this list).

Image

  • It works offline: many times I have found myself slowed down by a weak internet connection. With VisualSFM no connection is needed, which also means I can create models on site without Wi-Fi. It makes the whole process more efficient and means I spend less of my free time working on models.
  • The camera placement is more accurate: this one is still being tested, but up to now I have had no problems with cameras being placed in locations they are not meant to be in. With 123D Catch, often a single photo stitches in the wrong place and causes the entire model to malfunction. With VisualSFM this doesn’t appear to be the case.
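
As a rough illustration of that step-by-step control, here is a sketch that drives VisualSFM’s documented command-line mode from Python. The paths are placeholders, and I am assuming the ‘sfm+pmvs’ switch described in the VisualSFM documentation, which chains feature matching, sparse reconstruction and dense reconstruction in one pass.

```python
# Sketch: running the VisualSFM pipeline from Python.
# Paths are placeholders; 'sfm+pmvs' runs feature matching,
# sparse SfM and dense (PMVS/CMVS) reconstruction in sequence.
import subprocess

images = 'photos/'           # folder of input photographs
output = 'model/result.nvm'  # sparse reconstruction and camera poses

subprocess.run(['VisualSFM', 'sfm+pmvs', images, output], check=True)
```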

CONS

  • Fewer points: I compared a few models made with the same photographs by the two programs. The results suggest that while VisualSFM’s points may be placed more accurately, there are far fewer of them, making the overall model itself less accurate. In the pictures, top is VisualSFM, bottom is 123D Catch.

Image

Image

  • Still haven’t finished a model: once the point cloud is created, VisualSFM has finished its job and it becomes Meshlab’s responsibility to actually recreate the surfaces and reattach the textures (see the sketch after this list). Up to now I have not managed to do this. I’ve talked about Meshlab before, but in short it crashes and malfunctions all the time. It took me days to recreate the surface the first time, as the program refused to do it, and attaching the texture is still something I can’t seem to manage.
  • Needs user control: with 123D Catch you can leave the program running and return to a finished product. With VisualSFM you have to constantly interact with the program, meaning you can’t multitask.
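
For what it’s worth, this is the post-processing step as I understand it, sketched with pymeshlab rather than the Meshlab GUI. The filter names follow older pymeshlab releases and may have changed since, and the file names are placeholders, so take it as an outline of the workflow rather than a tested recipe.

```python
# Sketch: turning a VisualSFM/PMVS point cloud into a surface with
# pymeshlab. Filter names follow older releases and may have been
# renamed; file names are placeholders.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh('model/result.ply')  # dense point cloud from PMVS

# Screened Poisson needs per-point normals, estimated from neighbours.
ms.apply_filter('compute_normals_for_point_sets')
ms.apply_filter('surface_reconstruction_screened_poisson')

# Transfer the point cloud's vertex colours onto the new surface.
ms.apply_filter('vertex_attribute_transfer', sourcemesh=0, targetmesh=1)

ms.save_current_mesh('model/surface.ply')
```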

Overall, it’s got potential. It’s not going to replace 123D Catch any time soon, but I am still going to try different things out to see how it reacts and find any advantages. For a full description of how to create models using VisualSFM visit here: http://www.academia.edu/3649828/Generating_a_Photogrammetric_model_using_VisualSFM_and_post-processing_with_Meshlab

Image

Photogrammetric Model Made With iPhone 4S

Sheep 1

I’ve experimented before with using my iPhone to create Photogrammetric models (not through the app, just taking the photos and running them through the Windows version of 123D Catch), with interesting but not perfect results. The other day, however, I found myself with a nice, complete, in situ sheep skeleton and no camera, so I took the opportunity to test the technology once again.

I took 49 photos in very good uniform shade, going round the skeleton at first and then concentrating on tricky parts, like the head or the ribs. I then ran them through 123D Catch and found that almost all of them had been stitched. I think the lighting really did the trick, as it created a really nice contrast between the bones and the ground. The photos were taken just as the sun had set, so it was still very light, but with no glare.

sheep 5 sheep 4

The skeleton itself looks extremely good compared to some of my earlier tests. It can be viewed here in rotatable 3D: https://sketchfab.com/show/b0ef1638d4714fcdab59c040cdb46923
I particularly like the relatively sharp edges that I really couldn’t achieve with the other models, and by looking at the point cloud I found it to be quite accurate regardless of the textures. In addition to that, it coped excellently with the rib that pokes out of the ground and with the pelvis, both of which I was absolutely sure it would have a problem with. Overall I’d say the model is nearly as good as some of the models I have made with a standard camera, and I think the potential is definitely there.
The only issue I have with using the iPhone camera is that it’s still an unreliable method. I tried replicating the results today, as the skeleton had been cleaned further, but the new model is blurrier, again probably due to slightly less ideal lighting conditions. Therefore I would still use my camera as much as possible, and save the iPhone for those situations in which I find myself unprepared.

sheep 2

A Theoretical Approach to Photorealistic 3D Video: the Future of Film and Gaming?

Generally I am not a big fan of theoretical issues, especially when it comes to something as practical and visual as 3D modelling. This, however, is something I cannot really experiment with practically (or within an acceptable time frame), so for now it has to remain in the realm of theory. It is also not strictly archaeological, although I’m sure some applications must come from it.

What I mean by Photorealistic 3D Video is taking the still photogrammetric models you’ve seen before and applying movement to them. This can be done by using either stop motion or a combination of cameras to acquire the original footage, then modelling individual frames and putting the frames in a sequence.

Image

The acquisition of footage depends on what you are trying to achieve. If the object is itself still and is going to be moved frame by frame (like traditional stop motion animation), then a single camera is required. Instead of taking a single photo of the scene, as in standard animation, a series of images would be taken to make a Photogrammetric model, as explained previously on this blog. If instead the object is already in motion, for example an individual acting out a scene, the trick is to use a large number of video cameras that surround the object, using the same positions described before (a good example of this can be seen here http://www.webpronews.com/get-a-3d-print-of-yourself-in-texas-2013-08). From all the cameras, each frame is then isolated, as if photographs of the scene had been taken at the same moment from all angles. With either method, the result should be a large number of frames (24 per second or more depending on format), each of which is made up of 20 to 30 images.

The second step is the creation of the models. This can be done using 123D Catch or any other Photogrammetric software. Each series of images constituting a frame is thus transformed into a single rotatable 3D model.
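
In practice this would be a batch job, something along these lines. The folder layout and the build_model function are entirely hypothetical (123D Catch has no public batch API that I know of), so this is only to illustrate the scale of the task.

```python
# Sketch: one photogrammetric model per frame of footage.
# build_model is a hypothetical stand-in for whatever
# photogrammetry pipeline is actually used.
from pathlib import Path

def build_model(image_dir: Path, out_path: Path) -> None:
    """Hypothetical wrapper around a photogrammetry tool."""
    raise NotImplementedError

frames = sorted(Path('footage').glob('frame_*'))  # one folder per frame
for i, frame_dir in enumerate(frames):
    # Each folder holds the 20-30 simultaneous views of one instant.
    build_model(frame_dir, Path(f'models/frame_{i:04d}.obj'))
```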

Then all the frame models are run through other software. At present I am leaning towards gaming software, but video editing or animation software may be more suitable, so possible options are Unity 3D, Maya or After Effects. Some alignment will have to take place, but superimposing the models on top of each other and making a single one visible at a time should create an animation effect, again like stop motion. This is the part I am most unsure about, as it is quite a demanding task, which may be far too complex for computers at this time. Still, in the future it should be more than possible.
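
To make the visibility-toggling idea concrete, here is a bare-bones sketch of the playback loop. FrameModel is a hypothetical stand-in for an engine object (a Unity GameObject, say); a real implementation would live inside the engine’s own update loop, so only the sequencing logic here is meaningful.

```python
# Sketch: stop-motion playback by cycling model visibility.
# FrameModel is a hypothetical stand-in for an engine object.
import time
from dataclasses import dataclass
from pathlib import Path

FPS = 24  # one model shown per film frame

@dataclass
class FrameModel:
    path: Path
    visible: bool = False  # the engine would show/hide the mesh

models = [FrameModel(p) for p in sorted(Path('models').glob('frame_*.obj'))]

for m in models:            # one pass = one run of the animation
    m.visible = True        # show only the current frame's model
    time.sleep(1 / FPS)     # hold it for one frame interval
    m.visible = False
```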

Image

At this stage the result is a series of still models that play in sequence, giving the illusion of movement. This can then be combined with technology that is now appearing on the market. In particular it could be used with the Oculus Rift, which is soon going to revolutionise how 3D gaming works. By tracking the position of the user, it would be possible not only to see the Photorealistic Video, but also to move around it as if it were real. By combining more than one model, an entire scene could be created, meaning 3D films in which the user is actually present within the film could become a possibility, as well as uber-realistic videogames.

Image

Textures in Photogrammetry: How They Can Deceive Us

One of the advantages I find in Photogrammetry is that, unlike other methods such as laser scanning and regular 3D recording, the results are photorealistic. The models seem natural and precise due to the textures that are associated with the points, and aesthetically they are much more pleasing. In addition to this, it is amazing for archaeological recording, as all colours and shades are preserved as they were originally.

However, the more I experiment with this great technology, the more I realise that, as useful as this tool is, the textures sometimes give us a wrong idea of what is going on. Generally this causes no problems at all, but in some situations I have found myself trying to get things to work and failing, due to a false perception I had of the model.

A good example of this are these graffiti from Cardiff Castle.

Image

The model itself is not great. It was made with my iPhone camera, and I didn’t even use the maximum quality on 123D Catch; however, it does help me prove the point.

I spent quite a lot of time trying to test a theory of mine, which I’ve written about before: theoretically, using a filter or renderer on a photogrammetric model, it should be possible to emphasise inscriptions in order to make them clearer. I was originally trying the APSS filter, but I recently read articles suggesting the use of Radiance Scaling in Meshlab (I’m still experimenting with this, but results seem positive so far). Therefore I decided to test a number of filters on the graffiti model, as the ridges of the letters appeared to have been handled very well by the program. Even when rotating the model, the textures gave me the illusion that quite a few points had been found around each letter, points that I could manipulate to my advantage.

Having played around for some time with no real results, I decided to try removing the textures to manipulate the shadows, but having done that I noticed that the beautiful ridges that had appeared previously disappeared when the textures were removed, like so:

Image Image

Looking at it more closely, I noticed that the points that had been found were distributed basically evenly, rather than surrounding the letters as I first thought. As a result, the filters were not working because there was nothing for them to work on.
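
A quick way to check this numerically, rather than by eye, is to look at nearest-neighbour distances between the vertices of the untextured model. The sketch below uses trimesh and scipy, and the file name is a placeholder: if the spacing is roughly uniform, the points are evenly spread and there is no extra geometry around the letters for the filters to exploit.

```python
# Sketch: testing whether mesh vertices are evenly spread or
# clustered (e.g. around inscribed letters). File name is a
# placeholder.
import numpy as np
import trimesh
from scipy.spatial import cKDTree

mesh = trimesh.load('graffiti.obj')
verts = np.asarray(mesh.vertices)

# Distance from every vertex to its nearest neighbour
# (column 0 of the query result is the point itself).
dists, _ = cKDTree(verts).query(verts, k=2)
nearest = dists[:, 1]

print(f'{len(verts)} vertices')
print(f'spacing: mean {nearest.mean():.4f}, std {nearest.std():.4f}')
# A small std relative to the mean suggests an even spread,
# i.e. no concentration of points around the letters.
```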

Image Image

So even if the textures were providing much-needed aesthetic relief, and helping with the recording, for more specific tasks they only got in the way, creating illusions.

This, however, does not in any way limit the technology. The case shown was an extreme one, in which the circumstances caused a lack of points. Most models have a much larger number of points and a greater accuracy, which makes them suited to most situations. However, when pushing the limits of the technology it is important to remember that in the end the model is just a collection of points, and that these are where the potential lies.

First Photogrammetry Article Published


Image

I’m very glad to present you with my first (but not last) published article on the topic of Photogrammetry in Archaeology! The December edition of The Post Hole, which has recently been released, features a paper on “The use of Photogrammetric models for the recording of archaeological features”, which I wrote during the summer, and which I’m sure you will find of some interest.

It deals specifically with archaeological features on site, and it looks at accuracy, methodology and uses, especially when it comes to recording. The aim is to show that, far from being technology for technology’s sake, Photogrammetry can contribute greatly to our understanding of an archaeological site, as well as reinforce and improve traditional methods of recording such as section drawings and plans.

The article is based on a few sites I worked on, and that have featured on this website before, such as Ham Hill and Caerau.

This is however just scratching the surface of a technology that is now appearing more and more frequently in publications, and that will eventually become a fundamental part of archaeological recording.

Roman Villa Reconstructed In 3D

Based on the plan of an actual Roman villa, this is a fly-through of the model. It’s a way to explore this living area and get a more authentic feel of what it would have been like to actually live in Roman times.
The model was made using Google Sketchup, and the final project will see furniture and details added in to make it even more realistic. This, however, is the building at present, showing how archaeology can be brought to life using 3D modelling software.
A more detailed account on how this model was made can be found previously on this website.

Ham Hill Iron Age Skeletons Turn Digital

Image

Three of the skeletons found at the site of Ham Hill, Somerset during the 2013 excavation are now available to view online at the following links:

https://sketchfab.com/show/70d864c4736435710bc65b6f21d81c03

https://sketchfab.com/show/821565c7ce0b98e1b7764c73a9f07492

https://sketchfab.com/show/fa694aff0fb5949e2f396a5fb2da37b0

The skeletons were discovered during this year’s excavation carried out by the Cambridge Archaeological Unit and Cardiff University, at the site of an important Iron Age hill fort. They are only some of the many human remains found, some of which were carefully placed within a number of pits, while others were located symbolically within an enclosure ditch, often with body parts missing.

Image

The models themselves were made using Photogrammetry, specifically 123D Catch, which required very little time for quite good quality. The aim was to preserve a record of the skeletons in situ for further interpretation once they had been removed from the location in which they were discovered.

Given the complexity of the subject, the results seem very promising. Time was a big factor when making the models, as the skeletons had to be removed before damage occurred. Some of them were also in tight spots, which made it hard to access some of the standard angles, but overall this was counterbalanced by taking a larger number of photographs than normal (around 30 per skeleton). The lighting conditions also proved ideal, as there was an overcast sky, but also sufficient sunlight coming through to really emphasise the surfaces.

Image

For further information on the skeletons I suggest reading the article at http://www.bbc.co.uk/news/uk-england-somerset-23928612

Roman Villa Reconstruction Preview

Image

I have talked endlessly before on this blog about the use of Google Sketchup in the archaeological world, so pardon yet another example on the topic. I recently started recreating a typical Roman Villa using plans from a number of sites and any source of information I could find. The final plan is to not only create the structure itself, but also include many more details, such as furniture, statues, etc.

Having completed the main structure I thought I would share the results as they stand, as a sort of preview to the completed work, and explain some of the aspects of making the model. In the next couple of days I’ll also post a fly-through video which is currently rendering, to give an even better impression.

Image

This model was an interesting one to make, as it was more complex in some aspects than the ones I had done before, and it combined open and closed spaces, with equal importance given to both. The plans I found were very good for the ground floor, which is pretty accurate, but for the top floor there is a definite lack of information, mainly due to the lack of archaeological evidence. Therefore I had to resort to sketch reconstructions based on personal interpretation, which I am not usually fond of. Similarly, the roof and the inside of the rooms are mostly conjecture on my part, based however on ideas found in texts. Overall, then, the model is much more interpretive than, for example, the Parthenon model I made, but at the same time it is more useful, as the Parthenon is actually standing, while the villa is not.

Image

Something I noticed from making this model is the efficiency with which Sketchup deals with lighting. In the past I wasn’t a big fan of the lighting conditions, as I found that inside spaces were too dark and outside spaces were too bright; however, in this case I find that this is in no way an issue, possibly because we have both inside and outside. The rooms are still a bit dark, but with the addition of the external windows that I’m adding in the next phase they should be quite faithful to reality, while the internal courtyards are bright, but not unnaturally so. As a whole the results are quite satisfying, and when objects are placed within the model they will also look realistic because of this.

Image

Also, the rounded edges tool is still a favourite of mine, but I now use it less frequently. In large models some walls look more realistic with rounded edges, but not everything does. Door frames, for example, look equally good without, and given that the tool effectively adds many more lines to the model, there is really no need to round them off. For walls, I found that adding a slight slope at the bottom really makes them less blocky and much nicer to the eye. On a more practical note, creating components is still the greatest tip I can give with Sketchup. I found that making each floor and roof a separate entity made the model much easier to edit, as you can hide the upper floor when editing the lower one, and vice versa.

Image

As mentioned before, as soon as the animation finishes rendering I shall post a new update. I realise that recently I have been posting less and less, but I assure you it is only for practical reasons. I am currently involved in the writing of an archaeology-based radio show, which is taking up a lot of my spare time, as well as working on a number of sites. Also, these models do take their time to make, so I’d rather wait a bit and publish something good than put out many random posts. Finally, a few of the projects I have been working on have the disadvantage that I can’t actually publish any of the results, which means there are a few things I am doing that I can’t write about specifically. Therefore I apologise if sometimes it takes a bit longer to post something new.

Image

Using the iPhone Camera for Photogrammetry

I mentioned before that I recently received an iPhone 4S, and having been a strong supporter of Windows against Apple, I am slowly being converted. Apart from the great benefit of being able to carry my models around and show fellow archaeologists without risking the life of my laptop, I have started exploring the advantages of having a pretty good camera on me at all times.

By using the 123D Catch app it is possible to instantly create amazing models wherever you are, but how accurate are 3D models made using the iPhone camera? I don’t have the app itself due to a lack of 3G, but I have been going around site for the last week or so, taking a number of photographs and then processing them once I got home.

Once again I experimented with larger and smaller objects and features, comparing the results with those from regular SLR cameras. I can’t actually upload the images due to site restrictions, but I created a model of a toy car as an example. I followed the usual methods of recording, so as not to alter the results in any way.

Image

These are some of the points I have found:

Image stitching: comparing the number of images that stitched in normal models and in those done with the iPhone’s camera, there is a bit of a difference between the two. Especially with similarly coloured images, only some of the images are stitched together. This, however, happened only on a few occasions, and as such doesn’t constitute a major flaw.

Point cloud: the number of points within the models done with the iPhone seems to be equal to, if not greater than, that of the normal photographs. I believe this is because the iPhone adjusts the colours and lighting automatically and digitally, which makes the photographs more consistent. On the other hand, this also has the negative effect of artificially changing the images, playing with the contrast and colour balance, which affects the accuracy of the model.

Image

Textures: the textures in the iPhone models seem to be extremely good, probably due to the digital adjustment mentioned above. In this case I wouldn’t say this is a problem, and the results are quite bright and distinct, which is a good thing when analysing the models.

General look: this is the point I have the greatest issues with. The number of keypoints the program finds made me expect extremely crisp models, but they look to me much murkier than they should. The digital altering of the images, and the fact that the images are below 2 MB in size, make the models much less accurate, and the results suffer greatly.
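
Since file size seems to be one of the culprits, a simple pre-flight check can flag problem photos before they go anywhere near 123D Catch. The folder name and the threshold below are just illustrative (the 2 MB figure mirrors the observation above).

```python
# Sketch: flagging photos whose file size may be too low for
# accurate photogrammetry. Folder name and threshold are
# illustrative only.
from pathlib import Path
from PIL import Image

MIN_BYTES = 2 * 1024 * 1024  # roughly the 2 MB mark noted above

for photo in sorted(Path('iphone_photos').glob('*.jpg')):
    size = photo.stat().st_size
    with Image.open(photo) as im:
        width, height = im.size
    flag = '' if size >= MIN_BYTES else '  <-- below 2 MB, may hurt accuracy'
    print(f'{photo.name}: {width}x{height}, {size / 1e6:.1f} MB{flag}')
```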

Image

Overall, though, I am happy with this method. If the models were of extreme importance I wouldn’t even consider using the iPhone camera, but for simple and less important models it is perfect. The practical nature of being able to capture images in a matter of minutes and have them uploaded directly to my Dropbox is great, and on more than one occasion I’ve been caught without my camera, so it is a great alternative.