A few posts ago I talked about metadata and paradata. On that topic, just a few days ago I came across an interesting article by Dave Sims (1997), with references to Harrison Eiteljorg's work. Published in IEEE Computer Graphics and Applications, it provides an interesting insight into the history of 3D reconstruction in archaeology through early examples of modelling. More importantly, however, it offers an early critique of reconstruction techniques which is still applicable to this day.
One of the major problems of Visualisation (as well as other digital methodologies) is that it is not integrated into traditional archaeological skills. Computational archaeologists are pushing the limits of the tools at their disposal, demonstrating the capabilities of the software and showing how it can be used to record, analyse and present archaeological sites and artifacts. But there is a communication problem between traditional and computational archaeologists, and it stems from a deeper issue than a simple lack of understanding. Part of the problem is the wider archaeological community's reluctance to accept new digital technologies, but part of it lies in the way the results themselves are produced and presented (Hafer and Kirkpatrick 2009, Barnes 2010).
In order to be accepted by the wider community, a methodology must adhere to certain rules. First and foremost, research must be replicable (Hafer and Kirkpatrick 2009). Publications relating to digital technologies often omit the process that produced the results, creating ‘black boxes’ (Ducke 2012). There is nothing worse than a project that cannot be assessed and that offers nothing to future research. As mentioned before, Marwick points out that:
“This ability to reproduce the results of other researchers is a core tenet of scientific method, and when reproductions are successful, our field advances” (Marwick 2016, p.1).
Because the current publishing format doesn’t require (or even account for) digital tools, it is easy to get away with bad research: there is a loss of accountability. With 3D reconstructions the problem is compounded by the facade of truth they present. To the inexperienced, a model looks like the definitive interpretation, yet the process behind it is highly subjective (Gutierrez et al. 2006, Dell’Unto et al. 2013, Sylaiou et al. 2009).
This is the framework within which Sims’ article should be read. Even though it was written at a time when 3D reconstruction was still in its infancy, it shows a solid understanding of the core issues. More importantly, it offers an answer to our questions, in the form of a set of rules taken from Harrison Eiteljorg’s website (which unfortunately now seems to be a dead link). The rules are specific to 3D modelling, but they can be applied to other areas of Visualisation as well. The six rules are:
1. A good model can teach and entertain, but a poor model can only entertain. If it teaches, it misleads.
2. CG models that are part of scholarship research must meet the same scholarship demands as other work in a field.
3. A less accurate model used at small scale must do no more harm than a good sketch or hand-rendering.
4. Scholarly models cost more to create; sometimes that cost isn’t justified.
5. Since it doesn’t make sense to make a good and poor model of the same thing, users should be prepared to share models, to avoid duplication.
6. No model that misleads should be tolerated. Poor models with limited usage can be tolerated, but more flexible models, if they are inaccurate, can do more damage.
The key element I find in this text is the comparison between traditional and computational methodologies. It puts them on the same level, and holds them to the same standards.
Many other authors have since put forward solutions to this problem of accountability, including Dell’Unto et al. (2013) and the very important London Charter (Denard 2009). But it is interesting to note how, 20 years after Sims first articulated the issue, we are still struggling to implement the solutions. Until we find a way to make our work accountable and replicable, we will struggle to bridge the gap between computational and traditional archaeology.
References:
Dell’Unto, N., Leander, A. M., Ferdani, D., Dellepiane, M., Callieri, M., Lindgren, S. (2013). Digital reconstruction and visualisation in archaeology: case-study drawn from the work of the Swedish Pompeii Project. Digital Heritage International Congress pp.621-628.
Denard, H. (2009). The London Charter: for the computer-based visualisation of cultural heritage.
Ducke, B. (2012). Natives of a connected world: free and open source software in archaeology. World Archaeology 44:4 pp.571-579.
Gutierrez, D., Sundstedt, V., Gomez, F. and Chalmers, A. (2006). Dust and light: predictive virtual archaeology. Journal of Cultural Heritage 8 pp.209-214.
Hafer, L. and Kirkpatrick, A. E. (2009). Assessing Open Source Software as a Scholarly Contribution. Communications of the ACM Vol.52 No.12 pp.126-129.
Marwick, B. (2016). Computational Reproducibility in Archaeological Research: Basic Principles and a Case Study of Their Implementation. Journal of Archaeological Method and Theory pp.1-27.
Sims, D. (1997). Archaeological models: pretty pictures or research tools? IEEE Computer Graphics and Applications Vol.17 No.1 pp.13-15.
Sylaiou, S., Fotis, L., Kostas, K. and Petros, P. (2009). Virtual museums, a survey and some issues for consideration. Journal of Cultural Heritage.