mARChive: Sculpting Museum Victoria’s Collections

Sarah Kenderdine, University of New South Wales, Australia/Hong Kong


mARChive is a new interface to Museum Victoria's collections that allows for interactive access to eighty thousand collection records as a situated experience—inside a 360-degree three-dimensional exhibition display screen. This paper describes the theoretical rationale and the collaborative and design process undertaken to create mARChive.

Keywords: interaction design, 3D, heterogeneous collection, interactive narrative, embodiment

1. Introduction

The rapid growth in participatory culture embodied by Web 2.0 has seen creative production overtake basic access as the primary motive for interaction with databases, archives, and search engines (Manovich, 2008). Intuitive exploration of diverse bodies of data allows users to find new meanings in, and new applications for, that data, rather than simply access the information (NSF, 2007). This possibility for creative engagement poses significant experimental and theoretical challenges for memory institutions and their storehouses of cultural archives (Del Favero et al., 2009). The structural model that has emerged from the Internet exemplifies a database paradigm in which accessibility and engagement are constrained to point-and-click techniques, each link a node of interactivity. Indeed, more expressive potentials for interactivity and alternative modalities for exploring and representing data have been passed by, largely ignored (Kenderdine, 2010). In considering alternatives, this paper explores situated experiments emerging from the expanded cinematic that articulate, for cultural archives, a reformulation of database interaction, narrative recombination, and analytic visualization.

The challenges of what can be described as cultural data sculpting, following Zhao and Vande Moere's 'data sculpting' (2008), are currently being explored through the mARChive project. In situ and in-the-round, mARChive (Figure 1) is a new interface to Museum Victoria's collections, resulting from an Australian Research Council Linkage grant between the iCinema Research Centre, University of New South Wales, and the Museum (2011–2014). The grant, entitled Narrative reformulation of multiple forms of databases using a recombinatory model of cinematic interactivity, aims to investigate visual searching and emergent narratives by integrating the immense archive of museum collection data in a 360-degree three-dimensional space. The project, realized as mARChive, allows for interactive access to a data cloud as a situated experience—inside the Museum.

Figure 1: mARChive data browser: thematic display distributed by time. Photo: Volker Kuckelmeister (2014).

For the past fifteen years, convergence has been a key driver of innovation in cultural agencies. Museums occupy a somewhat unique position in many fields, at a crossroads between traditional practice and leading-edge experimentation. A key area of opportunity that Museum Victoria and its partners identified is the intersection between multimedia art forms and traditional museum exhibition practice. Importantly, Museum Victoria has actively encouraged the merging of artistic and visualisation technologies to create new interaction techniques that tell stories and present ideas in unique ways. Partnering with universities, artists, and technical experts has resulted in a rich mix of skills and disciplines. Collaboration and partnership are the future, as we are connected ever more tightly to each other and traditional boundaries are removed. This is challenging work, but the rewards are many and the outcomes more than we could ever achieve individually. Museum Victoria has collaborated with iCinema on several projects that have won awards, including the 2013 ICOM-Australia award for the PLACE-Hampi project (Linkage Project 2006 LP0669163) and the 2011 MUSE Award for Dynamic Earth (Linkage Project 2011 LP100100466).

2. Introducing mARChive

mARChive is developed using the Advanced Visualization and Interaction Environment (AVIE), UNSW iCinema Research Centre's landmark 360-degree stereoscopic interactive visualization environment. The base configuration is a cylindrical projection screen 4 meters high and 12 meters in diameter, a 12-channel stereoscopic projection system, and a 7.1 surround-sound audio system. AVIE's immersive mixed-reality capability articulates an embodied interactive relationship between the viewers and the projected information spaces (Figure 2). It uses an active stereo projection solution and camera tracking. (For a full technical description, see McGinity et al., 2007.)

Figure 2: The Advanced Visualization and Interaction Environment (AVIE), iCinema.

The researchers have developed an algorithmic application that ingests and visualizes eighty thousand heterogeneous digital records of objects from Museum Victoria collections selected out of a total collection of 16 million. mARChive creates a navigable interactive data landscape for visitors inside the Museum’s permanent 360-degree 3D display system. The intention of the project is to give users an intuitive and creative platform to engage with the wealth of cultural materials found at the Museum, when only a fraction of these objects are on display.

Once inside the display system, with 3D glasses on and tablet in hand (Figure 3), a visitor can select a single thematic collection from the eighteen themes available and browse the tens of thousands of images associated with that theme. The themes are diverse and include: Childhood & Youth, Indigenous, Cultural Diversity, Horology, and Medicine in Society. The image ‘cloud’ for each theme is distributed by time, around the 360-degree screen. Any single image may be selected and brought out of the data cloud at a much larger scale. Each image is then associated with a description and title. All images can be ‘zoomed in,’ effectively magnifying the content to give full range to the high resolution of the images (Figure 4). Through metadata (database relationships), each image is also related to many other images and across different themes. This matrix of dynamic relationships is visualized in response to the user’s actions. The mARChive application is designed as a single-user, multi-spectator interaction paradigm. Visitors use a tablet interface to elicit actions on the screen. The interactive data-scape is amplified by specific sonic reflections created from the Museum’s archive and in response to the user’s actions.
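The time-distributed arrangement of each theme's image cloud around the 360-degree screen can be sketched as follows. This is a minimal illustration, not the project's actual code; the `Record` fields, the date range, and the row-stacking scheme are all assumptions made for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class Record:
    """A minimal stand-in for a collection record (hypothetical fields)."""
    title: str
    theme: str
    year: int

def layout_theme(records, theme, year_min=1850, year_max=2014, rows=8):
    """Distribute one theme's records around the 360-degree screen by date.

    Each record's angle around the cylinder is proportional to its position
    in the time range; records sharing a year are stacked into vertical rows.
    """
    selected = sorted(
        (r for r in records if r.theme == theme), key=lambda r: r.year
    )
    span = year_max - year_min
    counts = {}  # how many records already placed at each year
    placements = []
    for r in selected:
        angle = 360.0 * (r.year - year_min) / span  # degrees around the screen
        row = counts.get(r.year, 0) % rows          # stack same-year items
        counts[r.year] = counts.get(r.year, 0) + 1
        placements.append((r.title, angle % 360.0, row))
    return placements
```

In practice the real application must also handle undated records, uneven temporal density, and stereoscopic depth, but the angular time mapping above captures the basic layout idea.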

Figure 3: mARChive data browser: current interface. Photo: Volker Kuckelmeister (2014).

Figure 4: mARChive data browser: image magnification. Photo: Volker Kuckelmeister (2014).

Through an infinite set of permutations, visitors can navigate unfolding narratives in the data landscape that are based on their specific, and emerging, interest. The application develops a new visual paradigm for the social and collaborative exploration of big datasets, inside the Museum. It is a situated participatory and collective framework that distinctly contrasts mARChive with cultural datasets found on the Internet. It re-presents a total view of the Museum’s collections available to visitors for the first time, taking advantage of high-resolution digital data existing in the archive.

3. Museum Victoria’s collections

mARChive sits within a contextual history of collecting and the Museum’s own digitization projects. Museum Victoria holds some seventeen million collection items, acquired over more than 150 years. Contained within are some of the most significant collections of Australian indigenous cultural material in the world, extensive natural science collections with a particular strength in material from southeastern Australia and surrounding waters, and an internationally unique collection representing Victoria’s historical and technological developments.

From its beginnings in 1854 as the National Museum of Victoria, the Museum initially collected Victorian and Australian animals, rocks, minerals, and fossils, as well as acquiring significant natural history collections from overseas. In 1870, the Industrial and Technological Museum (subsequently the Science Museum of Victoria) was established to collect and display items relating to science and technology. At about the same time, the National Museum of Victoria also began to actively acquire Australian Aboriginal cultural artefacts. Over a century later, in 1983, the National Museum of Victoria and the Science Museum of Victoria amalgamated to form the Museum of Victoria, and social history was added as a further collecting focus. The collections are testimony to the endeavors and decisions of the generations of curators employed at the Museum since its early beginnings, and include many items and discrete collections of international, national, state, and local scientific or cultural significance—the natural science specimens and the famous racehorse Phar Lap, for example.

Collections are a museum's core asset and what differentiates museums from exhibition and other display venues and centres. Online delivery and presentation of collections is challenging and rewarding. Large legacy collections pose enormous issues for museums; the process of digitization, both creating a catalogue record and imaging the item, is a great challenge with limited resources. Still, there is little doubt that, once completed, many uses can be made of the digital records created. Museum Victoria has identified approximately 2 million items and collections from 17 million for digitization. Currently, approximately 1.4 million items have machine-readable records of varying quality. Perhaps 560,000 records are considered high quality, with extensive metadata and supporting documentation including images and some video.

The History and Technology Collections Online Project, completed three years ago, made sections of the Museum's collections available in a rich and meaningful way for the first time. The History and Technology collections comprise some 260,000 objects, 300,000 images, and 42,000 items of trade literature. Considerable organizational change was required to deliver that project, and it has produced lasting outcomes and cultural change within the Museum's curatorial areas. Collection items are the Museum's most basic content building block, online and in exhibitions. Having this rich resource online made the mARChive project possible. mARChive now utilizes the digital outputs of the collections online project to produce a new way of interacting with the collection in a unique space, using new technology and ideas to allow visitors to the Museum to see the collection through fresh eyes, make discoveries, and derive meaning in exciting and unexpected ways.

Often, museum collection data does not have strongly individualistic characteristics; in the case of the History and Technology collection, there are few iconic items. Instead, the strength of the collection lies in the sheer number, multiplicity, and variety of items. The primary challenge for mARChive was to bring order and navigability to more than eighty thousand images and texts in an enjoyable, cohesive way. One of the main challenges in the digitization of collections relates to scale: the following section outlines the constraints, opportunities, and affordances of taking digital collections to the big screen. These are increasingly common challenges in the world of heterogeneous data visualization, big data, and visual analytics.

4. Data visualization in immersive systems

Research into new modalities of visualizing data is essential for a world that produces and consumes digital data at unprecedented rates (Keim et al., 2006; McCandless, 2010). Existing techniques for interaction design in visual analytics rely upon visual metaphors developed more than a decade ago (Keim et al., 2008), such as dynamic graphs, charts, maps, and plots. Currently, interactive, immersive, and collaborative techniques to explore large-scale datasets lack adequate experimental development essential to the construction of knowledge in analytic discourse (Pike et al., 2009). Recent visualization research remains constrained to two-dimensional small-screen-based analysis and advances interactive techniques of “clicking,” “dragging,” and “rotating” (Lee et al., 2010; Speer et al., 2010). Furthermore, the number of pixels available to the user remains a critical limiting factor in human cognition of data visualizations (Kasik et al., 2009). The increasing trend towards research requiring ‘unlimited’ screen resolution has resulted in the recent growth of gigapixel displays. Visualization systems for large-scale data sets are increasingly focused on effectively representing their many levels of complexity. This includes tiled displays such as HIPerSpace at Calit2 and next-generation immersive virtual reality systems such as StarCAVE at UC San Diego (DeFanti et al., 2009) and Allosphere at UC Santa Barbara.

In general, however, the opportunities offered by interactive and 3D technologies for enhanced cognitive exploration and interrogation of high-dimensional data still need to be realized within the domain of visual analytics for digital humanities (Kenderdine, 2010; Kenderdine & Hart, 2011). The project described in this paper takes on these core challenges of visual analytics inside AVIE to provide powerful modalities for an omni-directional (3D, 360-degree) exploration of multiple heterogeneous datasets, responding to the need for embodied interaction: knowledge-based interfaces, collaboration, cognition, and perception (as identified in Pike et al., 2009). A framework for ‘enhanced human higher cognition’ (Green, Ribarsky, & Fisher, 2009) is being developed that extends familiar perceptual models common in visual analytics to facilitate the flow of human reasoning. Immersion in three-dimensionality representing infinite data space is recognized as a prerequisite for higher consciousness and autopoiesis (Maturana & Varela, 1980), and promotes non-vertical and lateral thinking (see Nechvatal, 2009). Thus, a combination of algorithmic and human mixed-initiative interaction in an omni-spatial environment lies at the core of the collaborative knowledge creation model explored.

The projects discussed here also leverage the potential inherent in a combination of ‘unlimited screen real estate,’ ultra-high stereoscopic resolution, and 360-degree immersion to resolve problems of data occlusion and distribute the mass of data analysis in networked sequences revealing patterns, hierarchies, and interconnectedness. The omni-directional interface prioritizes ‘users in the loop’ in an egocentric model (Kasik et al., 2009). The projects also expose what it means to have embodied spherical (allocentric) relations to the respective datasets. These hybrid approaches to data representation also allow for the development of sonification strategies to help augment the interpretation of the results. The tactility of data is enhanced in 3D and embodied spaces by attaching audio to its abstract visual elements, and has been well defined by researchers since Chion and others (1994). Sonification reinforces spatial and temporal relationships between data (e.g., the object’s location in 360-degree/infinite 3D space and its interactive behavior; see, for example, West et al., 2008). The multichannel spatial array of the AVIE platform offers opportunities for creating a real-time sonic engine designed specifically to enhance cognitive and perceptual interaction, and immersion in 3D. It can also play a significant role in narrative coherence across the network of relationships evidenced in the datasets.

5. Trajectories of mARChive

The challenge of displaying and making sense of more than eighty thousand objects simultaneously, drawn from eighteen thematic areas across diverse collections including indigenous material, natural sciences data, and social history and technology, presented both theoretical and practical challenges. mARChive was designed and conceived before Gallery One in Cleveland (which displays three thousand objects), and at the time there were few models for displaying large numbers of museum collection objects on the big screen. On the Web, SFMOMA’s Artscope, which displays fewer than seven thousand objects, seemed the most successful, while the National Gallery of Australia’s experimental Web interface to the prints + printmaking collection was inspirational. In industry, Microsoft’s Photosynth and Seadragon provided inspiration.

Among the most significant precursors to mARChive are the T_Visionarium projects (2008) developed by iCinema. T_Visionarium II (produced as part of the ARC Discovery grant ‘Interactive Narrative as a Form of Recombinatory Search in the Cinematic Transcription of Televisual Information’) uses twenty-four hours of free-to-air broadcast TV footage from seven Australian channels as its source material. This footage was analyzed by software for changes of camera angle, and at every such change in a given program (whether a dramatic film or a sitcom), a cut was made, resulting in a database of twenty-four thousand clips of approximately 4 seconds each (Figure 5). Four researchers were employed to hand-tag each 4-second clip with somewhat idiosyncratic metadata related to the images shown, including emotion, expression, physicality, and scene structure, with metatags including speed, gender, colour, and so on. The result is five hundred simultaneous video streams looping every 4 seconds and responsive to a user’s search.

Figure 5: T_Visionarium II in AVIE © UNSW iCinema Research Centre.

T_Visionarium can be framed by the concept of aesthetic transcription; that is, the way new meaning can be produced when content moves from one expressive medium to another. The digital allows the transcription of televisual data, decontextualising the original and reconstituting it within a new artifact. As the archiving abilities of the digital allow data to be changed from its original conception, new narrative relationships are generated between the multitude of clips, and meaningful narrative events emerge through viewer interaction in a transnarrative experience where gesture is all-defining. The segmentation of the video reveals something about the predominance of close-ups, the lack of panoramic shots, and the heavy reliance on dialogue in TV footage. These aesthetic features come strikingly to the fore in this hybrid environment. The spatial contiguity gives rise to new ways of seeing, and of reconceptualising in a spatial montage (Bennett, 2008). In T_Visionarium, the material screen no longer exists. The boundary of the cinematic frame has been violated, hinting at the endless permutations that exist for the user. Nor does the user enter a seamless unified space; rather, he or she is confronted with the spectacle of hundreds of individual streams.

Another cloud harnessed for the situated user is ECLOUD WW1, an interactive spatial browser for the rediscovery of personal cultural data based on the crowdsourced World War I archives of Europeana. ECLOUD WW1 (2012) is a custom-designed 9-metre by 3.5-metre interactive 3D projection environment and application developed by the Applied Laboratory for Interactive Visualization and Embodiment (ALIVE), City University of Hong Kong, in partnership with Europeana’s 1914-1918 project (Figure 6). The installation activates over 40,000 images of war memorabilia ascribed to 2,500 individual stories, crowdsourced between 2009 and 2013 in an ongoing project undertaken across Europe. The installation instantaneously aggregates the digital imagery and its associated metadata within a unique immersive viewing experience. The visualization strategies engaged in ECLOUD WW1 signal opportunities for new curatorial practices and embodied museography, redeploying Internet data in situated museum settings. ECLOUD WW1 applies an integrative pluralist approach to the juxtaposition of image and memory. Its participatory framework demonstrates the shift from single-source authorship of a linear heritage to shared authorship between user, algorithm, and data, developed through strategies of recombinatory navigation and interactive narrative. The parallel presentation of historic objects in combination with subjectively collected stories presents the opportunity to redefine the creation of cultural memories. This inter-generational project also offers occasions for generative legacy-building in younger generations as the centenary of WWI is commemorated in 2014 (Kenderdine & McKenzie, 2013).

Figure 6: ECLOUD WW1 © Applied Laboratory for Interactive Visualization and Embodiment, City University of Hong Kong.

Data sources

Using Museum Victoria’s History and Technology collections online project as the data source provided advantages and also introduced new challenges. The source data is structured for curatorial purposes and for presentation online assuming delivery via a browser. Delivery via a 12-metre by 4-metre high-resolution 3D cylinder had not been accounted for in the original data preparation process. The data is rich with extensive information about the objects and high-resolution images for over 50,000 of the 80,000 objects. Additional collections from the natural sciences (1,500 species profiles from the Field Guide Project) and indigenous cultures (1,000 objects from the First Peoples Exhibition) were added to enhance the research opportunities of combining disparate data sources.

For this project, the data schema built into the online collections system was exported as XML. It was essential to create a framework that maintained the integrity of the Museum's collections efforts while still meeting the new demands of displaying and ordering on the AVIE screen. A mapping of the data was undertaken, which simplified the descriptive content and additional fields (of the 208 possible) and concentrated the characteristics of the data into the eighteen themes. Because the exported data had previously been cleaned for use online, this process produced only a few errors.
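The ingest-and-map step described above can be sketched as a small transformation over the XML export. The element names (`record`, `category`, etc.) and the category-to-theme lookup here are invented for illustration; the real export has 208 possible fields and its own schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical lookup concentrating source categories into display themes.
THEME_BY_CATEGORY = {
    "Clocks & Watches": "Horology",
    "Toys & Games": "Childhood & Youth",
    "Medical Instruments": "Medicine in Society",
}

def ingest(xml_text):
    """Reduce each exported record to the handful of fields the browser
    needs, assigning one of the display themes via a category lookup."""
    root = ET.fromstring(xml_text)
    records = []
    for item in root.iter("record"):
        category = item.findtext("category", default="")
        records.append({
            "id": item.findtext("id"),
            "title": item.findtext("title", default="(untitled)"),
            "date": item.findtext("date", default=""),
            "theme": THEME_BY_CATEGORY.get(category, "Uncategorised"),
        })
    return records
```

The point of such a mapping is that the heavy curatorial schema stays untouched at the source, while the display application works with a deliberately flat, theme-keyed subset.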

Design issues

The primary user challenge was how a non-expert audience, expecting an enjoyable experience, could access this huge volume of data in an unfamiliar interactive environment. Discussions on artificial-intelligence-assisted searching and modes for interaction preoccupied the first year of the project as we all grappled with the complexity of the problem. Multiple simultaneous users were considered and, after considerable discussion, decided against because of the confusing outcomes the cylindrical AVIE space would produce: the "in the round" navigation inherent in the design of the display system uses the sweep of the screen to order information in a single flow across more than eight thousand pixels. Allowing multiple users effectively meant breaking the one large screen into a series of smaller screens, taking away the key benefit of "unlimited" screen real estate and negating the wonderful immersive experience that AVIE allows.

The programmers interpreted the design team's decisions, translating them onto the ~9,000-pixel by 1,200-pixel screen. Interaction design and the layout of information on screen were key problems that we had to solve (Figure 7). The design process entailed putting a multidisciplinary group of people (maximum six) in a room for three to four hours at a time until we arrived at practical and usable options for the programmers to run with. This iterative design and development process, undertaken three or four times over approximately twelve months, has resulted in an elegant and at times striking visual interface and user paradigm as users navigate the tens of thousands of objects in this small subset of the Museum Victoria collections.

Figure 7: Whiteboard during the preliminary design process. Photo: Tim Hart (2013).

This image from the whiteboard shows how the basic layout of thematic rings representing a collection arranged initially by time proved a simple and effective way of “cramming” the entire dataset onto the screen at once. Analogies to the transporter rings from Stargate (the science-fiction series) possibly help explicate the design (Figures 8 and 9). The notion of an elevator allowing you to move freely through the collections provides a very compelling and immersive way of “getting a feel” for the scope and size of the collection.

Figure 8: mARChive data browser: thematic rings. Photo: Volker Kuckelmeister (2014).

Figure 9: Stargate transportation rings.

The single-user interface we have designed is simple and uses standard multi-touch gestures to remove as many barriers as possible, allowing someone with no experience of the interface to use it effectively after a very short time. The 3D nature of the AVIE system also allows users to delve deeper to access the many layers of information contained in the collection data. The experience is one of immersion, movement, and wonder as users explore and follow objects thematically, temporally, or through their interconnectedness. For the non-museum professional, it conveys a sense of the scale and scope of large museum collections, usually for the first time: a realization of Raiders of the Lost Ark's final scene, if you like! (Figure 10).

Figure 10: Raiders of the Lost Ark.

Handling the text descriptions associated with each object was challenging. Most descriptions contained over 250 words and presented serious problems when displayed on the screen in AVIE. The team considered many options, most of which required impractical, massive rewriting of descriptions. One of the parameters for the project was that museum collection data could be "simply" plugged into the system and displayed—collaboration and data sharing also demand simplicity in the data architecture. The solution was to generate a Wordle-like abbreviation from the description text—simple and elegant, it freed the screen design from being text-heavy and difficult to navigate, allowing a more open and image-rich design (Figure 11).
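The Wordle-like abbreviation can be sketched as a word-frequency reduction over the description text. This is an illustrative simplification, not the project's implementation; the stopword list and cut-offs are assumptions, and the real application would additionally scale each word for on-screen display.

```python
import re
from collections import Counter

# A small illustrative stopword list; a production system would use a
# fuller list tuned to the collection's descriptive language.
STOPWORDS = {"the", "a", "an", "and", "of", "in", "to", "is", "was",
             "with", "for", "on", "by", "this", "that", "it", "as",
             "at", "from", "are", "were"}

def summarise(description, top_n=12):
    """Reduce a long object description to its most frequent significant
    words, returned with counts so they can be sized proportionally."""
    words = re.findall(r"[a-z]+", description.lower())
    counts = Counter(w for w in words
                     if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(top_n)
```

A 250-word description thus collapses to a dozen weighted words, which is what frees the screen for an image-rich layout.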

Figure 11: mARChive data browser: Wordle from item descriptions. Photo: Volker Kuckelmeister (2014).

6. Research significance

The approaches taken in T_Visionarium II, ECLOUD WW1, and mARChive to the visualization of cultural data and socialized search interaction are qualitatively different. mARChive surpasses current search-interface and interaction industry standards and provides for the physical experience of search interaction with cultural data, co-present with other interested searchers and/or observers. Industry-standard search interaction has low (or no) interactivity, feedback, or opportunity for observers; it is unidirectional interrogation through a series of windows, and search data is relegated to system and traffic logs for delayed processing. Most research on algorithms to improve search feedback or analyze search patterns is not visible to observers; it is embedded in application function or reporting. Searchers of cultural data operate mostly in isolation from each other and search via Web-based collection OPACs, while research into enhancing these interfaces falls mostly into 2D graphic design. Visualization (e.g., SFMOMA, Cleveland Museum of Art) and shared socialization of searching cultural data (social tagging of cultural collection data, for example at Brooklyn Museum, Powerhouse Museum, and Flickr) is delimited by its containment within the 2D of a computer screen. In the case of social tagging, the impersonal, invisible characteristics and (in general) light social bonds of Web-based participation also do not entirely overcome the inhibiting aspect of physical distance. The 3D space and the physical proximity and immediacy of AVIE stimulate the use of a new interaction design paradigm when supported by software for the recombination and interrogation of data. This project will provide a distinct and valuable platform for interactive narrative, where the benefits of intimately and dramatically immersing and engaging participants offer rich rewards whether the purpose is scholarly, artistic, or playful.

The physical visualization of cultural data and interrogative search in an immersive space address the fundamental challenges of understanding how embodiment, the resocialisation of space, and co-presence impact upon and enhance human cognition and the experience of searching cultural collection data. Inside this virtual knowledge space, the aesthetics of data visualisation (or sculpting) and the modes for search interaction with, and navigation of, cultural data are further refined. The practical value and significance of this research is the ability to stimulate and realise stronger cross-disciplinary shared understanding; undertake cultural analysis; work collaboratively and problem-solve through observation and feedback; and participate in the realms of discipline-specific and interdisciplinary team-based scholarly research. It also reignites wider community interest in exploring cultural data by socialising, visually stimulating, and augmenting the search experience. This project capitalises on the progress, international recognition, and value of its combined partnerships to bring together powerful research trajectories across the humanities. The research further consolidates Australia's position as a world leader and innovator, extending its repertoire of interactive content and display mechanisms for cultural data visualisation, search-interface design, and digital humanities research. This project transforms the way consumers and researchers search, engage with, and are informed by cultural collection data. This research is relevant beyond the galleries, libraries, archives, and museums (GLAM) sector in the cultural industries to research, education, media, theatre, and commercial entertainment institutions and organisations that draw upon, enrich, repackage, and repurpose cultural content and new technologies.

7. Conclusion

mARChive will premiere at Museum Victoria in August 2014. Tracking algorithms on the system will allow us to understand how users navigate, dwell, traverse, and tunnel through the dataset, providing a matrix of associations and frequencies. Evaluation will be undertaken around the issues of embodied experience and socialization, using the I Sho U system (Kocsis & Kenderdine, 2013). These datasets will feed further iterations of the design process and future grant applications.
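The "matrix of associations and frequencies" derivable from the tracking logs can be sketched as a simple transition count over per-session navigation paths. This is a hypothetical illustration of the analysis, not the project's tracking code; the session format is assumed.

```python
from collections import defaultdict

def transition_matrix(sessions):
    """Count theme-to-theme transitions from per-session navigation logs.

    Each session is an ordered list of the themes a visitor moved
    through; the result maps (from_theme, to_theme) to a frequency,
    revealing which parts of the collection users tunnel between.
    """
    matrix = defaultdict(int)
    for path in sessions:
        for a, b in zip(path, path[1:]):  # consecutive pairs in the path
            matrix[(a, b)] += 1
    return dict(matrix)
```

Dwell times could be folded in the same way, accumulating seconds per theme rather than counts per pair.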

References

Bennett, J. (2008). “T_Visionarium: A user’s guide.” University of New South Wales Press Ltd.

Chion, M., et al. (1994). “Audio-vision.” Columbia University Press.

DeFanti, T. A., et al. (2009). “The StarCAVE, a third-generation CAVE & virtual reality OptIPortal.” Future Generation Computer Systems, 25(2), 169–178.

Del Favero, D., H. Ip, T. Hart, S. Kenderdine, J. Shaw, & P. Weibel. (2009). “Narrative reformulation of museological data: the coherent representation of information by users in interactive systems.” Australian Research Council Linkage Grant. PROJECT ID: LP100100466.

Green, T. M., W. Ribarsky, & B. Fisher. (2009). “Building and applying a human cognition model for visual analytics.” Information Visualization, 8(1), 1–13.

Kasik, D. J., et al. (2009). “Data transformations & representations for computation & visualization.” Information Visualization, 8(4), 275–285.

Keim, D. A., et al. (2006). “Challenges in visual data analysis.” Proc. Information Visualization (IV 2006). London: IEEE. pp. 9–16.

Keim, D. A., et al. (2008). “Visual analytics: Definition, process, & challenges.” Information Visualization: Human-Centered Issues and Perspectives. Berlin, Heidelberg: Springer-Verlag. pp. 154–175.

Kenderdine, S. (2010). “Immersive visualization architectures and situated embodiments of culture and heritage.” Proceedings of IV10 – 14th International Conference on Information Visualisation. July. London: IEEE. pp. 408–414.

Kenderdine, S., & T. Hart. (2011). “Cultural data sculpting: Omni-spatial visualization for large scale heterogeneous datasets.” In J. Trant and D. Bearman (eds.), Museums and the Web 2011: Proceedings. Toronto: Archives & Museum Informatics. March 31. Consulted February 16, 2014.

Kenderdine, S., & H. McKenzie. (2013). “A war torn memory palace: Animating narratives of remembrance.” Proceedings of the Digital Heritage International Congress. Marseille, France: IEEE. pp. 315–322.

Kocsis, A., & S. Kenderdine. (2013). “I-Sho-U: An innovative method for museum visitor evaluation.” In H. Din & S. Wu (eds.), Digital Heritage and Culture – Strategy and Implementation. Singapore: World Scientific Publishing Co. (in press).

Lee, H., et al. (2010). “Integrating interactivity into visualising sentiment analysis of blogs.” Proc. 1st Int. Workshop on Intelligent Visual Interfaces for Text Analysis, IUI’10.

Manovich, L. (2008). “The practice of everyday (media) life.” In R. Frieling (ed.), The Art of Participation: 1950 to Now. London: Thames and Hudson.

McCandless, D. (2010). “The beauty of data visualization” [Video file]. Consulted November 30, 2010.

McGinity, M., et al. (2007). “AVIE: A versatile multi-user stereo 360-degree interactive VR theatre.” The 34th Int. Conference on Computer Graphics & Interactive Techniques, SIGGRAPH 2007. August 5–9.

Maturana, H., & F. Varela. (1980). “Autopoiesis and cognition: The realization of the living.” In R. Cohen and M. Wartofsky (eds.), Boston Studies in the Philosophy of Science, 42. Dordrecht: D. Reidel.

National Science Foundation (NSF). (2007). Cyberinfrastructure vision for 21st century discovery. Washington: National Science Foundation.

Nechvatal, J. (2009). Towards an immersive intelligence: Essays on the work of art in the age of computer technology and virtual reality (1993–2006). New York, NY: Edgewise Press.

Pike, W. A., et al. (2009). “The science of interaction.” Information Visualization, 8(4), 263–274.

West, R., et al. (2008). “Sensate abstraction: Hybrid strategies for multi-dimensional data in expressive virtual reality contexts.” Proc. 21st Annual SPIE Symposium on Electronic Imaging, vol. 7238 (2009), 72380I-72380I-11.

Zhao J., & A. Vande Moere. (2008). “Embodiment in data sculpture: A model of the physical visualization of information.” International Conference on Digital Interactive Media in Entertainment and Arts (DIMEA’08), ACM International Conference Proceeding Series Vol. 349, Athens, Greece, pp. 343–350.

Cite as:
Kenderdine, S. "mARChive: Sculpting Museum Victoria’s Collections." MW2014: Museums and the Web 2014. Published February 21, 2014.
