Authors
Robert J. Houghton and Chris Wragg
Abstract
Collaborative interpretation and understanding of complex and uncertain information is a pervasive and growing challenge across many industries and domains, from defence and ‘blue light’ services to commerce and government. We carried out two studies to evaluate the Adaptive Report Generation Assistant (ARGA), a collaborative software tool designed to aid team sensemaking by supporting the coding of information inputs and the visualisation of outputs. In the first study, ARGA was contrasted with pen-and-paper processes in laboratory trials; in a second, more ecologically valid trial, it was contrasted with the use of generic shared electronic documents by two larger teams of expert analysts. In both cases, in addition to usability analysis and evaluation of final report quality, team activity was analysed with reference to recordings, post-hoc interviews and examination of the cognitive artefacts produced. We found that by structuring the input and interpretation phases of the activity, and by offering greater flexibility in reworking both input ontologies and output visualisations, groups using ARGA generally produced better-quality analyses, avoiding premature fixedness and confirmation bias. However, a persistent problem across all groups lay in maintaining consistent visibility of the relative quality and credibility of information. The findings imply that sensemaking quality can be enhanced by interventions that reduce the administrative and clerical demands of information management and representation.