Autonomous systems
Space Utilisation and Comfort in Automated Vehicles: A Shift in Interior Car Design?
Using SUS for Current and Future AI
Document | Author Richard Farry |
Abstract The System Usability Scale (SUS) was assessed for its relevance and ease of use when applied to an AI capable of human-like interaction. Participants used SUS to assess Outlook, contemporary consumer-grade AI interaction partners (smartphone digital assistants), and human teammates as a proxy ‘system’ for future human-like AI interaction partners. The results show that participants considered SUS to be relevant and easy to use for contemporary consumer-grade AI interaction partners, but not for human teammates. However, there was no meaningful difference in their ability to apply SUS across the email client, contemporary digital assistants, and human teammates. Thus, SUS can be applied effectively to all of these kinds of system. |
Manual control versus management-by-consent in managing multiple threats
Document | Author Chris Baber & Natan S. Morar |
Abstract As the use of Uninhabited Aerial Systems (UAS) increases, e.g., for commercial delivery, for surveillance or for hostile action, there are challenges of monitoring and appropriately responding to a crowded airspace. Providing automated support could reduce these challenges. However, such support might also have an impact on the strategy that a human operator deploys. In this paper we present a simulation of Air Defence (in the form of a single-player, interactive game) which is used to study human performance under three conditions. Provision of decision support, i.e., management by consent, produced better performance, even though it provided limited situation awareness. A hybrid display produced performance superior to the manual control condition and similar to the management-by-consent condition. We note that provision of the air picture alone resulted in a different form of suboptimal performance, in which sensitivity was significantly lower. In this respect, providing decision support (in the form of the polygon display) helps to limit the tendency for false alarms. |
Clinician perspectives around automating the Emergency Department triage process
Document | Author Katherine L. Plant, Beverley Townsend & Oltunde Ashaolu |
Abstract Healthcare has arguably been the sector most impacted by the Covid-19 pandemic, leaving Emergency Department (ED) medical teams overworked and understaffed. An automated system for ED triage has been developed to help alleviate some of these pressures. Eight ED clinicians were interviewed to capture their views of the automated system. Insights were generated around where this system might add value and areas of challenge or concern. These findings will be used to refine the prototype for end-user testing and support the development of training material for clinicians. |
Using immersive simulation to understand and develop warfighters’ cognitive edge
Document | Author Diane Pomeroy, Justin Fidock, Luke Thiele & Laura Carter |
Abstract The Australian Army recognises that personnel need a “cognitive edge” over any adversary. To better understand the cognitive performance of military personnel in current and future land operating environments, and to inform training requirements, we have created an immersive tactical team simulator representing possible elements of the future operating environment, including novel use of technologies by adversaries. The most recent study analysed the behaviours of two military teams, each consisting of a three-vehicle platoon. Examination of individual and team strategies identified the decision-making approaches adopted by the individual teams in response to novel and unexpected threats in a high-tempo situation. |
Quantified minds: Predicting human functional state for human-machine teaming
Document | Author Kate Ewing & Clare Borras |
Abstract A new dawn of intelligent machines has re-energised the concept of human-machine teaming (HMT), whereby humans and autonomous systems collaborate towards a shared operational goal. Across Defence, Human Factors specialists will be challenged to integrate human-autonomy teams into already complex systems, for which knowing the functional state of human teammates will be critical to system optimisation. Presently, innovation in machine learning and data collection methods is making human cognition more accessible in operational settings than ever before. This paper overviews the state of the art in techniques for estimating human functional state from the perspective of designing complex military systems involving artificially intelligent (AI) agents. Considerations are provided for designers seeking to quantify variables such as mental workload, situation awareness (SA) or the level of demand upon particular communication modes, whether for system operation or for design and evaluation. Finally, some examples of methods used in HMT research are presented, along with a speculative look at future influences upon the specification of human functional state for use with autonomy in Defence. |
Introducing an Autonomous Crewmember
Document | Author Helen Muncie |
Abstract |
Summoning the demon? Identifying risks in a future artificial general intelligence system
Document | Author Paul M. Salmon, Brandon King, Gemma J. M. Read, Jason Thompson, Tony Carden, Chris Baber, Neville A. Stanton & Scott McLean |
Abstract There are concerns that Artificial General Intelligence (AGI) could pose an existential threat to humanity; however, as AGI does not yet exist, it is difficult to prospectively identify risks and develop controls. In this article we describe the use of a many-model systems Human Factors and Ergonomics (HFE) approach in which three methods were applied to identify risks in a future ‘envisioned world’ AGI-based uncrewed combat aerial vehicle (UCAV) system. The findings demonstrate that there are many potential risks, but that the most critical arise not from poor performance, but when the AGI attempts to achieve its goals at the expense of other system values, or when the AGI becomes ‘super-intelligent’ and humans can no longer manage it. |
Human Factors Guidance for Robotic and Autonomous Systems (RAS)