Human-machine interaction


Human Machine Teaming and Human Centred Design


Author
Sue Ottley, Jaina Mistry, Kerry Tatlock & Chris Vance
Abstract
Human Machine Teaming (HuMaT) systems are expected to become increasingly important and common in the UK military domain as a result of changes in both technology and the battlespace; however, the role of the human user is expected to be retained, though it may change. HuMaT systems are sociotechnical systems in which needs and behaviours may be emergent and hard to predict from previous system behaviours; framing formal requirements prior to design can therefore be difficult and sometimes counterproductive. The complex and variable nature of interactions within Human Machine Teams (HMT) means that traditional technology-centric design processes do not easily support them; if the best use is to be made of both human and technology actors, then a process more focused on user goals and constraints is required. This paper outlines the considerations for applying a Human Centred Design (HCD) process to developing HuMaT systems and how HCD addresses the challenges presented by a technology-centric process.

 


The influences of flight deck interface design on pilot situation awareness and perceived workload


Author
Wen-Chin Li, Andreas Horn & Jingyi Zhang
Abstract
There are numerous accidents and incidents related to mode confusion. Autothrottle and autopilot are traditionally separate systems on the flight deck; however, they can interact through the physics of flight. Avionic engineers have been applying automation to reduce pilots’ workload and enhance flight safety. While basic automated systems performed quite simple tasks such as holding altitude or heading, modern flight guidance and control systems typically have different modes of operation. A new flight mode annunciator (FMA) concept was compared with the traditional FMA using eye-tracking and NASA-TLX measurements. The experiment involved 17 participants, aged between 22 and 47 years (M = 29.18, SD = 6.73). The results showed that the augmented display significantly reduced perceived workload on the NASA-TLX mental demand, temporal demand and effort scales; it also increased performance and situation awareness during a climbing turn, as measured by call-outs of mode changes. Furthermore, participants’ fixation durations on the airspeed and altitude indicators differed significantly between the traditional design and the augmented design, which added a green border as a visual cue. The relatively high cognitive effort required to interpret the existing flight mode annunciation is certainly a contributing factor in mode confusion. The significant differences in fixation duration and subjective workload demonstrate the potential benefits of the proposed visualization cue on the FMA. Simply highlighting the parameters controlled by the automation greatly reduces pilot workload and enhances situation awareness during mode changes.

 


Evaluating Virtual Reality 360 Video to Inform Interface Design for Driverless Taxis


Author
David R. Large, Madeline Hallewell, Madelaine Coffey, Jenna Evans, Leah Briars, Catherine Harvey & Gary Burnett
Abstract
Autonomous, self-driving taxis are a commonly cited solution for future mobility but inevitably raise myriad human-centred design and usability challenges. However, conducting usability and user experience studies in imagined, future vehicles is troublesome, given the absence of safe, road-worthy exemplars (and indeed, recent COVID-19 restrictions). Applying a novel virtual reality (VR) Wizard-of-Oz methodology, fifty-two participants were presented with immersive 360° videos (VR360-videos), which captured interactions with a range of potential human-machine interface (HMI) solutions aiming to address vehicle identification and on-boarding tasks. Interactions and experiences were acted out by members of the research team, with potential HMIs and constituent tasks informed by a participatory design study, literature review and user requirements elicitation exercise also conducted by the authors. To evaluate the methodological approach, the VR360-videos were presented using either a VR-headset or a laptop web-browser, with respondents offered the opportunity to participate in the laboratory or from their home, thus affording a 2×2 between-subjects study design with fixed factors of Method and Location. An overview of the user experience results indicated that all HMIs were generally received positively, with subjective ratings unaffected by Method or Location. Qualitative data suggested potential for greater immersion/presence using a VR-headset, although there were indications of higher “simulator sickness” using this approach. The study provides a novel methodology to deliver immersive user experiences for evaluation. The results will be used to inform HMI design and future evaluations associated with the remaining journey stages (in-transit, arrival, payment, etc.).
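To make the 2×2 fixed-factor analysis concrete, the following minimal sketch (not the authors’ analysis code) shows how subjective ratings from such a between-subjects design could be examined with a two-way ANOVA on Method and Location; the data, column names and effect sizes are simulated placeholders.

```python
# Minimal sketch of a 2x2 between-subjects analysis (Method x Location).
# All data below are simulated placeholders, not results from the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
ratings = pd.DataFrame({
    "method":   np.repeat(["headset", "laptop"], 26),   # 52 participants
    "location": np.tile(["lab", "home"], 26),
    "rating":   rng.normal(70, 8, size=52),             # placeholder UX scores
})

# Fixed factors Method and Location, plus their interaction
model = ols("rating ~ C(method) * C(location)", data=ratings).fit()
print(sm.stats.anova_lm(model, typ=2))                  # type-II ANOVA table
```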

 


Subjective Measures on Task Complexity Using Touchscreens in Flight Operations


Author
Ben Wright, James Blundell, Wojciech Tomasz Korek, & Wen-Chin Li
Abstract
The following preliminary study uses subjective measures of situational awareness, workload, and system usability to assess the effect of touchscreen flight deck displays in simple and complex flying environments during a simulated flight task. Eighteen participants were evaluated whilst flying a simulated aircraft, conducting both simple and complex flight operations. Results showed that situational awareness improved, and perceived workload was maintained, when task complexity was increased during touchscreen interaction on the flight deck. This was likely driven by touchscreens providing increased attentional supply. This improves the flight deck human-machine interface (HMI) from a pilot-centred perspective by improving access to task-relevant information. There was no significant change in levels of touchscreen usability as flight task complexity increased, further supporting the use of touchscreens to assist cognitive function in some task types. The application and limitations of these findings are discussed.

 


Ghost Busting: A Novel On-Road Exploration of External HMIs for Autonomous Vehicles


Author
David R. Large, Madeline Hallewell, Xuekun Li, Catherine Harvey & Gary Burnett
Abstract
The absence of a human driver in future autonomous vehicles means that explicit pedestrian-driver communication is not possible. Building on the novel ‘Ghost Driver’ methodology to emulate an autonomous vehicle, we developed prototype external human-machine interfaces to replace existing cues, and we report preliminary, qualitative findings captured from a sample of pedestrians (n=64) who encountered the vehicle when crossing the road, as well as reflections on the method.

 


A2/M2 Connected Corridor Connected Autonomous Vehicle Testbed


Author
Matt Blackwell and Ahmad Jamal
Abstract
The A2/M2 Connected Corridor pilot is a flagship project which will contribute to industry knowledge and promote the UK as a market leader in Connected Autonomous Vehicles (CAV) and Cooperative Intelligent Transport Systems (C-ITS) technology. Working in collaboration with Highways England, the Department for Transport, Transport for London (TfL) and Kent County Council, we designed, installed and implemented one of the UK’s first pilot connected vehicle corridors on a live road, to demonstrate how we could improve people’s lives with safer, faster journeys. The project is part of a European initiative to create a network of interoperable, connected corridors for autonomous vehicles across the Netherlands, Belgium, the UK and France, aiming to achieve seamless interoperability of services between the countries and ensure safer and more efficient mobility of people and goods. The key focus is on factors likely to determine user acceptance, engagement, trust, and continued usage of CAV HMIs.

 


Development of a Behavioural Markers System for Maritime Autonomous Surface Ship Operations


Author
Kirsty M. Lynch, Aaron P. J. Roberts, Mark S. Young, Victoria A. Banks, Dominic J. Taunton & Katherine L. Plant
Abstract
A prototype Behavioural Markers System (BMS) is being developed for the operation of Maritime Autonomous Surface Ships (MASS) to assess operators’ non-technical skills for working in a Remote Control Centre (RCC) and to assess the behaviours of machine teammates. This paper outlines the initial prototype BMS, which includes behavioural markers (BMs) for humans working within a human-machine team; as MASS systems adopt higher levels of automation, they are changing how operators will interact with the automated systems. The BMS will also be extended to include BMs for machine teammates, which could be used in the design and evaluation of future MASS systems. Future work will focus on the further development of the prototype BMS, adding further BMs for human-machine and machine-human interactions.

 


Allocation of Function in the era of Artificial Intelligence: a 60-year old paradigm challenged


Author
Nick Gkikas
Abstract
The Fathers of the discipline of Ergonomics and Human Factors used their scientific research and real-life experiences of technological development during WWII, and the first years of peace that followed, to propose a set of principles for Human-Machine Interaction (HMI). These principles stood the test of time and informed common applications of the discipline, such as allocation of function between human and machine, for many years. It is only recently, with the advancement and generalisation of certain underlying technologies, that forms of Artificial Intelligence (AI), that is, machines and systems with non-deterministic behavioural characteristics, have become operational. The underlying specification of those machines and systems appears to challenge some of the underlying assumptions made by the Fathers of the discipline. The present article revisits those principles of HMI, identifies the changes in the underlying assumptions, and discusses the implications of the changes identified for the discipline of Ergonomics and Human Factors.

 


Preliminary development of the Psychological Factors Assessment Framework for manufacturing human-robot collaboration


Author
Iveta Eimontaite and Sarah Fletcher
Abstract
Robots, although not new in manufacturing, are still only just being directly integrated with human operators. Although timely and measured human factors integration in technology development can increase its acceptance, the impacts on manufacturing operators are still largely unknown. This paper discusses the SHERLOCK (seamless and safe human-centred robotic applications for novel collaborative workplace) project approach to human factors integration, which aims to develop a standardised tool for evaluating the impacts of robotics in manufacturing: the Psychological Factors Assessment Framework. Four industrial use-case studies of new collaborative applications will allow investigation of changes in operators’ psychological states depending on robot characteristics and assembly requirements. This analysis will enable development of the framework, which will allow quicker assessment of psychological factors and recommendations for operator needs and requirements in a variety of manufacturing applications.

 


Human factors and the digital railway: Effecting and managing change through innovation and integration


Author
Richard Bye
Abstract
With an infrastructure operating at full capacity, a testing political climate, and a mix of government, regulatory and public pressures, the GB railway is under more strain and scrutiny than ever before. Currently, more than 4.8 million journeys are made across the network every day, a number which is forecast to increase by around 40% over the next 20 years. As such, the complex sociotechnical system that makes up Britain’s railway is in a constant state of flux, continually evolving to meet the ever-changing demands of today whilst anticipating the myriad needs of tomorrow. Without proactive interventions, the predicted growth in passenger journeys will erode the resilience of the railway, especially on the busiest parts of the network that are already characterised by an extreme density of train services. Passengers expect, and should receive, a right-time rail service all day, every day. However, minor disruptions frequently lead to congestion and delays, and result in losses to public satisfaction and confidence. Technological interventions to optimise system performance, whilst maintaining a continuous focus on passenger, workforce and public safety, require a robust and coordinated approach from the ergonomics and human factors community. Human factors practitioners, in collaboration with frontline rail staff, engineers, system architects and policy makers, can innovate through practice and research to integrate digital railway (DR) technologies and reduce environmental stressors, whilst maximising the return on investment at every level of the sociotechnical system. This work illustrates the application of systems ergonomics to the delivery of DR technologies within safety-critical work environments. The DR project is focused on deploying new technology to maximise train capacity on the existing infrastructure. To do this requires a comprehensive programme of human factors integration to effect the necessary cultural and organisational changes, and in doing so develop appropriate policies, regulations, standards and plans.

 


Optimising Operator Attention on the Maritime Platform Human Computer Interface


Author
Julia Clarke
Abstract
The ‘dark and quiet’ interface principle has been embedded in the design of the Platform Management System for the Royal Navy’s (RN) future maritime platforms. The aim is to minimise attention-getting features that could distract an operator and/or affect situational awareness. The Human Factors Team has worked with a range of stakeholders to design the content and presentation of system information to optimise operator attention during routine, abnormal and emergency operations.

 


Circles of Influence: How do you arrange 200 Performance Shaping Factors?


Author
Adrian Wheatley
Abstract
The Circles of Influence model is a method of organising Performance Shaping Factors that encourages the user to consider the bi-directional way in which many variables might interact to influence human performance. The model is presented as a tool to aid Ergonomics and Human Factors practitioners when undertaking activities such as project and task scoping, requirements capture, risk assessment, and Human Reliability Assessment.

 


Classifying vessels using broadband sonar: considerations for future autonomous support


Author
Faye McCabe, Prof. Chris Baber & Prof. Robert Stone
Abstract
Submarine Command Teams often rely on sensor systems such as sonar to gain situational awareness when operating below periscope depth. Classifying different vessels using broadband sonar relies on the analysis of aural characteristics to build up a target motion solution for each sonar contact. This process is inherently uncertain, and misclassification can be potentially fatal, resulting in collisions between vessels and submarines. This paper offers suggestions for artificially intelligent support that could be created through the analysis of historically collected information about fishing vessels transmitted via satellite. These suggestions were formed through an interview with a subject matter expert and the analysis of a report compiled about a collision that occurred between a Royal Navy submarine and a fishing vessel in 2015.

 


Using myoelectric signals for gesture detection: a feasibility study


Author
Farshid Amirabdollahian, Michael Walters, Rory Heffernan, Sarah Fletcher & Phil Webb
Abstract
With the technological advances in sensing human motion, and its potential to drive and control mechanical interfaces remotely, a multitude of input mechanisms are used to link actions between the human and the robot. In this study we explored the feasibility of using the human arm’s myoelectric signals with the aim of identifying a number of gestures automatically. We deployed the k-nearest neighbours algorithm to train a classifier and later identify gestures, achieving an accuracy of around 65%. This indicates potential feasibility while highlighting areas for improvement in both the accuracy and the utility/usability of such approaches.
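As an illustration of the classification approach described above, the sketch below trains a k-nearest-neighbours classifier on simple per-channel RMS amplitude features extracted from windowed EMG. The feature choice, window dimensions and placeholder signals are assumptions for illustration, not the authors’ exact pipeline.

```python
# Illustrative k-NN gesture classifier on per-channel RMS features of EMG windows.
# Array shapes, feature choice and data are assumptions, not the study's pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def rms_features(windows):
    """Root-mean-square amplitude per EMG channel for each window.
    windows: (n_windows, n_samples, n_channels) -> (n_windows, n_channels)"""
    return np.sqrt(np.mean(windows ** 2, axis=1))

# Placeholder data: 300 windows, 200 samples each, 8 EMG channels, 4 gestures
rng = np.random.default_rng(1)
emg_windows = rng.normal(size=(300, 200, 8))
gesture_labels = rng.integers(0, 4, size=300)

X = rms_features(emg_windows)
X_train, X_test, y_train, y_test = train_test_split(
    X, gesture_labels, test_size=0.3, random_state=42)

knn = KNeighborsClassifier(n_neighbors=5)   # k is a tunable assumption
knn.fit(X_train, y_train)
print(f"held-out accuracy: {knn.score(X_test, y_test):.2f}")
```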

 


Empathy consideration in the design of natural language interfaces for future vehicles


Author
Ben Anyasodo and Gary Burnett
Abstract
While the future of transportation paints a picture of seamless understanding of passenger goals by the vehicle, it also exposes a gap in understanding what human-machine engagement must become for a more natural in-car experience. Through natural language interfaces, a human-like interaction is possible. This raises several questions: is there a way that machines can be more “empathic”? If so, would this make for a more natural human-machine interaction? And how can we design usable natural language interfaces (specifically speech systems) to achieve this, especially given that humans are emotional beings and machines are not? This paper explores the concept of empathy for speech systems by investigating the human-human empathy model and proposing design considerations. To achieve this, we interviewed professionals who must show empathy as part of their work. Seven themes were generated from the responses, forming a usable framework for human-machine empathy that could be applied to the design of natural language speech systems.

 


Assessing pilots’ mental workload using touchscreen inceptor for future flight deck design


Author
Joao Paulo Macedo, Kyle Hu, Rani Quiram & Samarth Vilas Burande
Abstract
Touchscreen displays are one of the pillars of future flight deck design, and it is foreseen that at some point traditional flight control inceptors will be replaced by a touchscreen version. However, this transition can only be safe and successful with due regard for human performance implications. This study addresses these implications by comparing pilots’ mental workload for a traditional sidestick and an innovative touchscreen control inceptor. The results indicate that the new technology increases pilot workload, suggesting that further development is required before it can be used in future flight decks.

 


Human-robot interaction: Assessing the ergonomics of tool handover


Author
George V. Papadopoulos & Michail Maniadakis
Abstract
This work focuses on human-robot collaboration for assembly tasks, examining the position of robot-to-human handover of objects. A simulation environment is implemented to ergonomically evaluate the expected posture of the human arm in hypothetical delivery positions in the 3D space.
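As a toy illustration of evaluating candidate delivery positions, the sketch below scores a grid of 3D handover points with a simple geometric comfort heuristic (penalising long reaches from the shoulder and awkward hand heights). The heuristic, anthropometric values and grid are assumptions for illustration; they are not the simulation environment or ergonomic model used in this work.

```python
# Toy scoring of candidate robot-to-human handover positions in 3D space.
# The comfort heuristic and anthropometric values are illustrative assumptions.
import numpy as np

SHOULDER = np.array([0.0, 0.0, 1.40])   # assumed shoulder position (metres)
ARM_REACH = 0.65                        # assumed comfortable reach envelope
NEUTRAL_Z = 1.10                        # assumed preferred hand height

def comfort_cost(position):
    """Lower is better: penalise long reaches and awkward hand heights."""
    reach = np.linalg.norm(position - SHOULDER)
    overreach_penalty = max(0.0, reach - ARM_REACH) * 10.0
    height_penalty = abs(position[2] - NEUTRAL_Z)
    return reach + overreach_penalty + height_penalty

# Coarse grid of candidate delivery positions in front of the person
xs = np.linspace(-0.4, 0.4, 9)      # lateral offset
ys = np.linspace(0.3, 0.8, 6)       # distance in front of the body
zs = np.linspace(0.8, 1.5, 8)       # height above the floor
candidates = [np.array([x, y, z]) for x in xs for y in ys for z in zs]

best = min(candidates, key=comfort_cost)
print("best handover position (x, y, z):", np.round(best, 2))
```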

 


Design of human-machine teams using a modified CoActive Design Method


Author
Professor Chris Baber, Chris Vance
Abstract
Designing Human-Machine Teams not only requires an appreciation of which functions might be appropriately allocated to human or machine, but also of how each team member can make sense of the functions performed by itself and its team-mates. The aim of this paper is to present an approach to Allocation of Function within Human-Machine Teams (HuMaT) that can be applied across different Levels of Automation and which can explore information management issues in such teams. To do this, we present a modification of the CoActive Design method. A key aspect of the modification lies in the focus on information exchange and issues relating to common ground in HuMaT. In this paper, Cognitive Work Analysis is used as the basis for the CoActive Design method to explore how different Levels of Automation can be conceptualised. The benefit of such an approach is that it provides a decomposition of functions such that it is possible to see how, even in systems that have high levels of autonomy, there remains a role for human operators. Taking the example of an in-car navigation system, we illustrate how each member uses information to support the functions allocated to them, and how common ground develops in the team.

 


How sensemaking by people and artificial intelligence might involve different frames


Author
Hebah Bubakr and Chris Baber
Abstract
Sensemaking can involve selecting an appropriate frame to explain a given set of data. The selection of the frame (and the definition of its appropriateness) can depend on the prior experience of the sensemaker as much as on the availability of data. Moreover, artificial intelligence and machine learning systems depend on knowledge elicited from human experts; yet, if we trained these systems to perform and think in the same way as a human, most of the resulting tools would be unacceptable as decision criteria, because people consider many personal parameters that a machine should not use. In this paper, we consider how an artificial intelligence system used to filter curriculum vitae (or résumés) might apply frames that result in socially unacceptable decisions.

 


Cultural Aspects for the User Interface Design of Health and Fitness Apps


Author
Fiona Zheng and Setia Hermawati
Abstract
Culture influences preferences towards user interfaces (UI), which has prompted the emergence of the culture-based UI design approach. Hofstede’s cross-cultural theory identifies Hong Kong as having a unique cultural orientation in comparison to Western countries or mainland China. This study aimed to investigate Hong Kongese preferences for the UI of health and fitness apps. Three human factors experts systematically analysed two comparable apps designed by mainland Chinese and Western companies. The results were then used to guide an online survey (n=103) exploring the preferences of Hong Kongese users. The systematic analysis showed that the UI differences observed between the two apps corresponded well with the cultural dimension differences between the United States and mainland China, suggesting an unmet need for a culturally sensitive UI. The online survey showed that UI preferences are also significantly affected by age and gender (p < 0.05), suggesting that the culture-based UI design approach alone is insufficient to guide UI design.
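As a minimal illustration of testing whether UI preference depends on a demographic factor, the sketch below runs a chi-square test of independence between age band and preferred UI style; the contingency counts and category labels are invented placeholders, not the survey data.

```python
# Chi-square test of independence: age band vs preferred UI style.
# The counts below are invented placeholders, not the study's survey data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: age bands; columns: counts preferring each UI style
observed = np.array([
    [18, 12],   # 18-29: style A, style B
    [18, 22],   # 30-44
    [10, 23],   # 45+
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```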

 


Pro-Social Mobility: Using Mozilla Hubs as a Design Collaboration Tool


Author
David R. Large, Madeline Hallewell, Leah Briars, Catherine Harvey & Gary Burnett
Abstract
This paper showcases the novel application of Mozilla Hubs in the context of interface design for future, autonomous taxis. It demonstrates that repurposing pro-social virtual reality as a design collaboration tool enables an embodied and spatialised experience, affording the co-creation and visualisation of novel interfaces, scaffolded by real-world social dynamics but unfettered by real-world physical limitations. The approach proved beneficial during the COVID-19 pandemic, but it also provides genuine opportunities for virtual, “non-contact” collaborative research beyond this.