Artificial intelligence


How sensemaking by people and artificial intelligence might involve different frames


Author
Hebah Bubakr and Chris Baber
Abstract
Sensemaking can involve selecting an appropriate frame to explain a given set of data. The selection of the frame (and the definition of its appropriateness) can depend on the prior experience of the sensemaker as much as on the availability of data. Moreover, artificial intelligence and machine learning systems depend on knowledge elicited from human experts; yet if we train these systems to perform and reason in the same way as a human, many of the resulting tools will be unacceptable, because people draw on personal criteria that a machine should not use. In this paper, we consider how an artificial intelligence system used to filter curriculum vitae (or résumés) might apply frames that result in socially unacceptable decisions.

 


Developing a Human Factors / Ergonomics guide on AI deployment in healthcare


Author
Marie E. Ward, Mark Sujan, Rachel Pool, Kate Preston, Huayi Huang, Angela Carrington, Nick Chozos
Abstract
Members of the Chartered Institute of Ergonomics and Human Factors (CIEHF) Digital Health and AI Special Interest Group (SIG) identified a need to provide health and social care professionals with an accessible guide to apply a systems approach in the design of healthcare AI tools. The CIEHF Digital Health and AI SIG came together to co-design a new guidance document: ‘AI deployment in healthcare – beginning your journey with Human Factors / Ergonomics (HF/E) in mind to support the integration of AI into care practices. A guide for health and social care professionals with an interest in AI.’ Group members come from health and social care and HF/E backgrounds. The guide is structured using the Systems Engineering Initiative for Patient Safety (SEIPS) framework.

 


Human Factors Guidance for Robotic and Autonomous Systems (RAS)

Author
Claire Hillyer, Hannah State-Davey, Nicole Hooker, Richard Farry, Russell Bond, James Campbell, Phillip Morgan, Dylan Jones, Juan D. Hernández Vega & Philip Butler
Abstract
This paper outlines recent (2021/2022) work to produce Human Factors (HF) guidance to support the design, development, evaluation, and acquisition of Robotic and Autonomous Systems (RAS).

 


The role of Human Factors and Ergonomics in AI operation and evaluation


Author
Nikolaos Gkikas, Paul Salmon & Christopher Baber
Abstract
The present paper sets the scene for a recorded workshop exploring the critical role of Human Factors and Ergonomics in the development, operation, and evaluation of Artificial Intelligence. First, we lay out some foundations of the multidisciplinary developments commonly placed under the umbrella of “Artificial Intelligence (AI)” and propose some fundamental definitions to structure our arguments and provide foundations for the workshop. Then we explore the role of Human Factors and Ergonomics methods in ensuring that AI systems contribute to our disciplinary goal of enhancing human health and wellbeing. In closing, we propose a research agenda designed to ensure that Human Factors and Ergonomics is applied in future AI developments.

 


The Process of Training ChatGPT Using HFACS to Analyse Aviation Accident Reports


Author
Declan Saunders, Kyle Hu & Wen-Chin Li
Abstract
This study investigates the feasibility of using a generative pre-trained transformer (GPT) to analyse aviation accident reports related to decision errors, based on the Human Factors Analysis and Classification System (HFACS) framework. The application of artificial intelligence (AI) combined with machine learning (ML) is expected to expand significantly in aviation, with an impact on safety management and on accident classification and prevention, driven by the development of large language models (LLMs) and prompt engineering. The results demonstrate that there are challenges in using AI to classify accidents related to pilots’ cognitive processes, which might have an impact on pilots’ decision-making, violations, and operational behaviours. Currently, AI tends to misclassify causal factors implicated by human behaviours and the cognitive processes of decision-making. This research reveals AI’s potential for rapid initial analysis, albeit with unexpected and unpredictable hallucinations that may require a domain expert’s validation.
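
A minimal, hypothetical sketch of the kind of prompt-engineering step the study describes: sending an accident-report narrative to a GPT model with an HFACS-oriented classification prompt. The model name, prompt wording, and category list here are illustrative assumptions, not the authors’ actual protocol.

```python
# Hypothetical sketch: prompting a GPT model to classify an accident narrative
# against HFACS unsafe-act categories. Model name, prompt text, and category
# list are illustrative assumptions, not the study's actual materials.
from openai import OpenAI

HFACS_UNSAFE_ACTS = [
    "Decision Error",
    "Skill-Based Error",
    "Perceptual Error",
    "Violation",
]

def classify_report(narrative: str, model: str = "gpt-4o") -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "You are an aviation safety analyst using the HFACS framework.\n"
        "Classify the pilot's unsafe act in the report below as one of: "
        f"{', '.join(HFACS_UNSAFE_ACTS)}.\n"
        "Answer with the category name and a one-sentence justification.\n\n"
        f"Report: {narrative}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce variability; hallucinations may still occur
    )
    # As the abstract notes, any output would still need validation by a domain expert.
    return response.choices[0].message.content
```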

 


2B or not 2B? The AI Challenge to Civil Aviation Human Factors


Author
Barry Kirwan
Abstract
Artificial Intelligence (AI) holds great promise for all industries in improving safe performance and efficiency, and civil aviation is no different. AI can potentially offer efficiency improvements to reduce delays and aviation’s carbon footprint, while adding safety support inside the cockpit, enabling single pilot operations and the handling of drone operations in urban environments. The European Union Aviation Safety Agency (EASA) has proposed six categories of Human-AI teaming, from machine learning support to fully autonomous AI. While AI support may in some cases be treated as ‘just more automation’, one category in particular, Collaborative AI (category 2B) considers the case of AI as an autonomous ‘team-mate’, able to take initiative, negotiate, reprioritise and execute tasks. This category pushes the envelope when it comes to contemporary Human Factors evaluation of human work systems. The question arises, therefore, of whether Human Factors is sufficiently well equipped to support the evaluation and performance assurance of such new concepts of operation, or whether we need new techniques and even new frameworks for Human-AI teaming design and assessment. Four future Human-AI Teaming use cases are considered to help gauge where Human Factors remains fit-for-purpose, where it can be modified to be so, and where we may need entirely new techniques of performance assessment and assurance.

 


Using SUS for Current and Future AI


Author
Richard Farry
Abstract
The System Usability Scale (SUS) was assessed for its relevance and ease of use when assessing an AI capable of human-like interaction. Participants used SUS to assess Outlook, contemporary consumer-grade AI interaction partners (smartphone digital assistants), and human teammates as a proxy ‘system’ for future human-like AI interaction partners. The results show that participants considered SUS to be relevant and easy to use for contemporary consumer-grade AI interaction partners, but not for human teammates. However, there was no meaningful difference in their ability to apply SUS between contemporary digital assistants, human teammates, and an email client. This suggests that SUS can be used effectively for all of these kinds of systems.
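
For reference, the standard SUS scoring procedure (prescribed by the instrument itself, independent of this study) can be computed as in the sketch below; the example responses are invented.

```python
# Standard SUS scoring (Brooke, 1996): 10 items rated 1-5; odd-numbered items
# score (response - 1), even-numbered items score (5 - response); the sum is
# scaled by 2.5 to give a 0-100 usability score. This illustrates the
# instrument itself, not the analysis reported in the paper.
def sus_score(responses: list[int]) -> float:
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1, 3, 5, ... are positively worded
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Example: a fairly positive set of ratings
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0
```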

 


An approach for modelling sociotechnical influences in mixed human-artificial agent workforce design


Author
Ashleigh Brady and Neelam Naikar
Abstract
Advances in intelligent technologies have made it feasible to consider future workforces with a mix of human and sophisticated artificial actors. During periods of significant societal transformation, organisations must be responsive to a range of public and governmental concerns in order to remain viable or effective. The sociotechnical influences space (SIS) models the social, psychological, cultural, and technological factors that must be considered in designing a future workforce that is not only safe, productive, and healthy, but also one that is acceptable to society. While these factors are largely studied in isolation by specialists in different disciplines, this model considers how the confluence of factors can shape the outcomes that are reached. The model utilises a representational scheme that captures the relevant sociotechnical factors at different levels of the societal system, highlighting the stratum at which individual factors are open to modification and should therefore be addressed. The model also captures links or influences between sociotechnical factors, both within and across system levels, identifying how factors interact to produce desirable workforce outcomes of safety, productivity, health, and acceptability. A proof-of-concept study demonstrates how the SIS could be utilised to model sociotechnical influences of significance in mixed human-artificial agent workforce design, focusing on the Royal Australian Air Force as a hypothetical example. If such an approach is utilised, it should provide organisations with a systematic basis for informing policy development and for identifying organisational bodies and actors who, through their spheres of influence and responsibility, can shape the outcomes that are reached. Through these avenues, the range of sociotechnical issues can be addressed, preparing people and processes to capitalise on the benefits of a novel technological future rapidly and successfully—in a way that is safe, productive, healthy, and acceptable to society.

 


Comparing User Interface Designs for Explainable Artificial Intelligence


Author
Ionut Danilescu & Chris Baber
Abstract
A well-known approach to Explainable Artificial Intelligence (XAI) presents the features from a dataset that are important to the AI system’s recommendation. In this paper, we compare LIME (Local Interpretable Model-agnostic Explanations), which displays features from a classifier, with a radar plot, which shows relations between these features. A comparative evaluation (N = 20) shows that LIME yields more correct answers, higher consistency in answers, and higher ratings of satisfaction. However, LIME also showed lower sensitivity (using signal detection), a slightly more liberal response bias, and a higher rating of subjective workload. Evaluating user interface designs for XAI needs to consider a combination of metrics, and it is time to question the benefit of relying only on features for XAI.
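
As a minimal sketch of the kind of per-instance feature importances that LIME produces (and that a radar plot would re-present as relations among features), assuming the open-source lime package and an illustrative scikit-learn classifier rather than the study’s actual materials:

```python
# Minimal sketch of obtaining per-instance feature importances with LIME
# (Local Interpretable Model-agnostic Explanations) for a tabular classifier.
# Dataset, model, and parameters are illustrative assumptions, not the study's.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)

# Explain one instance: the (feature, weight) pairs are what a LIME bar chart
# displays and what a radar plot could show in relation to one another.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```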

 


Assessment of user needs for a sepsis fluid management Artificial Intelligence tool


Author
Kate Preston, Emma Dunlop, Aimee Ferguson, Calum MacLellan, Feng Dong & Marion Bennie
Abstract
Artificial intelligence (AI) technology has the potential to support clinical decisions for sepsis fluid management. However, to realise the full benefit of the technology, a human factors approach utilising a work system model can be applied from the outset, in parallel with AI development, so that the technology is created for the setting within which it will be integrated.

 


How people misinterpret answers from Large Language Models


Author
Yuzhi Pan and Chris Baber
Abstract
We presented probability problems to two Large Language Models (LLMs) and asked human judges to evaluate the correctness of the outputs. Neither LLM answered all questions correctly, but participants did not always spot the errors they made. Two types of human error were identified: i. the LLM answer was correct, but the participant thought it was wrong (especially with the smaller LLM); ii. the LLM answer was wrong, but the participant thought it was correct (especially with the larger LLM). Participants tended to trust the LLM when they were unsure how to answer a question and the LLM provided an answer that seemed reasonable and coherent (even if it was actually wrong).

 


Allocation of Function in the era of Artificial Intelligence: a 60-year-old paradigm challenged


Author
Nick Gkikas
Abstract
The Fathers of the discipline of Ergonomics and Human Factors used their scientific research and real-life experiences of technological development during WWII, and the first years of peace that followed, to propose a set of principles for Human-Machine Interaction (HMI). These principles stood the test of time and for many years informed common applications of the discipline, such as allocation of function between human and machine. Only recently, with the advancement and generalisation of certain underlying technologies, have forms of Artificial Intelligence (AI), machines and systems with non-deterministic behavioural characteristics, become operational. The underlying specification of those machines and systems appears to challenge some of the assumptions made by the Fathers of the discipline. The present article revisits those principles of HMI, identifies the changes in the underlying assumptions, and discusses the implications of those changes for the discipline of Ergonomics and Human Factors.

 


A framework for explainable AI


Author
Chris Baber, Emily McCormick & Ian Apperley
Abstract
The issue of ‘explanation’ has become prominent in automated decision aiding, particularly when those aids rely on Artificial Intelligence (AI). In this paper, we propose a formal framework of ‘explanation’ which allows us to define different types of explanation. We provide use cases to illustrate how explanation can differ, both in human-human and human-agent interactions. At the heart of our framework is the notion that explanation involves common ground, in which two parties are able to align the features to which they attend and the type of relevance that they apply to these features. Managing alignment of features is, for the most part, relatively easy; in human-human explanation, people might begin an explanation by itemising the features they are using (and people typically only mention one or two features). However, providing features without an indication of relevance is unlikely to produce a satisfactory explanation. This implies that explanations that only present features (or clusters of features) are incomplete. However, most Explainable AI provides output only at the level of features or clusters. From this, the user has to infer relevance by making assumptions as to the beliefs that could have led to that output. But, as the reasoning applied by the human is likely to differ from that of the AI system, such inference is not guaranteed to be an accurate reflection of how the AI system reached its decision. To this end, more work is required to develop interactive explanation, so that the human is able to define and test the inferences and compare these with the AI system’s reasoning.

 


Developing an Explainable AI Recommender System


Author
Prabjot Kandola & Chris Baber
Abstract
We used a theoretical framework of human-centred explainable artificial intelligence (XAI) as the basis for the design of a recommender system. We evaluated the recommender through a user trial. Our primary measures were the degree to which users agreed with the recommendations and the degree to which user decisions changed following the interaction. We demonstrate that interacting with the recommender system resulted in users having a clearer understanding of the features that contributed to their decision (even if they did not always agree with the recommender system’s decision or change their own). We argue that the design illustrates the XAI framework and supports the proposal that explanation involves a two-stage dialogue.

 


Understanding the complex challenges in digital pathology and artificial intelligence integration


Author
Haotian Yi, Gyuchan Thomas Jun, Diane Gyi & Samar Betmouni
Abstract
The hexagonal socio-technical framework was employed to understand the complex system of the digital pathology (DP) workflow and artificial intelligence (AI) application, while identifying the complex human factors challenges within the DP and AI integration process.

 


Treating uncertainty in sociotechnical systems


Author
Mark Andrew
Abstract
Design goals guide design efforts, but complex systems can lead to designers’ intentions being eclipsed. This paper’s proposition is that sociotechnical systems design offers scope for improved reliability, and it is built on three features of current design practice. First, design teams seek cooperative cognition in order to work together, but inadequately understood outcome scenarios can impoverish joint understanding. Second, design team collaboration is bounded by innate psychological biases, which can spoil design decisions. Third, some views of risk in design thinking suffer from a limited conception of uncertainty and its influence. These constraints in design practice are examined (referencing the reach of Artificial Intelligence as one example design domain), along with how such constraints may be addressed in design practice.

 


Applying artificial intelligence on electronic flight bag for single pilot operations


Author
Takashi Nagasawa
Abstract
Single pilot operations (SPO) is an operational concept which the commercial aviation industry is considering. Artificial Intelligence (AI) hosted on an Electronic Flight Bag (EFB) has the potential to become a ‘buddy’ for the pilot in SPO. This short paper presents an operational concept of SPO with EFB/AI and further discussion, based on empirical exploration, of how to integrate AI with EFB design.

 


Summoning the demon? Identifying risks in a future artificial general intelligence system


Author
Paul M Salmon, Brandon King, Gemma J. M Read, Jason Thompson, Tony Carden, Chris Baber, Neville A Stanton & Scott McLean
Abstract
There are concerns that Artificial General Intelligence (AGI) could pose an existential threat to humanity; however, as AGI does not yet exist, it is difficult to prospectively identify risks and develop controls. In this article we describe the use of a many-model systems Human Factors and Ergonomics (HFE) approach in which three methods were applied to identify risks in a future ‘envisioned world’ AGI-based uncrewed combat aerial vehicle (UCAV) system. The findings demonstrate that there are many potential risks, but that the most critical arise not from poor performance, but when the AGI attempts to achieve goals at the expense of other system values, or when the AGI becomes ‘super-intelligent’ and humans can no longer manage it.

 


Human performance and automated operations: A regulatory perspective


Author
Linn Iren Vestly Bergh, Kristian Solheim Teigen & Fredrik Dørum
Abstract
The petroleum industry is becoming increasingly dependent on digital systems, and the companies have ambitious plans for increased use of digital technology along the entire value chain. Increased levels of digitalisation present major opportunities for efficiency in the oil and gas industry and can also contribute to enhanced levels of resilience to major accident hazards. At the same time, new risks and uncertainties may be introduced. Based on developments in the industry and society in general, the Norwegian Petroleum Safety Authority (PSA) has in recent years pursued targeted knowledge development related to digitalisation and industrial cyber security. The PSA’s follow-up activities related to digitalisation initiatives in the industry have been based on input and experience from several knowledge development projects. In this paper we give insight into the main regulatory strategies we have used to follow up initiatives in the industry, present results from audits on automated drilling operations, and discuss the results from the follow-up activities in light of current regulatory developments.

 


Ergonomic constraints for astronauts: challenges and opportunities today and for the future


Author
Martin Braddock, Konrad Szocik & Riccardo Campa
Abstract
Manned spaceflight is ergonomically constrained by living and working in a confined space in microgravity, where astronauts on both short- and long-duration missions are exposed to daily radiation levels well above those received on Earth. Living in microgravity, especially on long-duration missions aboard the International Space Station, has deleterious physiological and psychological effects on astronaut health, and astronauts may on just one mission receive a cumulative radiation dose normally received in a lifetime on Earth. It is unrealistic at present to contemplate continuous missions of greater than one year, and to mitigate current ergonomic constraints, space agencies have outlined roadmaps to introduce artificial gravity and develop strategies for conferring human resistance to radiation. In parallel, the concept of whole brain emulation (WBE) and the ‘uploading’ of human consciousness onto a platform within the rapidly growing field of artificial intelligence is one scenario which may remove the future requirement for a physical crew. This paper highlights incidents and accidents which have resulted in astronaut injury because of ergonomic constraints in space, considers the timing of deployment of technology roadmaps, and draws together multi-disciplinary fields to project a future whereby deep space travel may be manned by an e-crew, devoid of many of the established ergonomic boundaries that apply to human astronauts.