Artificial intelligence


Summoning the demon? Identifying risks in a future artificial general intelligence system

Document

Author
Paul M. Salmon, Brandon King, Gemma J. M. Read, Jason Thompson, Tony Carden, Chris Baber, Neville A. Stanton & Scott McLean
Abstract
There are concerns that Artificial General Intelligence (AGI) could pose an existential threat to humanity; however, as AGI does not yet exist, it is difficult to prospectively identify risks and develop controls. In this article we describe the use of a many-model systems Human Factors and Ergonomics (HFE) approach in which three methods were applied to identify risks in a future ‘envisioned world’ AGI-based uncrewed combat aerial vehicle (UCAV) system. The findings demonstrate that there are many potential risks, but that the most critical arise not from poor performance, but when the AGI attempts to achieve goals at the expense of other system values, or when the AGI becomes ‘super-intelligent’ and humans can no longer manage it.

 


Human performance and automated operations: A regulatory perspective

Document

Author
Linn Iren Vestly Bergh, Kristian Solheim Teigen & Fredrik Dørum
Abstract
The petroleum industry is becoming increasingly dependent on digital systems, and companies have ambitious plans for increased use of digital technology along the entire value chain. Increased levels of digitalisation present major opportunities for efficiency in the oil and gas industry and can also contribute to enhanced resilience to major accident hazards. At the same time, new risks and uncertainties may be introduced. Based on developments in the industry and society in general, the Norwegian Petroleum Safety Authority (PSA) has in recent years pursued targeted knowledge development related to digitalisation and industrial cyber security. The PSA’s follow-up activities related to digitalisation initiatives in the industry have been based on input and experience from several knowledge development projects. In this paper we give insight into the main regulatory strategies we have used to follow up initiatives in the industry, present results from audits of automated drilling operations, and discuss the results from the follow-up activities in light of current regulatory developments.

 


An approach for modelling sociotechnical influences in mixed human-artificial agent workforce design

Document

Author
Ashleigh Brady and Neelam Naikar
Abstract
Advances in intelligent technologies have made it feasible to consider future workforces with a mix of human and sophisticated artificial actors. During periods of significant societal transformation, organisations must be responsive to a range of public and governmental concerns in order to remain viable or effective. The sociotechnical influences space (SIS) models the social, psychological, cultural, and technological factors that must be considered in designing a future workforce that is not only safe, productive, and healthy, but also acceptable to society. While these factors are largely studied in isolation by specialists in different disciplines, this model considers how the confluence of factors can shape the outcomes that are reached. The model uses a representational scheme that captures the relevant sociotechnical factors at different levels of the societal system, highlighting the stratum at which individual factors are open to modification and should therefore be addressed. The model also captures links or influences between sociotechnical factors, both within and across system levels, identifying how factors interact to produce desirable workforce outcomes of safety, productivity, health, and acceptability. A proof-of-concept study demonstrates how the SIS could be used to model sociotechnical influences of significance in mixed human-artificial agent workforce design, focusing on the Royal Australian Air Force as a hypothetical example. If such an approach is adopted, it should provide organisations with a systematic basis for informing policy development and for identifying organisational bodies and actors who, through their spheres of influence and responsibility, can shape the outcomes that are reached. Through these avenues, the range of sociotechnical issues can be addressed, preparing people and processes to capitalise on the benefits of a novel technological future rapidly and successfully, in a way that is safe, productive, healthy, and acceptable to society.
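To make the representational scheme concrete: the SIS can be read as a layered influence graph, with factors situated at system levels and influence links running within and across levels. The sketch below (Python) is a hypothetical illustration only; the factor names, levels, and links are invented for the example and are not drawn from the SIS model itself.

```python
# Hypothetical sketch of the kind of layered influence graph the SIS
# abstract describes: factors situated at system levels, with influence
# links within and across levels. All names and levels are invented.

from dataclasses import dataclass, field

@dataclass
class Factor:
    name: str
    level: str  # the societal stratum at which the factor is open to modification
    influences: list[str] = field(default_factory=list)  # factors it shapes

def cross_level_links(factors: dict[str, Factor]) -> list[tuple[str, str]]:
    """Return the influence links that cross system levels."""
    return [
        (f.name, target)
        for f in factors.values()
        for target in f.influences
        if factors[target].level != f.level
    ]

if __name__ == "__main__":
    fs = {f.name: f for f in [
        Factor("public trust in AI", "society", ["workforce policy"]),
        Factor("workforce policy", "organisation", ["crew-agent teaming"]),
        Factor("crew-agent teaming", "work system"),
    ]}
    # Both invented links happen to cross levels, so both are printed.
    print(cross_level_links(fs))
```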

 


Allocation of Function in the era of Artificial Intelligence: a 60-year-old paradigm challenged

Document

Author
Nick Gkikas
Abstract
The Fathers of the discipline of Ergonomics and Human Factors used their scientific research and real-life experiences of technological development during WWII and the first years of peace that followed to propose a set of principles for Human-Machine Interaction (HMI). These principles stood the test of time and for many years informed common applications of the discipline, such as allocation of function between human and machine. It is only recently, with the advancement and generalisation of certain underlying technologies, that forms of Artificial Intelligence (AI), that is, machines and systems with non-deterministic behavioural characteristics, have become operational. The underlying specification of those machines and systems appears to challenge some of the underlying assumptions made by the Fathers of the discipline. The present article revisits those principles of HMI, identifies the changes in the underlying assumptions, and discusses the implications of those changes for the discipline of Ergonomics and Human Factors.

 


Applying artificial intelligence on electronic flight bag for single pilot operations

Document

Author
Takashi Nagasawa
Abstract
Single pilot operations (SPO) is an operational concept that the commercial aviation industry is considering. Artificial Intelligence (AI) hosted on an Electronic Flight Bag (EFB) has the potential to become a buddy for the pilot in SPO. This short paper presents an operational concept for SPO with an EFB-hosted AI and, based on empirical exploration, discusses how AI can be integrated with EFB design.

 


Treating uncertainty in sociotechnical systems

Document

Author
Mark Andrew
Abstract
Design goals guide design efforts, but complex systems can lead to designers’ intentions being eclipsed. This paper’s proposition is that sociotechnical systems design offers scope for improved reliability, and it is built on three observations about current design practice. First, design teams seek cooperative cognition to work together, but inadequately understood outcome scenarios can impoverish joint understanding. Second, design team collaboration is bounded by innate psychological biases, which can spoil design decisions. Third, some views of risk in design thinking suffer from a limited conception of uncertainty and its influence. The paper examines these constraints (referencing the reach of Artificial Intelligence as one example design domain) and considers how they may be addressed in design practice.

 


Developing an Explainable AI Recommender System

Document

Author
Prabjot Kandola & Chris Baber
Abstract
We used a theoretical framework of human-centred explainable artificial intelligence (XAI) as the basis for the design of a recommender system, which we evaluated through a user trial. Our primary measures were the degree to which users agreed with the recommendations and the degree to which user decisions changed following the interaction. We demonstrate that interacting with the recommender system resulted in users having a clearer understanding of the features that contribute to their decision (even if they did not always agree with the recommender system’s decision or change their own decision). We argue that the design illustrates the XAI framework and supports the proposal that explanation involves a two-stage dialogue.

 


Human Factors Guidance for Robotic and Autonomous Systems (RAS)

Document

Author
Claire Hillyer, Hannah State-Davey, Nicole Hooker, Richard Farry, Russell Bond, James Campbell, Phillip Morgan, Dylan Jones, Juan D. Hernández Vega & Philip Butler
Abstract
This paper outlines recent (2021/2022) work to produce Human Factors (HF) guidance to support the design, development, evaluation, and acquisition of Robotic and Autonomous Systems (RAS).

 


How sensemaking by people and artificial intelligence might involve different frames

Document

Author
Hebah Bubakr and Chris Baber
Abstract
Sensemaking can involve selecting an appropriate frame to explain a given set of data. The selection of the frame (and the definition of its appropriateness) can depend on the prior experience of the sensemaker as much as on the availability of data. Moreover, artificial intelligence and machine learning systems depend on knowledge elicited from human experts; yet if we trained these systems to perform and think in the same way as humans, many of the resulting tools would be unacceptable as decision criteria, because people consider personal factors that a machine should not use. In this paper, we consider how an artificial intelligence system used to filter curricula vitae (or résumés) might apply frames that result in socially unacceptable decisions.

 


A framework for explainable AI

Document

Author
Chris Baber, Emily McCormick & Ian Apperley
Abstract
The issue of ‘explanation’ has become prominent in automated decision aiding, particularly when those aids rely on Artificial Intelligence (AI). In this paper, we propose a formal framework of ‘explanation’ which allows us to define different types of explanation. We provide use-cases to illustrate how explanation can differ, both in human-human and human-agent interactions. At the heart of our framework is the notion that explanation involves common ground, in which two parties are able to align the Features to which they attend and the type of Relevance that they apply to these Features. Managing alignment of Features is, for the most part, relatively easy: in human-human explanation, people might begin an explanation by itemizing the Features they are using (and people typically mention only one or two). However, providing Features without an indication of Relevance is unlikely to produce a satisfactory explanation. This implies that explanations that only present Features (or Clusters of Features) are incomplete. However, most Explainable AI provides output only at the level of Features or Clusters. From this, the user has to infer Relevance by making assumptions as to the beliefs that could have led to that output. But, as the reasoning applied by the human is likely to differ from that of the AI system, such inference is not guaranteed to be an accurate reflection of how the AI system reached its decision. To this end, more work is required to develop interactive explanation, so that the human is able to define and test these inferences and compare them with the AI system’s reasoning.
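As a concrete illustration of the Features-versus-Relevance distinction, the sketch below contrasts an output that names only Features with one that pairs each Feature with a statement of why it mattered. This is a hypothetical rendering in Python, not the authors’ implementation; all feature names, values, and weights are invented.

```python
# Minimal, hypothetical sketch of the Features vs. Relevance distinction.
# The feature names and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    value: float
    weight: float  # signed contribution of this feature to the decision

def feature_only_explanation(features: list[Feature]) -> str:
    """What much current XAI offers: Features alone, no Relevance."""
    top = sorted(features, key=lambda f: abs(f.weight), reverse=True)[:2]
    return "Decision was based on: " + ", ".join(f.name for f in top)

def feature_relevance_explanation(features: list[Feature]) -> str:
    """Features paired with a statement of why each mattered (Relevance)."""
    top = sorted(features, key=lambda f: abs(f.weight), reverse=True)[:2]
    parts = [
        f"{f.name} = {f.value} ({'raised' if f.weight > 0 else 'lowered'} "
        f"the score by {abs(f.weight):.2f})"
        for f in top
    ]
    return "Decision was based on: " + "; ".join(parts)

if __name__ == "__main__":
    fs = [Feature("income", 42_000, 0.35),
          Feature("debt_ratio", 0.6, -0.52),
          Feature("tenure_years", 3, 0.08)]
    print(feature_only_explanation(fs))       # Features only: incomplete
    print(feature_relevance_explanation(fs))  # Features plus Relevance
```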

 


Ergonomic constraints for astronauts: challenges and opportunities today and for the future

Document

Author
Martin Braddock, Konrad Szocik & Riccardo Campa
Abstract
Manned spaceflight is ergonomically constrained by living and working in a confined space in microgravity, where astronauts on both short- and long-duration missions are exposed to daily radiation levels well above those received on Earth. Living in microgravity, especially on long-duration missions aboard the International Space Station, has deleterious physiological and psychological effects on astronaut health, and astronauts may on just one mission receive a cumulative radiation dose normally received in a lifetime on Earth. It is unrealistic at present to contemplate continuous missions of longer than one year, and to mitigate current ergonomic constraints, space agencies have outlined roadmaps to introduce artificial gravity and develop strategies for conferring human resistance to radiation. In parallel, the concept of whole brain emulation (WBE) and the ‘uploading’ of human consciousness onto a platform within the rapidly growing field of artificial intelligence is one scenario which may remove the future requirement for a physical crew. This paper highlights incidents and accidents which have resulted in astronaut injury because of ergonomic constraints in space, considers the timing of deployment of technology roadmaps, and draws together multi-disciplinary fields to project a future whereby deep space travel may be manned by an e-crew, devoid of many of the established ergonomic boundaries that apply to human astronauts.

 


The role of Human Factors and Ergonomics in AI operation and evaluation

Document

Author
Nikolaos Gkikas, Paul Salmon & Christopher Baber
Abstract
The present paper sets the scene for a recorded workshop exploring the critical role of Human Factors and Ergonomics in the development, operation, and evaluation of Artificial Intelligence. First, we lay out some foundations of the multidisciplinary developments commonly placed under the umbrella of “Artificial Intelligence/AI” and propose some fundamental definitions to structure our arguments and provide a basis for the workshop. Then we explore the role of Human Factors and Ergonomics methods in ensuring that AI systems contribute to our disciplinary goal of enhancing human health and wellbeing. In closing, we propose a research agenda designed to ensure that Human Factors and Ergonomics is applied in future AI developments.

 


Using SUS for Current and Future AI

Document

Author
Richard Farry
Abstract
The System Usability Scale (SUS) was assessed for its relevance and ease of use when assessing an AI capable of human-like interaction. Participants used SUS to assess Outlook (an email client), contemporary consumer-grade AI interaction partners (smartphone digital assistants), and human teammates as a proxy ‘system’ for future human-like AI interaction partners. The results show that participants considered SUS relevant and easy to use for contemporary consumer-grade AI interaction partners, but not for human teammates. However, there was no meaningful difference in participants’ ability to apply SUS across the email client, the digital assistants, and the human teammates, suggesting that SUS can in practice be used for all of these kinds of system.
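For context, SUS is a ten-item Likert questionnaire with a standard 0-100 scoring rule, and that rule applies unchanged whatever the ‘system’ being rated, whether email client, digital assistant, or human teammate. A minimal sketch of the standard scoring follows, in Python; the example responses are invented.

```python
# Minimal sketch of standard SUS scoring. The example responses are invented.

def sus_score(responses: list[int]) -> float:
    """Compute a 0-100 SUS score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions (0-40) are scaled by 2.5 to give 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each from 1 to 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

if __name__ == "__main__":
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```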