Authors
Paul M. Salmon, Brandon King, Gemma J. M. Read, Jason Thompson, Tony Carden, Chris Baber, Neville A. Stanton & Scott McLean
Abstract
There are concerns that Artificial General Intelligence (AGI) could pose an existential threat to humanity; however, because AGI does not yet exist, it is difficult to prospectively identify risks and develop controls. In this article we describe the use of a 'many-model' systems Human Factors and Ergonomics (HFE) approach in which three methods were applied to identify risks in a future 'envisioned world' AGI-based uncrewed combat aerial vehicle (UCAV) system. The findings demonstrate that there are many potential risks, and that the most critical arise not from poor performance but from situations in which the AGI pursues its goals at the expense of other system values, or becomes 'super-intelligent' to the point where humans can no longer manage it.