Human Factors & AI In Nuclear Regulatory Consensus As the Trust Bucket

Author
Gorby Jandu
Abstract
In 2024, the inaugural UK AI awards were held and the Nobel Prize in Physics was awarded for foundational work in AI. Both events further intensified the mainstream publicity given to AI, in human automation and machine self-learning alike. In contrast, the uptake of AI in more industrial and specialist settings has been noticeably measured, especially beyond simple automation. In the nuclear industry, for example, Human and Organisational Factors (HOF) have yet to benefit from AI en masse. This is at odds with the potential AI presents for mitigating nuclear harm, itself the main concern when scaling nuclear power plants. However, change is afoot. In the last two years, important international nuclear regulatory guidelines for adopting AI in nuclear HOF have been published. This is a major step change for a decidedly risk-averse industry, as it encourages the development of AI. But the message remains clear: AI must engender ‘top-down’ trust in the technology’s deterministic predictability. Leading up to the published guidance, two world-leading ‘sandbox’ trials were conducted under regulatory auspices. Below, I discuss one in detail to show that the nuclear sector can provide HOF practitioners with much-needed empirical data on the potential of AI. If nuclear can utilise AI, a compelling case exists for other fields.