Function Allocation for Responsible Artificial Intelligence: How do we allocate trust and responsibility?

Authors
Patrick Waterson1, Chris Baber2, Edmund Hunt3, Sanja Milivojevic4, Sally Maynard1 & Mirco Musolesi5
Abstract
We consider how guidelines for Responsible Artificial Intelligence (RAI) need to be adapted to address the challenges of Function Allocation (FA) in human-agent teams. We offer an approach that takes a system description, using Cognitive Work Analysis (CWA), to identify where responsibility for the consequences of actions might lie across the system. We propose that, in addition to the allocation of functions, analysis of the system needs to identify decision points (where agents have a choice of action to perform) and responsibility points (where agents identify the consequences of their decisions). We illustrate this approach with example experiments. We put forward a set of open challenges and questions facing researchers in the areas of RAI and FA, pointing to the need for greater emphasis on the issues of responsibility, trust, and accountability in new forms of automation. We conclude with pointers to how these challenges might be addressed in the coming years.