Author
Wenhu Zhang & Chris Baber
Abstract
We asked 30 participants to put questions to an Interactive Voice Assistant (IVA) that we had modified to provide different levels of accuracy in its answers. The levels of actual accuracy were low (55%) or high (80%). We also told users what level of accuracy to expect (60% or 100%). This produced a set of six combinations of actual accuracy with expected accuracy (including the condition in which we did not tell users which level of accuracy to expect). As expected, when users experience a more reliable IVA (i.e., 80% vs. 55%), their rating of trust is higher, and when the IVA has high actual accuracy and they also expect accuracy to be high, their trust rating is higher still. However, expected accuracy seems to outweigh actual accuracy, particularly when actual performance is less than expected. Counterintuitively, this suggests that participants were not able to judge the actual accuracy of the IVA but instead relied on the expected accuracy.