2021
31.
Jonathan Dodge; Roli Khanna; Jed Irvine; Kin-ho Lam; Theresa Mai; Zhengxian Lin; Nicholas Kiddle; Evan Newman; Andrew Anderson; Sai Raja; Caleb Matthews; Christopher Perdriau; Margaret Burnett; Alan Fern
After-Action Review for AI (AAR/AI) (Journal Article)
In: ACM Transactions on Interactive Intelligent Systems, vol. 11, no. 3–4, pp. 29:1–29:35, 2021, ISSN: 2160-6455.
Tags: AI, Human-Computer Interaction
@article{dodge_after-action_2021,
title = {After-Action Review for AI (AAR/AI)},
author = {Jonathan Dodge and Roli Khanna and Jed Irvine and Kin-ho Lam and Theresa Mai and Zhengxian Lin and Nicholas Kiddle and Evan Newman and Andrew Anderson and Sai Raja and Caleb Matthews and Christopher Perdriau and Margaret Burnett and Alan Fern},
url = {https://doi.org/10.1145/3453173},
doi = {10.1145/3453173},
issn = {2160-6455},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
journal = {ACM Transactions on Interactive Intelligent Systems},
volume = {11},
number = {3-4},
pages = {29:1--29:35},
abstract = {Explainable AI is growing in importance as AI pervades modern society, but few have studied how explainable AI can directly support people trying to assess an AI agent. Without a rigorous process, people may approach assessment in ad hoc ways\textemdash{}leading to the possibility of wide variations in assessment of the same agent due only to variations in their processes. AAR, or After-Action Review, is a method some military organizations use to assess human agents, and it has been validated in many domains. Drawing upon this strategy, we derived an After-Action Review for AI (AAR/AI), to organize ways people assess reinforcement learning agents in a sequential decision-making environment. We then investigated what AAR/AI brought to human assessors in two qualitative studies. The first investigated AAR/AI to gather formative information, and the second built upon the results, and also varied the type of explanation (model-free vs. model-based) used in the AAR/AI process. Among the results were the following: (1) participants reported that AAR/AI helped to organize their thoughts and think logically about the agent, (2) AAR/AI encouraged participants to reason about the agent from a wide range of perspectives, and (3) participants were able to leverage AAR/AI with the model-based explanations to falsify the agent's predictions.},
keywords = {AI, Human-Computer Interaction},
pubstate = {published},
tppubtype = {article}
}
Explainable AI is growing in importance as AI pervades modern society, but few have studied how explainable AI can directly support people trying to assess an AI agent. Without a rigorous process, people may approach assessment in ad hoc ways, leading to the possibility of wide variations in assessment of the same agent due only to variations in their processes. AAR, or After-Action Review, is a method some military organizations use to assess human agents, and it has been validated in many domains. Drawing upon this strategy, we derived an After-Action Review for AI (AAR/AI), to organize ways people assess reinforcement learning agents in a sequential decision-making environment. We then investigated what AAR/AI brought to human assessors in two qualitative studies. The first investigated AAR/AI to gather formative information, and the second built upon the results, and also varied the type of explanation (model-free vs. model-based) used in the AAR/AI process. Among the results were the following: (1) participants reported that AAR/AI helped to organize their thoughts and think logically about the agent, (2) AAR/AI encouraged participants to reason about the agent from a wide range of perspectives, and (3) participants were able to leverage AAR/AI with the model-based explanations to falsify the agent's predictions.