Team Approaches and Applications
UC Berkeley
Approach: Salience-map attention mechanisms implemented in DNNs (Petsiuk 2021; Vasu et al. 2021). Application: Saliency maps for object detectors that let users identify which detector will be more accurate by reviewing sample detections and maps (Petsiuk 2021).
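The masking approach behind such saliency maps can be sketched compactly: probe the model with randomly masked inputs and accumulate each mask weighted by the score it obtains, in the spirit of Petsiuk et al.'s RISE (the detector variant additionally matches boxes, omitted here). A minimal sketch, assuming a `score_fn` that maps an HxWxC image to a scalar confidence for the class or detection of interest:

```python
import numpy as np

def rise_saliency(image, score_fn, n_masks=1000, grid=7, p_keep=0.5, rng=None):
    """Estimate a saliency map by probing score_fn with randomly masked images."""
    rng = rng or np.random.default_rng(0)
    H, W = image.shape[:2]
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # Coarse random mask upsampled to image size (nearest-neighbour here;
        # RISE proper uses smooth bilinear upsampling with random shifts).
        coarse = (rng.random((grid, grid)) < p_keep).astype(float)
        cell_h, cell_w = H // grid + 1, W // grid + 1
        mask = np.kron(coarse, np.ones((cell_h, cell_w)))[:H, :W]
        # Regions that stay visible in high-scoring probes accumulate weight.
        saliency += score_fn(image * mask[..., None]) * mask
    return saliency / n_masks
```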
Approach: Transduction of DNN states into natural-language explanations (Hendricks et al. 2021). Application: Explainable and advisable autonomous driving systems that fill in knowledge gaps; humans can evaluate the AI-generated explanations for navigation decisions (Kim et al. 2021; Watkins et al. 2021).
Charles River Analytics
Approach: Causal models of deep reinforcement learning policies that enable explanation-enhanced training by answering counterfactual queries (Druce et al. 2021; Witty et al. 2021). Application: Human-machine teaming gameplay in StarCraft II.
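The counterfactual-query pattern over a causal model follows the abduction-action-prediction recipe: recover the exogenous noise consistent with the observed episode, intervene on a variable, and replay. A toy sketch; every variable and equation here is a hypothetical illustration, not the teams' fitted models:

```python
def simulate(noise, do=None):
    """A tiny structural causal model of one episode.

    noise: exogenous terms recovered from the observed episode (abduction);
    do: optional {variable: forced value} interventions (action).
    """
    do = do or {}
    v = {}
    v["scouted"] = do.get("scouted", noise["u_scout"] > 0.5)
    v["built_defense"] = do.get("built_defense",
                                v["scouted"] and noise["u_build"] > 0.2)
    v["won"] = do.get("won", v["built_defense"] or noise["u_luck"] > 0.9)
    return v

noise = {"u_scout": 0.8, "u_build": 0.6, "u_luck": 0.3}
factual = simulate(noise)                                      # agent won
counterfactual = simulate(noise, do={"built_defense": False})  # without the defense?
print(factual["won"], counterfactual["won"])  # True False: the defense was pivotal
```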
Approach: A distilled version of a pedestrian detection model that uses convolutional autoencoders to condense activations into user-understandable "chunks."
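A minimal sketch of the activation-condensing idea, assuming a PyTorch backbone whose feature maps are compressed into a few human-reviewable "chunk" channels by a 1x1 convolutional autoencoder (channel counts are illustrative):

```python
import torch
import torch.nn as nn

class ActivationChunker(nn.Module):
    """Compress a layer's activations into a few channels a user can inspect."""
    def __init__(self, in_channels=256, n_chunks=8):
        super().__init__()
        self.encode = nn.Conv2d(in_channels, n_chunks, kernel_size=1)
        self.decode = nn.Conv2d(n_chunks, in_channels, kernel_size=1)

    def forward(self, acts):
        chunks = torch.relu(self.encode(acts))  # few maps, shown to the user
        return chunks, self.decode(chunks)      # reconstruction keeps them faithful

chunker = ActivationChunker()
acts = torch.randn(1, 256, 32, 32)              # stand-in for detector activations
chunks, recon = chunker(acts)
loss = nn.functional.mse_loss(recon, acts)      # the distillation objective, sketched
```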
Carnegie Mellon University
Approach: Robustified classifiers with salient gradients (Yeh and Ravikumar 2021). Application: Interactive debugger interface for visualizing poisoned training datasets, applied to the IARPA TrojAI dataset (Sun et al. 2021).
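The robustification lives in the training procedure; the explanation itself builds on the plain input-gradient saliency map, sketched here for a PyTorch classifier:

```python
import torch

def gradient_saliency(model, x, target_class):
    """Return |d(score)/d(input)|, the simplest gradient saliency map."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target_class].backward()
    return x.grad.abs().sum(dim=1)  # aggregate over colour channels
```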
Oregon State University
Approach: iGOS++ visual saliency algorithm (Khorram et al. 2021). Application: Debugging a COVID-19 diagnosis chest x-ray classifier.
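iGOS++ itself descends along integrated gradients with additional regularization; the underlying objective, finding a small deletion mask that destroys the class score, can be sketched with plain gradient descent (a simplification; hyperparameters are illustrative):

```python
import torch

def deletion_mask(model, x, baseline, target, steps=100, lam=0.05, lr=0.1):
    """Optimize a sparse mask whose deletion (blend toward baseline) drops the score."""
    logit_mask = torch.zeros(1, 1, *x.shape[2:], requires_grad=True)
    opt = torch.optim.Adam([logit_mask], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(logit_mask)      # 0 = keep pixel, 1 = delete it
        perturbed = x * (1 - m) + baseline * m
        score = torch.softmax(model(perturbed), dim=1)[0, target]
        loss = score + lam * m.mean()      # lower the class score with a small mask
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(logit_mask).detach()
```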
Approach: Quantized bottleneck networks for deep RL algorithms. Application: Understanding recurrent policy networks through extracted state machines and key decision points in video games and control tasks (Danesh et al. 2021).
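The bottleneck idea: pass the policy's hidden state through an autoencoder whose latent units are rounded to {-1, 0, 1}, so the continuous memory becomes finite and a state machine can be enumerated from it. A minimal sketch with a straight-through gradient:

```python
import torch
import torch.nn as nn

class QuantizedBottleneck(nn.Module):
    """Autoencode a hidden state through a ternary-quantized latent."""
    def __init__(self, dim, latent_dim):
        super().__init__()
        self.enc = nn.Linear(dim, latent_dim)
        self.dec = nn.Linear(latent_dim, dim)

    def forward(self, h):
        z = torch.tanh(self.enc(h))
        zq = z + (torch.round(z) - z).detach()  # quantize to -1/0/1; straight-through grad
        return self.dec(zq), zq                 # zq enumerates the discrete memory states
```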
Approach: Explanation analysis process for reinforcement learning systems (Dodge et al. 2021). Application: After-action review of AI decisions that mirrors the Army's after-action review process, helping users understand why the AI made its decisions and improving explainability and trust in the AI (Mai et al. 2020).
Approach: Reinforcement learning model with embedded self-predictions. Application: Contrastive explanations of action choices in terms of human-understandable properties of future outcomes (Lin et al. 2021).
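Schematically, the model predicts human-meaningful properties of each action's future outcome and scores actions as a weighted sum of those predictions, so "why action a rather than b" reduces to per-property differences. The property names and architecture below are hypothetical placeholders:

```python
import torch
import torch.nn as nn

FEATURES = ["damage_taken", "fuel_used", "goals_reached"]  # hypothetical outcome properties

class SelfPredictingQ(nn.Module):
    """Q-values composed from predicted future-outcome properties (sketch)."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.n_actions, self.n_feats = n_actions, len(FEATURES)
        self.gvf = nn.Linear(obs_dim, n_actions * self.n_feats)  # per-action predictions
        self.w = nn.Parameter(torch.randn(self.n_feats))         # Q(s, a) = w . f(s, a)

    def forward(self, obs):
        f = self.gvf(obs).view(-1, self.n_actions, self.n_feats)
        return f @ self.w, f

    def contrast(self, f, a, b):
        """Each property's contribution to Q(a) - Q(b): the contrastive explanation."""
        return dict(zip(FEATURES, ((f[0, a] - f[0, b]) * self.w).detach().tolist()))
```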
Rutgers University
Approach: Bayesian teaching to select examples and features from the training data that explain model inferences to a domain expert (Yang et al. 2021). Application: Interactive tool for analyzing a pneumothorax detector for chest x-rays; a targeted user study engaging ~10 radiologists demonstrated the effectiveness of the explanations (Folke et al. 2021).
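The teaching objective can be caricatured as: pick the example subset under which a model of the learner most strongly reproduces the target inference. A brute-force sketch with a logistic-regression stand-in for the learner (not the team's Bayesian formulation; binary numpy labels are assumed):

```python
import itertools
from sklearn.linear_model import LogisticRegression

def best_teaching_set(X, y, x_query, target_label, k=2):
    """Search k-subsets of (X, y) for the one that best teaches the target inference."""
    best_idx, best_p = None, -1.0
    for idx in itertools.combinations(range(len(X)), k):
        idx = list(idx)
        if len(set(y[idx])) < 2:             # the learner needs to see both classes
            continue
        learner = LogisticRegression().fit(X[idx], y[idx])
        p = learner.predict_proba([x_query])[0, target_label]
        if p > best_p:
            best_idx, best_p = idx, p
    return best_idx, best_p                  # the examples shown to the domain expert
```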
UT Dallas
Approach: Tractable probabilistic logic models in which local explanations are queries over probabilistic models and global explanations are generated using logic, probability, and directed trees and graphs. Application: Activity recognition in videos using the TACoS cooking-task and WetLab scientific-lab-procedure datasets; generates explanations about whether activities are present in the video data (Chiradeep et al. 2021).
PARC
Approach: Reinforcement learning implementing a hierarchical multifactor framework for decision problems. Application: Simulated drone flight mission-planning task in which users learned to predict each agent's behavior in order to choose the best flight plan; a user study tested the usefulness of AI-generated local and global explanations in helping users predict AI behavior (Stefik et al. 2021).
SRI
Approach: Spatial attention VQA (SVQA) and spatial-object attention BERT VQA (SOBERT) (Ray et al. 2021; Alipour et al. 2021). Application: Attention-based (Grad-CAM) explanations for MRI brain-tumor segmentation; visual salience models for video Q&A.
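Grad-CAM itself is standard: pool the target score's gradient over each channel of a chosen convolutional layer and use the pooled weights to combine that layer's activation maps. A minimal PyTorch sketch:

```python
import torch

def grad_cam(model, conv_layer, x, target_class):
    """Class-activation map from one conv layer's activations and gradients."""
    store = {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: store.update(acts=o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: store.update(grads=go[0]))
    model(x)[0, target_class].backward()
    h1.remove()
    h2.remove()
    weights = store["grads"].mean(dim=(2, 3), keepdim=True)  # channel importance
    cam = torch.relu((weights * store["acts"]).sum(dim=1))   # keep positive evidence
    return cam / (cam.max() + 1e-8)
```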
Raytheon BBN
Approach: CNN-based one-shot detector that uses network dissection to identify the most salient features (Bau et al. 2018); explanations produced as heat maps and text (Selvaraju et al. 2017); human-machine common-ground modeling. Application: Indoor navigation with a robot (in collaboration with GA Tech); video Q&A; human-assisted one-shot classification system that identifies the most salient features (Ferguson et al. 2021).
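Network dissection scores each unit by the IoU between its thresholded activations and pixel-level masks of a labeled concept over a probe set; units whose IoU clears a threshold are named by that concept. A minimal sketch:

```python
import numpy as np

def unit_concept_iou(unit_acts, concept_masks, quantile=0.995):
    """IoU between one unit's high activations and one concept's masks.

    unit_acts: list of HxW activation maps (upsampled to image size);
    concept_masks: matching HxW binary masks for the concept.
    """
    thresh = np.quantile(np.concatenate([a.ravel() for a in unit_acts]), quantile)
    inter = union = 0
    for act, mask in zip(unit_acts, concept_masks):
        fires = act >= thresh                 # unit 'on' above its top-quantile level
        inter += np.logical_and(fires, mask).sum()
        union += np.logical_or(fires, mask).sum()
    return inter / max(union, 1)
```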
Texas A&M
Approach: Mimic-learning methodology to detect falsified text (Yuan et al. 2021; Linder et al. 2021). Application: News-claim truth classification.
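Mimic learning in its simplest form: label unlabeled data with the opaque teacher, then fit an interpretable student to those labels so its decision paths serve as readable rationales. A sketch with a decision-tree student (the student choice and `teacher_predict` interface are assumptions):

```python
from sklearn.tree import DecisionTreeClassifier

def distill_to_tree(teacher_predict, X_unlabeled, max_depth=4):
    """Fit a shallow, inspectable student to a black-box teacher's predictions."""
    teacher_labels = teacher_predict(X_unlabeled)   # assumed callable over a batch
    student = DecisionTreeClassifier(max_depth=max_depth)
    student.fit(X_unlabeled, teacher_labels)
    return student                                  # tree paths double as explanations
```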
UCLA
Approach: CX-ToM framework, an XAI framework based on theory of mind that poses explanation as an iterative communication process (a dialog) between the machine and the human user, and replaces standard attention-based explanations with novel counterfactual explanations called fault-lines (Akula et al. 2021; Akula et al. 2020). Application: Image classification; human body-pose estimation.
Approach: A learning framework that acquires interpretable knowledge representations, with an augmented-reality system as the explanation interface (Edmonds et al. 2019; Liu et al. 2021). Application: A robot learns to open medicine bottles with locks and allows user interventions to correct wrong behaviors.
Approach: Theory-of-mind explanation network with multi-level belief updates from learning (Edmonds et al. 2021, in preparation). Application: Minesweeper-like game in which an agent must find an optimal path.
IHMC
Explanation Scorecard: Evaluates the utility of an explanation. Defines seven levels of capability, from the null case of no explanation, to surface features (e.g., heat maps), to AI introspections such as choice logic, to diagnoses of the reasons for failures.
Cognitive Tutorial: A straightforward way to help users understand a complex system is to provide a tutorial up front, but the tutorial should not be restricted to how the system works (Hoffman and Clancey 2021).
Stakeholder Playbook: Survey of stakeholder needs, including development team leaders, trainers, system developers, and user team leaders in industry and government.
AI Evaluation Guidebook: Identifies methodological shortcomings in evaluating XAI techniques, spanning experimental design, control conditions, experimental tasks and procedures, and statistical methodologies.