Facilitating the Practical Implementation of Improved Explainability and Visual Representation for Confidence and Uncertainty in Speaker Models
Research Team: Helen Armstrong, Matt Peterson, Rebecca Planchart, Kweku Baidoo
Problem Statement: The Laboratory for Analytic Sciences (LAS) has established that there are significant challenges inherent to the calibration of trust within human-machine teams in the intelligence community (IC). The visualization of confidence and uncertainty, embedded within a user interface and user experience, should help language analysts appropriately calibrate trust via model transparency and interpretability. Such calibration could enable an analyst to evaluate model outputs more effectively when making a decision. Analysts should be able to “traverse different layers” within a user interface to access increasingly granular explanations of output (Knack et al., 2022, p. 5). If the user interface does not provide these explanations in a useful and usable format, analysts may distrust or overtrust model outputs (Lee & See, 2004, p. 73). To support the calibration of trust between analysts and speaker models, an effective visualization of confidence and uncertainty must be paired with a user interface and user experience that enable progressive disclosure of layered explanations, along with a dynamic system that allows analysts to adjust risk parameters in light of the larger mission context.