Reimagining explainability in expert contexts: setting foundations for effective expert-AI interactions through design and collaboration
Author: Simkute, Auste
Abstract
Domain experts increasingly rely on Artificial Intelligence (AI) systems to support their decision-making. However, they often struggle to build meaningful trust in opaque and complex technologies, or find that these technologies fit poorly within their workflows. As a result, experts either over-rely on AI systems or reject them instead of benefiting from them. Explainable AI (XAI) has been proposed as one solution for supporting experts' interactions with AI, but despite extensive research efforts, XAI techniques have not proven effective in practice. This thesis argues that expert-oriented explainability should stem from ongoing collaboration and mutual understanding between experts and the stakeholders involved in its development. Explainability should also support long-term learning and experts' ability to apply their expertise, through design informed by empirical knowledge from Human Factors and Cognitive Psychology research.
I first review XAI development in recent years, tracing how its focus has shifted from predominantly technical solutions to more user-centred ones, and from targeting narrow groups of technical users to a broad range of stakeholders with varying levels of technical background. Next, based on an ethnographically informed study with experts and software developers, I outline foundations for explainability established through collaboration and feedback. I then review the Human Factors research literature and discuss how this knowledge could be used to build systems that empower experts. Informed by the reviewed literature and ideation workshops with interface designers, I present a conceptual design framework for aligning explainability interface features with expert decision-making strategies across varying risk and time-pressure contexts. Drawing on the Cognitive Psychology and Human Factors literature, I introduce learning and cognitive engagement strategies for explainability that would motivate experts and foster expertise development. Finally, I connect all three parts, showing how each of these aspects is necessary for explainability to be effective in expert contexts.