Trust Development and Explainability: A Longitudinal Study with a Personalized Assistive System

Setareh Zafari, Jesse de Pagter, Guglielmo Papagni, Alischa Rosenstein, Michael Filzmoser, Sabine T. Koeszegi

Research output: Contribution to journal › Article › peer-review

Abstract

This article reports on a longitudinal experiment in which the influence of an assistive system's malfunctioning and transparency on trust was examined over a period of seven days. To this end, we simulated the system's personalized recommendation features to support participants with the task of learning new texts and taking quizzes. Using a 2×2 mixed design, the system's malfunctioning (correct vs. faulty) and transparency (with vs. without explanation) were manipulated as between-subjects variables, whereas exposure time was used as a repeated-measures variable. A combined qualitative and quantitative methodological approach was used to analyze the data from 171 participants. Our results show that participants perceived the system making a faulty recommendation as a trust violation. Additionally, a trend emerged from both the quantitative and qualitative analyses regarding how the availability of explanations (even when not accessed) increased the perception of a trustworthy system.
Original language: English
Number of pages: 20
Journal: Multimodal Technologies and Interaction
Volume: 8
Issue number: 3
DOIs
Publication status: Published - Jan 2024

Research Field

  • Human-centered Automation and Assistance

Keywords

  • Trust
  • Explainability
  • Transparency
  • Assistive Systems
