Trust Development and Explainability: A Longitudinal Study with a Personalized Assistive System

Setareh Zafari, Jesse de Pagter, Guglielmo Papagni, Alischa Rosenstein, Michael Filzmoser, Sabine T. Koeszegi

Publication: Contribution to journal › Article › peer-reviewed

Abstract

This article reports on a longitudinal experiment in which the influence of an assistive system's malfunctioning and transparency on trust was examined over a period of seven days. To this end, we simulated the system's personalized recommendation features to support participants with the task of learning new texts and taking quizzes. Using a 2×2 mixed design, the system's malfunctioning (correct vs. faulty) and transparency (with vs. without explanation) were manipulated as between-subjects variables, whereas exposure time was used as a repeated-measures variable. A combined qualitative and quantitative methodological approach was used to analyze the data from 171 participants. Our results show that participants perceived the system making a faulty recommendation as a trust violation. Additionally, a trend emerged from both the quantitative and qualitative analyses regarding how the availability of explanations (even when not accessed) increased the perception of a trustworthy system.
Original language: English
Number of pages: 20
Journal: Multimodal Technologies and Interaction
Volume: 8
Issue: 3
DOIs
Publication status: Published - Jan. 2024

Research Field

  • Former Research Field - Human-centered Automation and Assistance
