Abstract
This article reports on a longitudinal experiment in which the influence of an assistive system's malfunctioning and transparency on trust was examined over a period of seven days. To this end, we simulated the system's personalized recommendation features to support participants with the task of learning new texts and taking quizzes. Using a 2×2 mixed design, the system's malfunctioning (correct vs. faulty) and transparency (with vs. without explanation) were manipulated as between-subjects variables, whereas exposure time was used as a repeated-measures variable. A combined qualitative and quantitative methodological approach was used to analyze the data from 171 participants. Our results show that participants perceived the system making a faulty recommendation as a trust violation. Additionally, a trend emerged from both the quantitative and qualitative analyses regarding how the availability of explanations (even when not accessed) increased the perception of a trustworthy system.
Original language | English |
---|---|
Number of pages | 20 |
Journal | Multimodal Technologies and Interaction |
Volume | 8 |
Issue number | 3 |
DOIs | |
Publication status | Published - Jan 2024 |
Research Field
- Human-centered Automation and Assistance
Keywords
- Trust
- Explainability
- Transparency
- Assistive Systems