Abstract
Counterfactual explanations are a common approach to machine learning interpretability, as they indicate the difference between two predicted classes. However, automatically extracting sample data points to explain the decision of time-series models is still an open research issue. Therefore, we propose a technique that can automatically identify factual and counterfactual examples from within the neighborhood of data points requiring explanation. We further extract human-interpretable time-series features to explain model decisions. We apply these techniques to explain the decisions of a univariate binary time-series classifier trained for predicting the health state of a turbofan engine.
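The abstract gives no implementation details, but the core idea of retrieving a counterfactual example from the neighborhood of a query point can be sketched as a nearest-neighbor search restricted to samples with a different predicted class. All names and the distance choice below are illustrative assumptions, not taken from the paper:

```python
import math

def nearest_counterfactual(query, query_label, samples, labels):
    """Return the sample closest to `query` whose predicted label differs.

    `samples` is a list of equal-length feature vectors (e.g. flattened
    time-series windows); `labels` holds the classifier's prediction for
    each sample. This is a hypothetical sketch, not the paper's method.
    """
    best, best_dist = None, math.inf
    for x, y in zip(samples, labels):
        if y == query_label:
            continue  # same-class samples are factual examples; skip them
        d = math.dist(query, x)  # Euclidean distance in feature space
        if d < best_dist:
            best, best_dist = x, d
    return best, best_dist

# Toy univariate time-series windows with binary health labels (0/1).
samples = [[0.1, 0.2, 0.1], [0.9, 1.0, 1.1], [0.2, 0.1, 0.3]]
labels = [0, 1, 0]
cf, dist = nearest_counterfactual([0.15, 0.2, 0.2], 0, samples, labels)
```

A factual example could be retrieved the same way by keeping, rather than skipping, samples that share the query's predicted class.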
| Original language | English |
|---|---|
| Title | ACM/CHI Workshop on HCXAI 2022 |
| Number of pages | 8 |
| Publication status | Published - 2023 |
| Event | ACM/CHI Workshop on HCXAI 2022 - online, duration: 12 May 2022 → 13 May 2022 |
Workshop
| Workshop | ACM/CHI Workshop on HCXAI 2022 |
|---|---|
| Period | 12/05/22 → 13/05/22 |
Research Field
- Former Research Field - Data Science