Explaining Binary Time-Series Classification with Counterfactuals in an Industrial Use Case

Publication: Contribution in book or conference proceedings › Talk with paper in conference proceedings › Peer-reviewed

Abstract

Counterfactual explanations are a common approach to machine learning interpretability as they indicate the difference between two predicted classes. However, automatically extracting sample data points to explain the decision of time-series models is still an open research issue. Therefore, we propose a technique that can automatically identify factual and counterfactual examples from within the neighborhood of data points requiring explanation. We further extract human-interpretable time-series features to explain model decisions. We apply these techniques to explain the decisions of a univariate binary time-series classifier trained to predict the health state of a turbofan engine.
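The core idea of the abstract, i.e. retrieving a factual example (a neighboring series the model classifies the same way) and a counterfactual example (a nearby series classified as the opposite class) from the data surrounding a query point, can be sketched roughly as follows. This is an illustrative sketch only, not the paper's implementation: the function name, the use of Euclidean distance, and the simple nearest-neighbor search are all assumptions.

```python
import numpy as np

def nearest_examples(query, dataset, predict, k=1):
    """Illustrative sketch: find the nearest factual and counterfactual
    examples to `query` within a dataset of equal-length time series.

    A factual example receives the same predicted class as `query`;
    a counterfactual example receives the opposite class. Euclidean
    distance is used here purely for illustration; the paper may use a
    different time-series distance.
    """
    query_class = predict(query)
    # Distance from the query series to every series in the dataset.
    dists = np.linalg.norm(dataset - query, axis=1)
    order = np.argsort(dists)  # indices sorted by proximity
    preds = np.array([predict(x) for x in dataset])
    # Keep the k closest neighbors of each predicted class.
    factuals = [i for i in order if preds[i] == query_class][:k]
    counterfactuals = [i for i in order if preds[i] != query_class][:k]
    return factuals, counterfactuals
```

The returned indices point at concrete example series that can then be shown to a user, alongside interpretable features, to contrast the two predicted classes.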
Original language: English
Title: ACM/CHI Workshop on HCXAI 2022
Number of pages: 8
Publication status: Published - 2023
Event: ACM/CHI Workshop on HCXAI 2022 - online
Duration: 12 May 2022 to 13 May 2022

Workshop

Workshop: ACM/CHI Workshop on HCXAI 2022
Period: 12/05/22 to 13/05/22

Research Field

  • Former Research Field - Data Science
