Decoding the Unknown: Unveiling Industrial Time Series Classification with Counterfactuals

Publication: Contribution to journal › Article › Peer review

Abstract

Deep learning models for time series prediction have become popular with the rise of IoT and the availability of sensor data. However, their lack of explainability hampers their use in critical industrial applications. While model-agnostic approaches such as LIME and SHAP have been applied to time series classification, their suitability is limited; for example, the random sampling process used by LIME leads to unstable explanations. To address this issue, we propose a counterfactual explanation approach that provides interpretable insights into time series predictions. We choose an industrial use case, determining machine health, and employ k-means clustering together with Dynamic Time Warping (DTW) to handle the temporal dimension. DTW compares and aligns two time series by discovering the optimal alignment path that minimizes disparities in their temporal patterns. We explain the model's decisions using local surrogate decision trees, analysing feature importance and decision cuts.
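The pipeline outlined in the abstract (DTW-based distances, k-means clustering of time series, and a local surrogate decision tree) can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: it assumes the tslearn and scikit-learn libraries, and the sensor data, summary features and parameter values are hypothetical.

import numpy as np
from tslearn.metrics import dtw_path
from tslearn.clustering import TimeSeriesKMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical machine-health sensor data: 60 series of 100 time steps each.
healthy = np.sin(np.linspace(0, 6, 100)) + rng.normal(0.0, 0.1, size=(30, 100))
degraded = 0.5 * np.sin(np.linspace(0, 6, 100)) + rng.normal(0.0, 0.1, size=(30, 100))
X = np.vstack([healthy, degraded])

# DTW aligns two series by finding the warping path that minimises the
# cumulative distance between their temporal patterns.
path, dist = dtw_path(X[0], X[30])
print(f"DTW distance between a healthy and a degraded series: {dist:.3f}")

# k-means with DTW as the metric clusters the raw series while
# respecting the temporal dimension.
km = TimeSeriesKMeans(n_clusters=2, metric="dtw", random_state=0)
labels = km.fit_predict(X)

# Local surrogate: a shallow decision tree over simple summary features
# approximates the cluster assignment and exposes feature importances
# and the decision cuts separating the groups.
features = np.column_stack([X.mean(axis=1), X.std(axis=1), X.max(axis=1)])
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(features, labels)
print(export_text(surrogate, feature_names=["mean", "std", "max"]))
print("feature importances:", surrogate.feature_importances_)

Counterfactuals can then be read off such a surrogate by asking how close a given series' summary features are to the tree's decision cuts, i.e. what minimal change would flip the predicted machine-health label.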
Original language: English
Article number: 13
Pages (from-to): 28-29
Number of pages: 2
Journal: ERCIM News - Special Theme: Explainable AI
Issue: 134
Publication status: Published - 23 June 2023

Research Field

  • Former Research Field - Data Science

Keywords

  • Explainable AI
  • Climate Change
  • Tree Growth
