Decoding the Unknown: Unveiling Industrial Time Series Classification with Counterfactuals

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning models for time series prediction have become popular with the rise of IoT and the availability of sensor data. However, their lack of explainability hampers their use in critical industrial applications. While existing model-agnostic approaches such as LIME and SHAP have been applied to time series classification, they have limitations in this setting; for example, the random sampling process used by LIME leads to unstable explanations. To address this issue, we propose a counterfactual explanation approach that provides interpretable insights into time series predictions. We choose an industrial use case, determining machine health, and employ k-means clustering and Dynamic Time Warping (DTW) to handle the temporal dimension. DTW compares and aligns two time series by finding the optimal alignment path that minimizes disparities in their temporal patterns. We explain the model’s decisions using local surrogate decision trees, analysing feature importance and decision cuts.
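
The following is a minimal, illustrative sketch in Python of the two building blocks named in the abstract, not the authors' implementation: a dynamic-programming DTW distance and a local surrogate decision tree fitted to a black-box classifier's behaviour around one instance. The black-box model, the synthetic sine traces, and helper names such as black_box_predict are hypothetical placeholders.

import numpy as np
from sklearn.tree import DecisionTreeClassifier


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-programming DTW: cost of the optimal alignment path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])


def fit_local_surrogate(black_box_predict, x, n_samples=500, scale=0.05, seed=0):
    """Fit a shallow decision tree to the black box's behaviour near `x`.

    Perturbed copies of the instance are labelled by the black box; the
    tree's splits ("decision cuts") and feature importances then act as a
    local, human-readable explanation.
    """
    rng = np.random.default_rng(seed)
    neighbourhood = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    labels = black_box_predict(neighbourhood)
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=seed)
    surrogate.fit(neighbourhood, labels)
    return surrogate


if __name__ == "__main__":
    # Two toy sensor traces with a similar shape, shifted in time.
    t = np.linspace(0, 2 * np.pi, 100)
    healthy = np.sin(t)
    shifted = np.sin(t - 0.5)
    print("DTW distance:", dtw_distance(healthy, shifted))

    # Hypothetical black box: flags a window as faulty if its energy is high.
    def black_box_predict(X):
        return (np.square(X).mean(axis=1) > 0.5).astype(int)

    surrogate = fit_local_surrogate(black_box_predict, healthy)
    print("Feature importances:", surrogate.feature_importances_[:5])

Read counterfactually, the surrogate's split thresholds (the "decision cuts") indicate roughly how far a feature would have to move for the local prediction to flip.
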
Original language: English
Article number: 13
Pages (from-to): 28-29
Number of pages: 2
Journal: ERCIM News - Special Theme: Explainable AI
Issue number: 134
Publication status: Published - 23 Jun 2023

Research Field

  • Former Research Field - Data Science
