Abstract
The increasing complexity and frequency of cyber attacks require Network Intrusion Detection Systems (NIDS) that can adapt to evolving threats. Artificial intelligence (AI), particularly machine learning (ML), has become increasingly popular for detecting sophisticated attacks. However, the limited interpretability of such models remains a significant barrier to widespread adoption in practice, especially in security-sensitive areas. In response, various explainable AI (XAI) methods have been proposed to provide insights into the decision-making process. This paper investigates whether these XAI methods, including SHAP, LIME, Tree Interpreter, Saliency, Integrated Gradients, and DeepLIFT, produce similar explanations when applied to ML-NIDS. By analyzing consensus among these methods across different datasets and ML models, we explore whether an agreement exists that could simplify the practical adoption of XAI in cybersecurity, as similar explanations would eliminate the need for rigorous selection processes. Our findings reveal varying degrees of consensus among the methods: while some align closely, others diverge significantly. This highlights the need for careful selection and combination of XAI tools to enhance trustworthiness in real-world applications.
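The consensus analysis described in the abstract can be illustrated with a minimal sketch (not the paper's actual pipeline): attributions from two XAI methods, here SHAP and LIME, are computed for the same classified sample and compared via Spearman rank correlation and top-k feature overlap. The synthetic dataset, the random-forest model, and the agreement metrics below are illustrative assumptions rather than the datasets, models, or metrics evaluated in the paper.

```python
# Minimal sketch: quantify agreement between SHAP and LIME attributions
# for one prediction of a tabular classifier (stand-in for an ML-NIDS model).
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for a tabular NIDS dataset (benign vs. attack flows).
X, y = make_classification(n_samples=2000, n_features=15, n_informative=8, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
x_row = X[0:1]  # explain a single sample

# --- SHAP attributions (TreeExplainer) ---
sv = shap.TreeExplainer(model).shap_values(x_row)
if isinstance(sv, list):   # older shap versions return one array per class
    sv = sv[1]
sv = np.asarray(sv)
if sv.ndim == 3:           # newer shap versions: (n_samples, n_features, n_classes)
    sv = sv[..., 1]
shap_attr = sv.reshape(-1)

# --- LIME attributions ---
lime_exp = LimeTabularExplainer(
    X, feature_names=feature_names, mode="classification"
).explain_instance(x_row[0], model.predict_proba, num_features=X.shape[1], labels=(1,))
lime_attr = np.zeros(X.shape[1])
for idx, weight in lime_exp.as_map()[1]:
    lime_attr[idx] = weight

# --- Agreement metrics on absolute attribution magnitudes ---
rho, _ = spearmanr(np.abs(shap_attr), np.abs(lime_attr))
k = 5
top_shap = set(np.argsort(np.abs(shap_attr))[-k:])
top_lime = set(np.argsort(np.abs(lime_attr))[-k:])
jaccard = len(top_shap & top_lime) / len(top_shap | top_lime)
print(f"Spearman rank correlation: {rho:.3f}, top-{k} Jaccard overlap: {jaccard:.3f}")
```

Rank correlation captures how similarly the two methods order feature importance overall, while top-k overlap focuses on whether they agree on the handful of features an analyst would actually inspect; averaging such scores over many samples, models, and datasets is one plausible way to operationalize the kind of consensus the paper studies.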
Original language | English |
---|---|
Title | 2024 20th International Conference on Network and Service Management (CNSM) |
Pages | 1-7 |
Number of pages | 7 |
ISBN (electronic) | 978-3-903176-66-9 |
DOIs | |
Publication status | Published - 31 Dec. 2024 |
Event | Workshop on Network Security Operations - Prague, Czech Republic, Duration: 28 Oct. 2024 → 31 Oct. 2024, https://cnsm-conf.org/2024/NeSecOr.html |
Workshop
Workshop | Workshop on Network Security Operations |
---|---|
Short title | NeSecOr |
Country/Territory | Czech Republic |
City | Prague |
Period | 28/10/24 → 31/10/24 |
Internet address | https://cnsm-conf.org/2024/NeSecOr.html |
Research Field
- Multimodal Analytics