A Critical Review of Common Log Data Sets Used for Evaluation of Sequence-Based Anomaly Detection Techniques

Research output: Contribution to journal › Article › peer-review

Abstract

Log data store event execution patterns that correspond to the underlying workflows of systems or applications. While most logs are informative, log data also include artifacts that indicate failures or incidents. Accordingly, log data are often used to evaluate anomaly detection techniques that aim to automatically disclose unexpected or otherwise relevant system behavior patterns. Recently, detection approaches leveraging deep learning have increasingly focused on anomalies that manifest as changes of sequential patterns within otherwise normal event traces. Several publicly available data sets, such as HDFS, BGL, Thunderbird, OpenStack, and Hadoop, have since become standards for evaluating these anomaly detection techniques; however, the appropriateness of these data sets has not been closely investigated in the past. In this paper, we therefore analyze six publicly available log data sets with a focus on the manifestations of anomalies and on simple techniques for their detection. Our findings suggest that most anomalies are not directly related to sequential manifestations and that advanced detection techniques are not required to achieve high detection rates on these data sets.
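To illustrate the kind of "simple technique" the abstract alludes to, the following is a minimal, hypothetical sketch (not taken from the paper) of a non-sequential baseline: a session is flagged as anomalous if it contains an event type never observed in normal training sessions. All function and event names are invented for illustration.

```python
def train_vocab(normal_sessions):
    """Collect the set of event types observed in normal sessions."""
    vocab = set()
    for session in normal_sessions:
        vocab.update(session)
    return vocab

def is_anomalous(session, vocab):
    """Flag a session if any of its events lies outside the normal vocabulary.

    Note that no sequential (ordering) information is used at all.
    """
    return any(event not in vocab for event in session)

# Hypothetical event sequences for demonstration purposes.
normal = [["open", "read", "close"], ["open", "write", "close"]]
vocab = train_vocab(normal)

print(is_anomalous(["open", "read", "close"], vocab))  # known events only
print(is_anomalous(["open", "fail", "close"], vocab))  # contains an unseen event
```

A baseline of this sort detects anomalies that manifest as new or unusual event types rather than as changed event orderings, which is the distinction the paper's analysis centers on.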
Original language: English
Article number: 61
Pages (from-to): 1354-1375
Journal: Proceedings of the ACM on Software Engineering
Volume: 1
Issue number: FSE
DOIs
Publication status: Published - 12 Jul 2024

Research Field

  • Cyber Security
