Abstract
We propose a multi-label multi-task framework based on a convolutional recurrent neural network to unify detection of isolated and overlapping audio events. The framework leverages the power of convolutional recurrent neural network architectures; convolutional layers learn effective features over which higher recurrent layers perform sequential modelling. Furthermore, the output layer is designed to handle arbitrary degrees of event overlap. At each time step in the recurrent output sequence, an output triple is dedicated to each event category of interest to jointly model event occurrence and temporal boundaries. That is, the network jointly determines whether an event of this category occurs, and when it occurs, by estimating onset and offset positions at each recurrent time step. We then introduce three sequential losses for network training: multi-label classification loss, distance estimation loss, and confidence loss. We demonstrate good generalization on two datasets: ITC-Irst for isolated audio event detection, and TUT-SED-Synthetic-2016 for overlapping audio event detection.
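The abstract describes a per-category output triple (occurrence, onset, offset) trained with three sequential losses. A minimal sketch of what such a head and its joint objective might look like at a single recurrent time step is given below; the function name, the equal loss weighting, and the simple confidence term are illustrative assumptions, not the paper's published formulation.

```python
import numpy as np

def triple_losses(pred, target, eps=1e-7):
    """Hypothetical per-time-step losses for K event categories.

    pred/target: arrays of shape (K, 3), one triple per category:
    column 0 = occurrence probability, columns 1-2 = estimated
    onset/offset positions (normalized). Layout is an assumption.
    """
    p, d = pred[:, 0], pred[:, 1:]
    y, t = target[:, 0], target[:, 1:]
    # Multi-label classification loss: binary cross-entropy per category.
    cls = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # Distance estimation loss: squared error on onset/offset positions,
    # counted only for categories active at this time step.
    mask = y[:, None]
    dist = np.sum(mask * (d - t) ** 2) / max(np.sum(mask), 1.0)
    # Confidence loss: a simple proxy penalizing occurrence scores that
    # disagree with the ground-truth activity (assumed form).
    conf = np.mean((p - y) ** 2)
    return cls, dist, conf

pred = np.array([[0.9, 0.20, 0.70],   # category 0: predicted active
                 [0.1, 0.00, 0.00]])  # category 1: predicted inactive
target = np.array([[1.0, 0.25, 0.75],
                   [0.0, 0.00, 0.00]])
cls, dist, conf = triple_losses(pred, target)
total = cls + dist + conf  # equal weighting assumed for the sketch
```

In a real network these triples would be emitted at every recurrent time step and the three losses summed over the sequence; the sketch only illustrates the shape of the joint objective.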
Original language | English |
---|---|
Title | IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019 |
Pages | 51-55 |
ISBN (electronic) | 978-1-4799-8131-1 |
DOIs | |
Publication status | Published - May 2019 |
Event | 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) - Brighton, United Kingdom Duration: 12 May 2019 → 17 May 2019 |
Conference
Conference | 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
---|---|
Country/Territory | United Kingdom |
City | Brighton |
Period | 12/05/19 → 17/05/19 |
Research Field
- Former Research Field - Data Science