Deep Learning Based Multimodal with Two-phase Training Strategy for Daily Life Video Classification

Lam Pham (presenter), Trang Le, Cam Le, Dat Ngo, Axel Weißenfeld, Alexander Schindler

Publication: Contribution to book or conference proceedings › Conference paper with poster presentation › Peer-reviewed


In this paper, we present a deep-learning-based multimodal system for classifying daily life videos. To train the system, we propose a two-phase training strategy. In the first training phase (Phase I), we extract the audio and visual (image) data from the original video. We then train independent deep learning models on the audio data and the visual data. After these training processes, we obtain audio embeddings and visual embeddings by extracting feature maps from the pre-trained deep learning models. In the second training phase (Phase II), we train a fusion layer to combine the audio and visual embeddings and a dense layer to classify the combined embedding into the target daily scenes. Our extensive experiments, conducted on the benchmark DCASE (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) 2021 Task 1B Development dataset, achieved best classification accuracies of 80.5%, 91.8%, and 95.3% with audio data only, visual data only, and both audio and visual data, respectively. The highest classification accuracy of 95.3% represents an improvement of 17.9% over the DCASE baseline and is highly competitive with state-of-the-art systems.
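The Phase II step described above (a fusion layer over the pre-trained embeddings followed by a dense classifier) can be sketched as follows. This is a minimal illustrative sketch: the embedding sizes, the concatenation-based fusion, and the class count are assumptions, since the abstract does not fix these details.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Embeddings as extracted from the pre-trained Phase I models
# (hypothetical batch size and embedding dimensions).
audio_emb = rng.standard_normal((4, 128))   # batch of 4 audio embeddings
visual_emb = rng.standard_normal((4, 256))  # batch of 4 visual embeddings

# Fusion layer: here modeled as simple concatenation of the two embeddings.
fused = np.concatenate([audio_emb, visual_emb], axis=1)  # shape (4, 384)

# Dense layer classifying the fused embedding into target daily scenes
# (10 classes assumed, matching the DCASE 2021 Task 1B scene labels).
n_classes = 10
W = rng.standard_normal((fused.shape[1], n_classes)) * 0.01
b = np.zeros(n_classes)
probs = softmax(fused @ W + b)  # shape (4, 10), rows sum to 1

print(probs.shape)
```

In the paper's actual system these layers are trained jointly in Phase II; the sketch only shows the forward pass through the fusion-and-classification head.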
Title: 20th International Conference on Content-based Multimedia Indexing
Publication status: Published - Dec. 2023
Event: CBMI 2023: 20th International Conference on Content-based Multimedia Indexing - Orleans, France
Duration: 20 Sept. 2023 – 22 Sept. 2023


Conference: CBMI 2023

Research Field

  • Former Research Field - Data Science


