Deep Learning Based Multimodal with Two-phase Training Strategy for Daily Life Video Classification

Lam Pham (Speaker), Trang Le, Cam Le, Dat Ngo, Axel Weißenfeld, Alexander Schindler

Research output: Chapter in Book or Conference Proceedings › Conference Proceedings with Poster Presentation › peer-review


In this paper, we present a deep learning-based multimodal system for classifying daily life videos. To train the system, we propose a two-phase training strategy. In the first training phase (Phase I), we extract the audio and visual (image) data from the original video and train independent deep learning-based models on each modality. After training, we obtain audio embeddings and visual embeddings by extracting feature maps from the pre-trained deep learning models. In the second training phase (Phase II), we train a fusion layer that combines the audio/visual embeddings and a dense layer that classifies the combined embedding into the target daily scenes. Our extensive experiments, conducted on the benchmark DCASE (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) 2021 Task 1B Development dataset, achieved best classification accuracies of 80.5%, 91.8%, and 95.3% with only audio data, only visual data, and both audio and visual data, respectively. The highest classification accuracy of 95.3% represents an improvement of 17.9% over the DCASE baseline and is very competitive with state-of-the-art systems.
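The abstract's Phase II can be illustrated with a minimal sketch: concatenate the per-modality embeddings produced in Phase I, pass them through a trainable fusion layer, then classify with a dense softmax layer. All dimensions, layer shapes, and the concatenation-based fusion here are assumptions for illustration; the paper does not specify them in this record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not given in the abstract).
AUDIO_DIM, VISUAL_DIM, FUSED_DIM, NUM_CLASSES = 128, 256, 64, 10

# Phase I would yield these embeddings from the pre-trained audio/visual
# models; random vectors stand in for a single video clip here.
audio_emb = rng.standard_normal(AUDIO_DIM)
visual_emb = rng.standard_normal(VISUAL_DIM)

# Phase II, step 1: fusion layer mapping the concatenated embeddings
# to a joint representation (weights would be learned in training).
W_fuse = rng.standard_normal((AUDIO_DIM + VISUAL_DIM, FUSED_DIM)) * 0.01
b_fuse = np.zeros(FUSED_DIM)
fused = np.tanh(np.concatenate([audio_emb, visual_emb]) @ W_fuse + b_fuse)

# Phase II, step 2: dense layer with softmax over the target daily scenes.
W_out = rng.standard_normal((FUSED_DIM, NUM_CLASSES)) * 0.01
b_out = np.zeros(NUM_CLASSES)
logits = fused @ W_out + b_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(probs.shape)  # (10,) — one probability per daily-scene class
```

In a real system both layers would be trained jointly on labeled clips (e.g. with cross-entropy loss), while the Phase I backbones stay frozen so only the fusion and dense weights are updated.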
Original language: English
Title of host publication: 20th International Conference on Content-based Multimedia Indexing
Number of pages: 5
Publication status: Published - Dec 2023
Event: CBMI 2023: 20th International Conference on Content-based Multimedia Indexing - Orleans, France
Duration: 20 Sept 2023 - 22 Sept 2023


Conference: CBMI 2023

Research Field

  • Former Research Field - Data Science


