Improving GANs for Speech Enhancement

Huy Phan, Ian V. McLoughlin, Lam Pham, Oliver Y. Chen, Philipp Koch, Maarten De Vos, Alfred Mertins

Publication: Contribution to journal › Article › peer review

Abstract

Generative adversarial networks (GANs) have recently been shown to be effective for speech enhancement. However, most, if not all, existing speech enhancement GANs (SEGANs) use a single generator to perform a one-stage enhancement mapping. In this work, we propose to chain multiple generators to perform a multi-stage enhancement mapping that gradually refines the noisy input signals in a stage-wise fashion. Furthermore, we study two scenarios: (1) the generators share their parameters, and (2) the generators' parameters are independent. The former constrains the generators to learn a common mapping that is applied iteratively at all enhancement stages, resulting in a small model footprint. In contrast, the latter allows the generators to flexibly learn different enhancement mappings at different stages of the network, at the cost of an increased model size. We demonstrate that the proposed multi-stage enhancement approach outperforms the one-stage SEGAN baseline, with the independent generators leading to more favorable results than the tied generators. The source code is available at http://github.com/pquochuy/idsegan.
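The two scenarios in the abstract can be illustrated with a toy sketch. This is not the authors' SEGAN code: each "generator" below is a random residual linear map standing in for the encoder-decoder generator, and all names and the stage count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_generator(dim):
    """Toy enhancement mapping x -> x + W @ x (a stand-in for one generator)."""
    W = 0.1 * rng.standard_normal((dim, dim))
    return lambda x: x + W @ x

dim = 4
noisy = rng.standard_normal(dim)  # stand-in for a noisy speech signal
n_stages = 3                      # number of chained enhancement stages

# Scenario 1: tied generators -- one shared mapping applied at every stage.
shared_g = make_generator(dim)
tied_out = noisy
for _ in range(n_stages):
    tied_out = shared_g(tied_out)

# Scenario 2: independent generators -- a distinct mapping per stage,
# larger model, but each stage can learn a different refinement.
stages = [make_generator(dim) for _ in range(n_stages)]
indep_out = noisy
for g in stages:
    indep_out = g(indep_out)
```

The tied variant stores one set of parameters regardless of the number of stages, while the independent variant stores `n_stages` sets, which mirrors the footprint-versus-flexibility trade-off described in the abstract.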
Original language: English
Pages (from - to): 1700-1704
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 27
DOIs
Publication status: Published - 2020

Research Field

  • Data Science
