Article

Title

Speech enhancement using U-nets with wide-context units

Authors

[1] Institute of Automation and Robotics, Faculty of Control, Robotics and Electrical Engineering, Poznan University of Technology | [P] employee

Scientific discipline (Law 2.0)

[2.2] Automation, electronics, electrical engineering and space technology

Year of publication

2022

Published in

Multimedia Tools and Applications

Journal year: 2022 | Journal volume: vol. 81 | Journal number: iss. 13

Article type

scientific article

Publication language

English

Keywords
EN
  • speech enhancement
  • U-nets
  • DNN
Abstract

EN In this article a new neural network for speech enhancement is proposed, in which single-channel noisy speech is processed to improve its intelligibility and quality. It is based on the U-net architecture, i.e. it is composed of two main blocks, an encoder and a decoder, with some of the corresponding encoder and decoder layers connected by skip connections. In most encoder-decoder neural networks for speech enhancement known from the literature, the time-frequency resolution of the hidden feature maps is reduced. The main strategy of the presented approach is to maintain the time-frequency resolution of the feature maps at all levels of the network while at the same time having a large receptive field. In order to obtain features that depend on a wide context, neural network units based on recurrent cells or dilated convolutions are proposed. The proposed neural network was evaluated using WSJ0 and TIMIT speech data mixed with noises from NOISEX, DCASE and field recordings from the Freesound online database. The results showed an improvement over baseline networks based on gated dilated convolutions or long short-term memory (LSTM) units in terms of the scale-invariant signal-to-distortion ratio (SI-SDR), short-time objective intelligibility (STOI) and perceptual evaluation of speech quality (PESQ) measures.
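The abstract's central trade-off (keeping feature maps at full time-frequency resolution while still aggregating wide context) can be illustrated with a minimal sketch of how stacked dilated convolutions grow the receptive field without any downsampling. The layer count, kernel size and dilation rates below are hypothetical choices for illustration, not the configuration used in the paper:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in bins) of a stack of stride-1 dilated
    convolutions; with stride 1 the feature-map resolution is
    never reduced, only the context each output bin sees grows."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d  # each layer adds (k - 1) * dilation
    return rf

# Four 3-tap layers with exponentially growing dilation vs. the
# same four layers without dilation (both preserve resolution).
print(receptive_field(3, [1, 2, 4, 8]))   # -> 31
print(receptive_field(3, [1, 1, 1, 1]))   # -> 9
```

With the same depth and the same number of parameters, exponential dilation more than triples the context available to each output bin, which is the effect the wide-context units in the abstract rely on.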

Date of online publication

09.03.2022

Pages (from - to)

18617 - 18639

DOI

10.1007/s11042-022-12632-6

URL

https://link.springer.com/article/10.1007/s11042-022-12632-6

Ministry points / journal

70

Impact Factor

3.6
