Colloquium - Talk Details

You will be informed about upcoming talks by e-mail in good time if you subscribe to the newsletter of the communications engineering colloquium.

All interested parties are cordially invited; registration is not required.

Master's thesis talk: Investigation of Specialized Recurrent Units for Acoustic Echo Cancellation

Alexander Sobolew
Monday, 25 April 2022

15:00
virtual conference room

In today's communication, hands-free devices, e.g. for remote communication, are widely used. Without further measures, these devices suffer from an acoustic echo that arises from the coupling between loudspeaker and microphone. To minimize this disturbance, acoustic echo cancellation is indispensable. Model-based adaptive algorithms exist for this task; however, they require careful tuning of parameters whose optimum differs between devices and acoustic situations.
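To make the classical model-based baseline concrete, here is a minimal sketch of a time-domain NLMS echo canceller in Python. It is not the thesis's algorithm; the filter length and the step size mu are illustrative values, and mu is exactly the kind of hand-tuned parameter whose optimum varies across devices and acoustic situations.

    import numpy as np

    def nlms_echo_canceller(far_end, mic, filter_len=256, mu=0.5, eps=1e-8):
        """Classical NLMS adaptive echo canceller (illustrative sketch).

        far_end : signal played over the loudspeaker
        mic     : microphone signal (echo + near-end speech), same length
        Returns the echo-reduced error signal.
        """
        w = np.zeros(filter_len)      # estimate of the echo path
        e = np.zeros(len(mic))        # output (error) signal
        x_buf = np.zeros(filter_len)  # most recent far-end samples
        for n in range(len(mic)):
            x_buf = np.roll(x_buf, 1)
            x_buf[0] = far_end[n]
            y_hat = w @ x_buf         # estimated echo
            e[n] = mic[n] - y_hat     # residual after cancellation
            # normalized update: fixed step size scaled by input power
            w += mu * e[n] * x_buf / (x_buf @ x_buf + eps)
        return e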

In this thesis, a new data-driven approach to acoustic echo cancellation is developed and investigated. In contrast to the purely model-based approach, the algorithm is intended to learn optimal behavior from data without the need for manual tuning; in new situations, it should estimate the unknown parameters itself. At its core, the novel structure resembles a frequency-domain adaptive filter, extended by the gating mechanism known from recurrent neural networks. The development also includes the determination of optimal training paradigms, and the model structure is chosen with a reasonable training complexity in mind.

A major challenge in this thesis is the investigation of the gating mechanism, which is realized by a learn gate and a reset gate. The former is used to estimate a time-varying step size for the iterative algorithm. Gated Recurrent Units provide an internal memory that accommodates the sequential information in speech, while skip connections improve the gradient flow during training. Using the reset gate independently to reset the impulse-response estimate when the acoustic situation changes is outperformed by weight sharing: through a shared partial network, the learn and reset gates have direct information about each other's behavior. It was also shown that, when using backpropagation through time, the truncation order can be reduced to a certain extent, which lowered the training complexity without decreasing performance.

The developed model outperforms the tuned Fast Block Normalized Least-Mean-Square algorithm in reconvergence speed and steady-state performance in far-end single talk and double talk. Furthermore, it repeatedly outperforms the tuned diagonalized Kalman filter in certain scenarios and offers significantly improved overall performance in single talk.
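The thesis itself does not publish code, but the core idea of gating an adaptive-filter update can be sketched as follows. This is a hypothetical PyTorch illustration, not the actual architecture: the class name GatedAdaptiveFilterStep, the choice of per-bin error and far-end magnitudes as input features, and the hidden size are all assumptions. It shows how a shared partial network can feed both a learn gate (producing a per-bin, time-varying step size) and a reset gate (able to forget the filter estimate on a situation change), so that each gate has direct information about the other's behavior.

    import torch
    import torch.nn as nn

    class GatedAdaptiveFilterStep(nn.Module):
        """Hypothetical sketch of one update step of a gated
        frequency-domain adaptive filter with weight-shared gates."""

        def __init__(self, num_bins, hidden=32):
            super().__init__()
            # shared partial network: both gates derive from the same
            # features, coupling their behavior (weight sharing)
            self.shared = nn.Sequential(nn.Linear(2 * num_bins, hidden),
                                        nn.Tanh())
            self.learn_gate = nn.Sequential(nn.Linear(hidden, num_bins),
                                            nn.Sigmoid())
            self.reset_gate = nn.Sequential(nn.Linear(hidden, num_bins),
                                            nn.Sigmoid())

        def forward(self, w, grad, err_mag, far_mag):
            # assumed features: per-bin magnitudes of the error spectrum
            # and the far-end spectrum at the current frame
            feats = self.shared(torch.cat([err_mag, far_mag], dim=-1))
            mu = self.learn_gate(feats)  # time-varying step size per bin
            r = self.reset_gate(feats)   # near 0 -> reset the estimate
            # gated NLMS-like update of the filter coefficient estimate
            return r * w + mu * grad

In training, such a step would be unrolled over a sequence of frames and optimized with truncated backpropagation through time, where, as the abstract notes, the truncation order can be kept moderate to limit training complexity.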
