Andong Li
Senior Researcher at Tencent AI Lab
Verified email at tencent.com
Title
Cited by
Year
Two heads are better than one: A two-stage complex spectral mapping approach for monaural speech enhancement
A Li, W Liu, C Zheng, C Fan, X Li
IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 1829-1843, 2021
131 · 2021
Glance and gaze: A collaborative learning framework for single-channel speech enhancement
A Li, C Zheng, L Zhang, X Li
Applied Acoustics 187, 108499, 2022
103 · 2022
Speech enhancement using progressive learning-based convolutional recurrent neural network
A Li, M Yuan, C Zheng, X Li
Applied Acoustics 166, 107347, 2020
79 · 2020
On the importance of power compression and phase estimation in monaural speech dereverberation
A Li, C Zheng, R Peng, X Li
JASA Express Letters 1 (1), 2021
76 · 2021
Dual-branch attention-in-attention transformer for single-channel speech enhancement
G Yu, A Li, C Zheng, Y Guo, Y Wang, H Wang
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
68 · 2022
A Simultaneous Denoising and Dereverberation Framework with Target Decoupling
A Li, W Liu, X Luo, G Yu, C Zheng, X Li
Proc. Interspeech 2021, 2801-2805, 2021
68 · 2021
ICASSP 2021 Deep Noise Suppression Challenge: Decoupling Magnitude and Phase Optimization with a Two-Stage Deep Network
A Li, W Liu, X Luo, C Zheng, X Li
IEEE International Conference on Acoustics, Speech and Signal Processing …, 2021
61 · 2021
Embedding and beamforming: All-neural causal beamformer for multichannel speech enhancement
A Li, W Liu, C Zheng, X Li
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
45 · 2022
A recursive network with dynamic attention for monaural speech enhancement
A Li, C Zheng, C Fan, R Peng, X Li
Proc. Interspeech, 2020-1513, 2020
35 · 2020
DBT-Net: Dual-branch federative magnitude and phase estimation with attention-in-attention transformer for monaural speech enhancement
G Yu, A Li, H Wang, Y Wang, Y Ke, C Zheng
IEEE/ACM Transactions on Audio, Speech, and Language Processing 30, 2629-2644, 2022
30 · 2022
Taylor, can you hear me now? A Taylor-unfolding framework for monaural speech enhancement
A Li, S You, G Yu, C Zheng, X Li
arXiv preprint arXiv:2205.00206, 2022
14 · 2022
Deep learning-based stereophonic acoustic echo suppression without decorrelation
L Cheng, R Peng, A Li, C Zheng, X Li
The Journal of the Acoustical Society of America 150 (2), 816-829, 2021
13 · 2021
Know Your Enemy, Know Yourself: A Unified Two-Stage Framework for Speech Enhancement
W Liu, A Li, Y Ke, C Zheng, X Li
Proc. Interspeech 2021, 186-190, 2021
13 · 2021
A time-domain monaural speech enhancement with feedback learning
A Li, C Zheng, L Cheng, R Peng, X Li
2020 Asia-Pacific Signal and Information Processing Association Annual …, 2020
13 · 2020
Sixty years of frequency-domain monaural speech enhancement: From traditional to deep learning methods
C Zheng, H Zhang, W Liu, X Luo, A Li, X Li, BCJ Moore
Trends in Hearing 27, 23312165231209913, 2023
12 · 2023
Filtering and refining: A collaborative-style framework for single-channel speech enhancement
A Li, C Zheng, G Yu, J Cai, X Li
IEEE/ACM Transactions on Audio, Speech, and Language Processing 30, 2156-2172, 2022
12 · 2022
Low-complexity artificial noise suppression methods for deep learning-based speech enhancement algorithms
Y Ke, A Li, C Zheng, R Peng, X Li
EURASIP Journal on Audio, Speech, and Music Processing 2021, 1-15, 2021
12 · 2021
TaylorBeamformer: Learning All-Neural Beamformer for Multi-Channel Speech Enhancement from Taylor's Approximation Theory
A Li, G Yu, C Zheng, X Li
arXiv preprint arXiv:2203.07195, 2022
11 · 2022
Long-term missing wind data recovery using free access databases and deep learning for bridge health monitoring
Z Wang, A Li, W Zhang, Y Zhang
Journal of Wind Engineering and Industrial Aerodynamics 230, 105201, 2022
9 · 2022
A neural beamspace-domain filter for real-time multi-channel speech enhancement
W Liu, A Li, X Wang, M Yuan, Y Chen, C Zheng, X Li
Symmetry 14 (6), 1081, 2022
9 · 2022
Articles 1–20