Meishu Song
University of Tokyo, University of Augsburg.
An early study on intelligent analysis of speech under COVID-19: Severity, sleep quality, fatigue, and anxiety
J Han, K Qian, M Song, Z Yang, Z Ren, S Liu, J Liu, H Zheng, W Ji, ...
arXiv preprint arXiv:2005.00096, 2020
Audiovisual analysis for recognising frustration during game-play: introducing the multimodal game frustration database
M Song, Z Yang, A Baird, E Parada-Cabaleiro, Z Zhang, Z Zhao, ...
2019 8th International Conference on Affective Computing and Intelligent …, 2019
Computer Audition for Fighting the SARS-CoV-2 Corona Crisis—Introducing the Multitask Speech Corpus for COVID-19
K Qian, M Schmitt, H Zheng, T Koike, J Han, J Liu, W Ji, J Duan, M Song, ...
IEEE Internet of Things Journal 8 (21), 16035-16046, 2021
Adventitious respiratory classification using attentive residual neural networks
Z Yang, S Liu, M Song, E Parada-Cabaleiro, BW Schuller
Frustration recognition from speech during game interaction using wide residual networks
M Song, A Mallol-Ragolta, E Parada-Cabaleiro, Z Yang, S Liu, Z Ren, ...
Virtual Reality & Intelligent Hardware 3 (1), 76-86, 2021
Coughing-based recognition of Covid-19 with spatial attentive ConvLSTM recurrent neural networks
T Yan, H Meng, E Parada-Cabaleiro, S Liu, M Song, BW Schuller
Predicting group work performance from physical handwriting features in a smart English classroom
M Song, K Qian, B Chen, K Okabayashi, E Parada-Cabaleiro, Z Yang, ...
2021 5th International Conference on Digital Signal Processing, 140-145, 2021
Dynamic Restrained Uncertainty Weighting Loss for Multitask Learning of Vocal Expression
M Song, Z Yang, A Triantafyllopoulos, X Jing, V Karas, X Jiangjian, ...
arXiv preprint arXiv:2206.11049, 2022
Redundancy Reduction Twins Network: A Training framework for Multi-output Emotion Regression
X Jing, M Song, A Triantafyllopoulos, Z Yang, BW Schuller
arXiv preprint arXiv:2206.09142, 2022
Exploring speaker enrolment for few-shot personalisation in emotional vocalisation prediction
A Triantafyllopoulos, M Song, Z Yang, X Jing, BW Schuller
arXiv preprint arXiv:2206.06680, 2022
Interaction with the soundscape: exploring emotional audio generation for improved individual wellbeing
A Baird, M Song, B Schuller
International Conference on Human-Computer Interaction, 229-242, 2020
Self-Supervised Attention Networks and Uncertainty Loss Weighting for Multi-Task Emotion Recognition on Vocal Bursts
V Karas, A Triantafyllopoulos, M Song, BW Schuller
arXiv preprint arXiv:2209.07384, 2022
COVYT: Introducing the Coronavirus YouTube and TikTok speech dataset featuring the same speakers with and without infection
A Triantafyllopoulos, A Semertzidou, M Song, FB Pokorny, BW Schuller
arXiv preprint arXiv:2206.11045, 2022
A Temporal-oriented Broadcast ResNet for COVID-19 Detection
X Jing, S Liu, E Parada-Cabaleiro, A Triantafyllopoulos, M Song, Z Yang, ...
arXiv preprint arXiv:2203.17012, 2022
An Overview & Analysis of Sequence-to-Sequence Emotional Voice Conversion
Z Yang, X Jing, A Triantafyllopoulos, M Song, I Aslan, BW Schuller
arXiv preprint arXiv:2203.15873, 2022
Supervised contrastive learning for game-play frustration detection from speech
M Song, E Parada-Cabaleiro, S Liu, M Milling, A Baird, Z Yang, ...
International Conference on Human-Computer Interaction, 617-629, 2021
Parallelising 2D-CNNs and Transformers: A Cognitive based approach for Automatic Recognition of Learners’ English Proficiency
M Song, E Parada-Cabaleiro, Z Yang, X Jing