Naoyuki Kanda
Verified email at microsoft.com
Title
Cited by
Year
Elastic spectral distortion for low resource speech recognition with deep neural networks
N Kanda, R Takeda, Y Obuchi
Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on …, 2013
Cited by 78 · 2013
A two-layer model for behavior and dialogue planning in conversational service robots
M Nakano, Y Hasegawa, K Nakadai, T Nakamura, J Takeuchi, T Torii, ...
2005 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2005
Cited by 65 · 2005
Multi-domain spoken dialogue system with extensibility and robustness against speech recognition errors
K Komatani, N Kanda, M Nakano, K Nakadai, H Tsujino, T Ogata, ...
Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, 9-17, 2006
Cited by 50 · 2006
A multi-expert model for dialogue and behavior control of conversational robots and agents
M Nakano, Y Hasegawa, K Funakoshi, J Takeuchi, T Torii, K Nakadai, ...
Knowledge-Based Systems 24 (2), 248-256, 2011
Cited by 39 · 2011
Open-vocabulary keyword detection from super-large scale speech database
N Kanda, H Sagawa, T Sumiyoshi, Y Obuchi
2008 IEEE 10th Workshop on Multimedia Signal Processing, 939-944, 2008
Cited by 39 · 2008
Maximum a posteriori Based Decoding for CTC Acoustic Models
N Kanda, X Lu, H Kawai
Interspeech 2016, 1868-1872, 2016
Cited by 36 · 2016
End-to-end neural speaker diarization with self-attention
Y Fujita, N Kanda, S Horiguchi, Y Xue, K Nagamatsu, S Watanabe
2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU …, 2019
Cited by 35 · 2019
End-to-end neural speaker diarization with permutation-free objectives
Y Fujita, N Kanda, S Horiguchi, K Nagamatsu, S Watanabe
arXiv preprint arXiv:1909.05952, 2019
Cited by 35 · 2019
CHiME-6 Challenge: Tackling multispeaker speech recognition for unsegmented recordings
S Watanabe, M Mandel, J Barker, E Vincent, A Arora, X Chang, ...
arXiv preprint arXiv:2004.09249, 2020
Cited by 33 · 2020
The Hitachi/JHU CHiME-5 system: Advances in speech recognition for everyday home environments using multiple microphone arrays
N Kanda, R Ikeshita, S Horiguchi, Y Fujita, K Nagamatsu, X Wang, ...
Proc. CHiME-5, 6-10, 2018
Cited by 30 · 2018
Investigation of lattice-free maximum mutual information-based acoustic models with sequence-level Kullback-Leibler divergence
N Kanda, Y Fujita, K Nagamatsu
2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 69-76, 2017
Cited by 24 · 2017
Guided source separation meets a strong ASR backend: Hitachi/Paderborn University joint investigation for dinner party ASR
N Kanda, C Boeddeker, J Heitkaemper, Y Fujita, S Horiguchi, ...
arXiv preprint arXiv:1905.12230, 2019
Cited by 21 · 2019
Lattice-free State-level Minimum Bayes Risk Training of Acoustic Models
N Kanda, Y Fujita, K Nagamatsu
Interspeech 2018, 2923-2927, 2018
Cited by 20 · 2018
Contextual constraints based on dialogue models in database search task for spoken dialogue systems
K Komatani, N Kanda, T Ogata, HG Okuno
Proc. European Conf. Speech Commun. & Tech.(EUROSPEECH), 877-880, 2005
Cited by 20 · 2005
Acoustic modeling for distant multi-talker speech recognition with single- and multi-channel branches
N Kanda, Y Fujita, S Horiguchi, R Ikeshita, K Nagamatsu, S Watanabe
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 15 · 2019
Open-vocabulary keyword detection in large-scale speech based on multi-stage rescoring (in Japanese)
N Kanda, T Sumiyoshi, H Kokubo, H Sagawa, Y Obuchi
IEICE Transactions on Information and Systems (Japanese Edition) D 95 (4), 969-981, 2012
Cited by 14 · 2012
Face-voice matching using cross-modal embeddings
S Horiguchi, N Kanda, K Nagamatsu
Proceedings of the 26th ACM international conference on Multimedia, 1011-1019, 2018
Cited by 13 · 2018
Maximum-a-Posteriori-Based Decoding for End-to-End Acoustic Models
N Kanda, X Lu, H Kawai
IEEE/ACM Transactions on Audio, Speech, and Language Processing 25 (5), 1023 …, 2017
Cited by 11 · 2017
Minimum Bayes risk training of CTC acoustic models in maximum a posteriori based decoding framework
N Kanda, X Lu, H Kawai
2017 IEEE International Conference on Acoustics, Speech and Signal …, 2017
Cited by 10 · 2017
Domain selection using dialogue history in multi-domain spoken dialogue systems (in Japanese)
N Kanda, K Komatani, M Nakano, K Nakadai, H Tsujino, T Ogata, HG Okuno
IPSJ Journal 48 (5), 1980-1989, 2007
Cited by 10 · 2007
Articles 1–20