Atsunori Ogawa
NTT Communication Science Laboratories
Verified email at ieee.org - Homepage
Title · Cited by · Year
The NTT CHiME-3 system: Advances in speech enhancement and recognition for mobile multi-microphone devices
T Yoshioka, N Ito, M Delcroix, A Ogawa, K Kinoshita, M Fujimoto, C Yu, ...
2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU …, 2015
Cited by 247 · 2015
Single channel target speaker extraction and recognition with speaker beam
M Delcroix, K Zmolikova, K Kinoshita, A Ogawa, T Nakatani
2018 IEEE international conference on acoustics, speech and signal …, 2018
Cited by 165 · 2018
Linear prediction-based dereverberation with advanced speech enhancement and recognition technologies for the REVERB challenge
M Delcroix, T Yoshioka, A Ogawa, Y Kubo, M Fujimoto, N Ito, K Kinoshita, ...
Reverb workshop, 2014
Cited by 128 · 2014
Low-latency real-time meeting recognition and understanding using distant microphones and omni-directional camera
T Hori, S Araki, T Yoshioka, M Fujimoto, S Watanabe, T Oba, A Ogawa, ...
IEEE transactions on audio, speech, and language processing 20 (2), 499-513, 2011
Cited by 105 · 2011
Speaker-aware neural network based beamformer for speaker extraction in speech mixtures.
K Zmolikova, M Delcroix, K Kinoshita, T Higuchi, A Ogawa, T Nakatani
Interspeech, 2655-2659, 2017
Cited by 101 · 2017
Error detection and accuracy estimation in automatic speech recognition using deep bidirectional recurrent neural networks
A Ogawa, T Hori
Speech Communication 89, 70-83, 2017
Cited by 80 · 2017
Strategies for distant speech recognition in reverberant environments
M Delcroix, T Yoshioka, A Ogawa, Y Kubo, M Fujimoto, N Ito, K Kinoshita, ...
EURASIP Journal on Advances in Signal Processing 2015, 1-15, 2015
Cited by 68 · 2015
Semi-Supervised End-to-End Speech Recognition.
S Karita, S Watanabe, T Iwata, A Ogawa, M Delcroix
Interspeech, 2-6, 2018
Cited by 67 · 2018
Context adaptive deep neural networks for fast acoustic model adaptation in noisy conditions
M Delcroix, K Kinoshita, C Yu, A Ogawa, T Yoshioka, T Nakatani
2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016
Cited by 49 · 2016
Learning speaker representation for neural network based multichannel speaker extraction
K Žmolíková, M Delcroix, K Kinoshita, T Higuchi, A Ogawa, T Nakatani
2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 8-15, 2017
Cited by 44 · 2017
Spatial correlation model based observation vector clustering and MVDR beamforming for meeting recognition
S Araki, M Okada, T Higuchi, A Ogawa, T Nakatani
2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016
Cited by 41 · 2016
Semi-supervised end-to-end speech recognition using text-to-speech and autoencoders
S Karita, S Watanabe, T Iwata, M Delcroix, A Ogawa, T Nakatani
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 40 · 2019
Balancing acoustic and linguistic probabilities
A Ogawa, K Takeda, F Itakura
Proceedings of the 1998 IEEE International Conference on Acoustics, Speech …, 1998
Cited by 40 · 1998
Multimodal SpeakerBeam: Single Channel Target Speech Extraction with Audio-Visual Speaker Clues.
T Ochiai, M Delcroix, K Kinoshita, A Ogawa, T Nakatani
INTERSPEECH, 2718-2722, 2019
Cited by 37 · 2019
Speech recognition in the presence of highly non-stationary noise based on spatial, spectral and temporal speech/noise modeling combined with dynamic variance adaptation
M Delcroix, K Kinoshita, T Nakatani, S Araki, A Ogawa, T Hori, ...
Machine Listening in Multisource Environments, 2011
Cited by 37 · 2011
Auxiliary Feature Based Adaptation of End-to-end ASR Systems.
M Delcroix, S Watanabe, A Ogawa, S Karita, T Nakatani
Interspeech 2018, 2444-2448, 2018
Cited by 36 · 2018
Text-informed speech enhancement with deep neural networks
K Kinoshita, M Delcroix, A Ogawa, T Nakatani
Sixteenth Annual Conference of the International Speech Communication …, 2015
Cited by 36 · 2015
ASR error detection and recognition rate estimation using deep bidirectional recurrent neural networks
A Ogawa, T Hori
2015 IEEE International Conference on Acoustics, Speech and Signal …, 2015
Cited by 32 · 2015
Robust i-vector extraction for neural network adaptation in noisy environment
C Yu, A Ogawa, M Delcroix, T Yoshioka, T Nakatani, JHL Hansen
International Speech Communication Association, 2015
Cited by 30 · 2015
Speech recognition in living rooms: Integrated speech enhancement and recognition system based on spatial, spectral and temporal modeling of sounds
M Delcroix, K Kinoshita, T Nakatani, S Araki, A Ogawa, T Hori, ...
Computer Speech & Language 27 (3), 851-873, 2013
Cited by 27 · 2013
Articles 1–20