Journal Information-Measuring and Control Systems, No. 7, 2010
Article in issue:
Automatic recognition of audio-visual Russian speech using an asynchronous model
Keywords:
speech recognition
audio-visual speech
Hidden Markov Models
asynchrony of modalities
multi-modal interface
Authors:
A. A. Karpov
Abstract:
In real operating conditions, characterized by low quality of the speech signal and the presence of external noise or background conversations, automatic speech recognition systems cannot provide acceptable recognition performance even when various methods of signal filtering and noise suppression are applied. In order to improve the accuracy and robustness of automatic speech recognition, various approaches to the analysis of visual speech based on computer vision technologies (so-called "lip-reading") are being studied, leading to bimodal models for audio-visual speech recognition.
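As an illustration of the computer-vision side of such systems, the following is a minimal Python sketch of extracting a mouth region of interest from one video frame with an OpenCV Haar cascade, in the spirit of the Haar-like features of Lienhart and Maydt cited in the references. The cascade file, the lower-third-of-face heuristic, and all parameter values are illustrative assumptions and are not taken from the paper.

# Minimal sketch: locate a mouth region of interest (ROI) in one video frame
# using an OpenCV Haar face cascade. The lower-third-of-face heuristic and the
# detector parameters are illustrative assumptions, not the paper's method.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mouth_roi(frame_bgr):
    """Return a fixed-size grayscale mouth ROI of the largest detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Take the largest face and crop its lower third as the mouth area.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    roi = gray[y + 2 * h // 3 : y + h, x : x + w]
    # Normalize the size so visual feature vectors have a fixed dimension.
    return cv2.resize(roi, (32, 16))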
In this paper, we present a study of a model for automatic bimodal recognition of audio-visual Russian speech based on the mathematical apparatus of first-order Coupled Hidden Markov Models, which fuses the feature vector streams of the auditory and visual speech modalities at the level of the states of a single joint stochastic model. The model takes into account the possible time mismatch (asynchrony) between corresponding speech units (phonemes and visemes) that is characteristic of conversational speech, and fuses the information from the two speech modalities using weight coefficients that reflect their informativeness. One of the main problems in developing an audio-visual speech recognizer is to implement an effective way of fusing and synchronizing the speech modalities. The need for asynchronous fusion arises from non-stationary time discrepancies between acoustic and visual speech, which are caused by the limitations of speech production dynamics and by co-articulation effects (the influence and overlapping of neighboring speech units in the speech flow) that affect the audio and video components differently. In this study, several bimodal and unimodal speaker-dependent Russian speech recognizers with a small vocabulary were implemented. They were tested on a previously collected corpus of audio-visual Russian speech (continuously pronounced connected digits, with phrase lengths from 3 to 6 words) recorded from 6 native Russian speakers. Recognition experiments were carried out with added acoustic noise at varying Signal-to-Noise Ratios (SNR).
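To make the state-level fusion scheme more concrete, the following is a minimal Python/NumPy sketch of Viterbi decoding over a coupled (product) state space, in which the audio and video chains may drift apart by at most a fixed number of states and the per-frame log-likelihoods of the two streams are combined with informativeness weights. The state counts, the asynchrony bound, the weight values, the factored transition structure, and the function names are illustrative assumptions, not the exact model used in the paper.

import numpy as np

def coupled_viterbi(log_b_audio, log_b_video, log_A_audio, log_A_video,
                    w_audio=0.7, w_video=0.3, max_async=1):
    """
    Viterbi over a product state space (i, j) of audio state i and video state j.
    log_b_audio: (T, Na) per-frame audio log-likelihoods per audio state
    log_b_video: (T, Nv) per-frame video log-likelihoods per video state
    log_A_audio: (Na, Na) audio transition log-probabilities
    log_A_video: (Nv, Nv) video transition log-probabilities
    Product states with |i - j| > max_async are forbidden (asynchrony constraint).
    Returns the best joint state sequence [(i_t, j_t)].
    """
    T, Na = log_b_audio.shape
    Nv = log_b_video.shape[1]
    NEG = -np.inf

    # Weighted combination of the two streams' emissions for every product state.
    def emit(t):
        e = w_audio * log_b_audio[t][:, None] + w_video * log_b_video[t][None, :]
        for i in range(Na):
            for j in range(Nv):
                if abs(i - j) > max_async:
                    e[i, j] = NEG
        return e

    delta = emit(0)  # assume both chains may start in any allowed state
    psi = np.zeros((T, Na, Nv, 2), dtype=int)

    for t in range(1, T):
        new_delta = np.full((Na, Nv), NEG)
        for i in range(Na):
            for j in range(Nv):
                if abs(i - j) > max_async:
                    continue
                # Each chain transitions with its own matrix; the chains are
                # coupled only through the asynchrony constraint and the joint
                # path score in this simplified sketch.
                scores = (delta + log_A_audio[:, i][:, None]
                                + log_A_video[:, j][None, :])
                pi, pj = np.unravel_index(np.argmax(scores), scores.shape)
                new_delta[i, j] = scores[pi, pj]
                psi[t, i, j] = (pi, pj)
        delta = new_delta + emit(t)

    # Backtrack the best joint path of (audio state, video state) pairs.
    i, j = np.unravel_index(np.argmax(delta), delta.shape)
    path = [(i, j)]
    for t in range(T - 1, 0, -1):
        i, j = psi[t, i, j]
        path.append((i, j))
    return path[::-1]

Setting max_async = 0 in this sketch collapses the coupled model to synchronous state-level fusion, which is the intuition behind comparing the CHMM with the multi-stream (MSHMM) recognizer discussed below.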
The paper presents the accuracy of continuous Russian speech recognition for a small recognition vocabulary, as well as a comparison of bimodal recognition with the unimodal models.
The experimental results demonstrate that bimodal speech recognition outperforms unimodal audio-only recognition, especially at low SNR values below 15 dB. In addition, the asynchronous modality fusion recognizer based on Coupled Hidden Markov Models (CHMM) performs slightly better than the synchronous modality fusion recognizer based on Multi-Stream Hidden Markov Models (MSHMM). Moreover, the acoustic speech information becomes less informative in experiments with SNR below 5 dB, and under these conditions the highest accuracy is achieved by the unimodal video-only speech recognizer.
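A practical consequence of these results is that the weight of the audio stream should shrink as the estimated SNR drops. The toy schedule below illustrates this idea in Python; the 15 dB and 5 dB breakpoints follow the thresholds mentioned above, but the linear interpolation and the exact weight values are hypothetical and not taken from the paper.

def audio_stream_weight(snr_db, w_max=0.7, w_min=0.0):
    """Toy schedule: full audio weight above 15 dB SNR, none below 5 dB,
    linear interpolation in between; the video weight is 1 - audio weight."""
    if snr_db >= 15.0:
        return w_max
    if snr_db <= 5.0:
        return w_min
    return w_min + (w_max - w_min) * (snr_db - 5.0) / 10.0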
Pages: 91-96
References
- Karpov A., Tsirulnik L., Krnoul Z., Ronzhin A., Lobanov B., Zelezny M. Audio-Visual Speech Asynchrony Modeling in a Talking Head // Proc. 10th International Conference Interspeech-2009. Brighton, UK. 2009. P. 2911-2914.
- Karpov A., Ronzhin A., Lobanov B., Tsirulnik L., Zelezny M. Development of a bimodal system for audio-visual Russian speech recognition // Information-Measuring and Control Systems. 2008. Vol. 6. No. 10. P. 58-62.
- Nefian A., Liang L., Pi X., Xiaoxiang X., Mao C., Murphy K. A coupled HMM for audio-visual speech recognition // Proc. International Conference ICASSP-2002. Orlando, USA. 2002.
- Chu S., Huang T. Multi-Modal Sensory Fusion with Application to Audio-Visual Speech Recognition // Proc. Multi-Modal Speech Recognition Workshop-2002. Greensboro, USA. 2002.
- Ronzhin A. L., Karpov A. A., Li I. V. Speech and Multimodal Interfaces. Moscow: Nauka (series "Informatics: Unlimited Possibilities and Possible Limitations"). 2006.
- Lienhart R., Maydt J. An Extended Set of Haar-like Features for Rapid Object Detection // Proc. IEEE International Conference on Image Processing ICIP-2002. USA. 2002. P. 900-903.
- Liang L., Liu X., Zhao Y., Pi X., Nefian A. Speaker independent audio-visual continuous speech recognition // Proc. International Conference on Multimedia and Expo ICME-2002. Lausanne, Switzerland. 2002.