Publications


Predicting, diagnosing, and improving automatic language identification performance

Published in:
5th European Conf. on Speech Communication and Technology, EUROSPEECH, 22-25 September 1997.

Summary

Language-identification (LID) techniques that use multiple single-language phoneme recognizers followed by n-gram language models have consistently yielded top performance at NIST evaluations. In our study of such systems, we have recently cut our LID error rate by modeling the output of n-gram language models more carefully. Additionally, we are now able to produce meaningful confidence scores along with our LID hypotheses. Finally, we have developed some diagnostic measures that can predict performance of our LID algorithms.
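The abstract does not specify how the confidence scores are computed; a minimal sketch of one common approach, converting per-language log-likelihoods into posterior-style confidences with a softmax under a uniform prior (the language names and scores below are illustrative, not from the paper):

```python
import math

def lid_confidence(log_likelihoods):
    """Turn per-language log-likelihoods into posterior-style confidence
    scores via a numerically stable softmax (uniform prior assumed)."""
    m = max(log_likelihoods.values())  # subtract the max for stability
    exps = {lang: math.exp(ll - m) for lang, ll in log_likelihoods.items()}
    total = sum(exps.values())
    return {lang: e / total for lang, e in exps.items()}

scores = lid_confidence({"english": -1041.2, "spanish": -1043.8, "german": -1050.1})
best = max(scores, key=scores.get)  # hypothesized language plus a confidence
```

The scores sum to one, so they can be reported alongside the LID hypothesis and thresholded for rejection.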

Automatic dialect identification of extemporaneous, conversational, Latin American Spanish Speech

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Vol. 2, ICASSP, 7-10 May 1996, pp. 777-780.

Summary

A dialect identification technique is described that takes as input extemporaneous, conversational speech spoken in Latin American Spanish and produces as output a hypothesis of the dialect. The system has been trained to recognize Cuban and Peruvian dialects of Spanish, but could be extended easily to other dialects (and languages) as well. Building on our experience in automatic language identification, the dialect-ID system uses an English phone recognizer trained on the TIMIT corpus to tokenize training speech spoken in each Spanish dialect. Phonotactic language models generated from this tokenized training speech are used during testing to compute dialect likelihoods for each unknown message. This system has an error rate of 16% on the Cuban/Peruvian two-alternative forced-choice test. We introduce the new "Miami" Latin American Spanish speech corpus that is capable of supporting our research into the future.

Comparison of four approaches to automatic language identification of telephone speech

Published in:
IEEE Trans. Speech Audio Process., Vol. 4, No. 1, January 1996, pp. 31-44.

Summary

We have compared the performance of four approaches for automatic language identification of speech utterances: Gaussian mixture model (GMM) classification; single-language phone recognition followed by language-dependent, interpolated n-gram language modeling (PRLM); parallel PRLM, which uses multiple single-language phone recognizers, each trained in a different language; and language-dependent parallel phone recognition (PPR). These approaches, which span a wide range of training requirements and levels of recognition complexity, were evaluated with the Oregon Graduate Institute Multi-Language Telephone Speech Corpus. Systems containing phone recognizers performed better than the simpler GMM classifier. The top-performing system was parallel PRLM, which exhibited an error rate of 2% for 45-s utterances and 5% for 10-s utterances in two-language, closed-set, forced-choice classification. The error rate for 11-language, closed-set, forced-choice classification was 11% for 45-s utterances and 21% for 10-s utterances.

Language identification using phoneme recognition and phonotactic language modeling

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Vol. 5, ICASSP, 9-12 May 1995, pp. 3503-3506.

Summary

A language identification technique using multiple single-language phoneme recognizers followed by n-gram language models yielded top performance at the March 1994 NIST language identification evaluation. Since the NIST evaluation, work has been aimed at further improving performance by using the acoustic likelihoods emitted from gender-dependent phoneme recognizers to weight the phonotactic likelihoods output from gender-dependent language models. We have investigated the effect of restricting processing to the most highly discriminating n-grams, and we have also added explicit duration modeling at the phonotactic level. On the OGI Multi-language Telephone Speech Corpus, accuracy on an 11-language identification task has risen to 89% on 45-s utterances and 79% on 10-s utterances. Two-language classification accuracy is 98% and 95% for the 45-s and 10-s utterances, respectively. Finally, we have started to apply these same techniques to the problem of dialect identification.

The effects of telephone transmission degradations on speaker recognition performance

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 1, Speech, 9-12 May 1995, pp. 329-332.

Summary

The two largest factors affecting automatic speaker identification performance are the size of the population and the degradations introduced by noisy communication channels (e.g., telephone transmission). To examine these two factors experimentally, this paper presents text-independent speaker identification results for varying speaker population sizes up to 630 speakers for both clean, wideband speech and telephone speech. A system based on Gaussian mixture speaker identification is used, and experiments are conducted on the TIMIT and NTIMIT databases. These are believed to be the first speaker identification experiments on the complete 630-speaker TIMIT and NTIMIT databases and the largest text-independent speaker identification task reported to date. Identification accuracies of 99.5% and 60.7% are achieved on the TIMIT and NTIMIT databases, respectively. This paper also presents experiments which examine and attempt to quantify the performance loss associated with various telephone degradations by systematically degrading the TIMIT speech in a manner consistent with measured NTIMIT degradations and measuring the performance loss at each step. It is found that the standard degradations of filtering and additive noise do not account for all of the performance gap between the TIMIT and NTIMIT data. Measurements of nonlinear microphone distortions are also...

Automatic language identification of telephone speech messages using phoneme recognition and N-gram modeling

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 1, Speech Processing, 19-22 April 1994, pp. 305-308.

Summary

This paper compares the performance of four approaches to automatic language identification (LID) of telephone speech messages: Gaussian mixture model classification (GMM), language-independent phoneme recognition followed by language-dependent language modeling (PRLM), parallel PRLM (PRLM-P), and language-dependent parallel phoneme recognition (PPR). These approaches span a wide range of training requirements and levels of recognition complexity. All approaches were tested on the development test subset of the OGI multi-language telephone speech corpus. Generally, system performance was directly related to system complexity, with PRLM-P and PPR performing best. On 45-second test utterances, average two-language, closed-set, forced-choice classification performance reached 94.5% correct. The best 10-language, closed-set, forced-choice performance was 79.2% correct.

Digital signal processing applications in cochlear-implant research

Published in:
Lincoln Laboratory Journal, Vol. 7, No. 1, Spring 1994, pp. 31-62.

Summary

We have developed a facility that enables scientists to investigate a wide range of sound-processing schemes for human subjects with cochlear implants. This digital signal processing (DSP) facility, named the Programmable Interactive System for Cochlear Implant Electrode Stimulation (PISCES), was designed, built, and tested at Lincoln Laboratory and then installed at the Cochlear Implant Research Laboratory (CIRL) of the Massachusetts Eye and Ear Infirmary (MEEI). New stimulator algorithms that we designed and ran on PISCES have resulted in speech-reception improvements for implant subjects relative to commercial implant stimulators. These improvements were obtained as a result of interactive algorithm adjustment in the clinic, thus demonstrating the importance of a flexible signal-processing facility. Research has continued in the development of a laboratory-based, software-controlled, real-time, speech processing system; the exploration of new sound-processing algorithms for improved electrode stimulation; and the design of wearable stimulators that will allow subjects full-time use of stimulator algorithms developed and tested in a laboratory setting.

Automatic language identification using Gaussian mixture and hidden Markov models

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Vol. 2, Speech Processing, ICASSP, 27-30 April 1993, pp. 399-402.

Summary

Ergodic, continuous-observation, hidden Markov models (HMMs) were used to perform automatic language classification and detection of speech messages. State observation probability densities were modeled as tied Gaussian mixtures. The algorithm was evaluated on four multilanguage speech databases: a three language subset of the Spoken Language Library, a three language subset of a five language Rome Laboratory database, the 20 language CCITT database, and the ten language OGI telephone speech database. Generally, performance of a single state HMM (i.e. a static Gaussian mixture classifier) was comparable to the multistate HMMs, indicating that the sequential modeling capabilities of HMMs were not exploited.
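The "single-state HMM" mentioned above reduces to a static Gaussian mixture classifier: score each feature frame under every language's mixture and pick the language with the highest average log-likelihood. A toy 1-D sketch under that assumption (the mixture parameters below are invented; real systems use EM-trained mixtures over cepstral features):

```python
import math

def gmm_loglik(frame, gmm):
    """Log-likelihood of one feature vector under a diagonal-covariance
    Gaussian mixture; gmm is a list of (weight, means, variances)."""
    comps = []
    for w, mu, var in gmm:
        ll = math.log(w)
        for x, m, v in zip(frame, mu, var):
            ll += -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
        comps.append(ll)
    m = max(comps)  # log-sum-exp over mixture components
    return m + math.log(sum(math.exp(c - m) for c in comps))

def classify(frames, models):
    """Static GMM classifier (the single-state HMM case): choose the
    language whose mixture gives the highest average frame log-likelihood."""
    return max(models,
               key=lambda lang: sum(gmm_loglik(f, models[lang]) for f in frames)
                                / len(frames))

models = {
    "english": [(0.6, [0.0], [1.0]), (0.4, [2.0], [0.5])],
    "spanish": [(1.0, [5.0], [1.0])],
}
lang = classify([[0.1], [1.9], [0.4]], models)
```

A multistate ergodic HMM adds transition probabilities between such mixtures; the paper's finding was that this sequential structure gave little benefit over the static classifier.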

Two-talker pitch tracking for co-channel talker interference suppression

Published in:
MIT Lincoln Laboratory Report TR-951

Summary

Almost all co-channel talker interference suppression systems use the difference in the pitches of the target and jammer speakers to suppress the jammer and enhance the target. While joint pitch estimators outputting two pitch estimates as a function of time have been proposed, the task of proper assignment of pitch to speaker (two-talker pitch tracking) has proven difficult. This report describes several approaches to the two-talker pitch tracking problem including algorithms for pitch track interpolation, spectral envelope tracking, and spectral envelope classification. When evaluated on an all-voiced two-talker database, the best of these new tracking systems correctly assigned pitch 87% of the time given perfect joint pitch estimation.
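The assignment problem described above, deciding which of two unordered per-frame pitch estimates belongs to which talker, can be illustrated with a greedy continuity heuristic: keep whichever pairing minimizes the total pitch jump from the previous frame. This is only a stand-in for the report's interpolation, envelope-tracking, and classification approaches:

```python
def assign_pitch_tracks(joint_estimates):
    """Greedy continuity-based assignment of unordered per-frame
    (pitch, pitch) pairs to two talker tracks."""
    t1, t2 = [joint_estimates[0][0]], [joint_estimates[0][1]]
    for a, b in joint_estimates[1:]:
        # keep the pairing that minimizes total pitch jump from the last frame
        keep = abs(a - t1[-1]) + abs(b - t2[-1])
        swap = abs(b - t1[-1]) + abs(a - t2[-1])
        if swap < keep:
            a, b = b, a
        t1.append(a)
        t2.append(b)
    return t1, t2

# unordered joint pitch estimates (Hz) per frame, invented for illustration
pairs = [(120, 210), (205, 122), (124, 200)]
track1, track2 = assign_pitch_tracks(pairs)
```

Pure continuity fails when the two pitch contours cross, which is why the report brings in spectral-envelope evidence to disambiguate speaker identity.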

Automatic talker activity labeling for co-channel talker interference suppression

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Vol. 2, Speech Processing 2; VLSI; Audio and Electroacoustics, ICASSP, 3-6 April 1990, pp. 813-816.

Summary

This paper describes a speaker activity detector taking co-channel speech as input and labeling intervals of the input as target-only, jammer-only, or two-speaker (target+jammer). The algorithms applied were borrowed primarily from speaker recognition, thereby allowing us to use speaker-dependent test-utterance-independent information in a front-end for co-channel talker interference suppression. Parameters studied included classifier choice (vector quantization vs. Gaussian), training method (unsupervised vs. supervised), test utterance segmentation (uniform vs. adaptive), and training and testing target-to-jammer ratios. Using analysis interval lengths of 100 ms, performance reached 80% correct detection.
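One of the classifier choices studied, vector quantization, can be sketched as follows: compare the average VQ distortion of an analysis interval against each speaker's codebook, and call the interval two-speaker when the distortions are close. The codebooks, margin, and labels below are illustrative assumptions, not the paper's actual configuration:

```python
def vq_distortion(frames, codebook):
    """Average distortion of frames against a speaker's VQ codebook
    (squared Euclidean distance to the nearest codeword)."""
    total = 0.0
    for f in frames:
        total += min(sum((x - c) ** 2 for x, c in zip(f, code))
                     for code in codebook)
    return total / len(frames)

def label_interval(frames, cb_target, cb_jammer, margin=1.0):
    """Toy activity labeler: compare distortions under the target's and
    jammer's codebooks; comparable distortions suggest both talkers active."""
    dt = vq_distortion(frames, cb_target)
    dj = vq_distortion(frames, cb_jammer)
    if abs(dt - dj) < margin:
        return "target+jammer"
    return "target-only" if dt < dj else "jammer-only"

label = label_interval([[0.1], [0.2]],
                       cb_target=[[0.0], [1.0]], cb_jammer=[[5.0]])
```

Because the codebooks are trained per speaker but applied to any test utterance, this is the speaker-dependent, test-utterance-independent front end the abstract describes.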