Publications

Shunting networks for multi-band AM-FM decomposition

Published in:
Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 17-20 October 1999.

Summary

We describe a transduction-based, neurodynamic approach to estimating the amplitude-modulated (AM) and frequency-modulated (FM) components of a signal. We show that the transduction approach can be realized as a bank of constant-Q bandpass filters followed by envelope detectors and shunting neural networks, and the resulting dynamical system is capable of robust AM-FM estimation. Our model is consistent with recent psychophysical experiments that indicate AM and FM components of acoustic signals may be transformed into a common neural code in the brain stem via FM-to-AM transduction. The shunting network for AM-FM decomposition is followed by a contrast enhancement shunting network that provides a mechanism for robustly selecting auditory filter channels as the FM of an input stimulus sweeps across the multiple filters. The AM-FM output of the shunting networks may provide a robust feature representation and is being considered for applications in signal recognition and multi-component decomposition problems.
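The front end described above, a constant-Q filterbank followed by envelope detectors whose outputs are combined across channels, can be illustrated with a minimal sketch. The code below is not the paper's shunting-network dynamical system; it assumes Butterworth bandpass filters, Hilbert envelopes, and an envelope-weighted centroid as a crude stand-in for the channel-selection stage, with all function names and parameter values chosen here only for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amfm_frontend(x, fs, n_chan=32, f_lo=100.0, f_hi=4000.0, q=8.0):
    """Constant-Q bandpass filterbank followed by envelope detectors.
    Returns a crude AM estimate (summed channel envelope) and FM estimate
    (envelope-weighted centroid of channel center frequencies)."""
    fcs = np.geomspace(f_lo, f_hi, n_chan)            # log-spaced center frequencies
    envs = np.empty((n_chan, len(x)))
    for i, fc in enumerate(fcs):
        bw = fc / q                                    # constant-Q bandwidth
        b, a = butter(2, [fc - bw / 2.0, fc + bw / 2.0], btype="bandpass", fs=fs)
        envs[i] = np.abs(hilbert(filtfilt(b, a, x)))   # envelope detection per channel
    am = envs.sum(axis=0)
    fm = (fcs[:, None] * envs).sum(axis=0) / (am + 1e-12)
    return am, fm

# Example: a chirp sweeping 300 Hz -> 2 kHz appears as a moving peak across
# channels; the envelope-weighted centroid tracks the instantaneous frequency,
# which is the FM-to-AM transduction effect the model exploits.
fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
x = np.cos(2 * np.pi * (300 * t + 1700 * t ** 2))
am, fm = amfm_frontend(x, fs)
```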

A study of computation speed-ups of the GMM-UBM speaker recognition system

Published in:
6th European Conf. on Speech Communication and Technology, EUROSPEECH, 5-9 September 1999.

Summary

The Gaussian Mixture Model Universal Background Model (GMM-UBM) speaker recognition system has demonstrated very high performance in several NIST evaluations. Such evaluations, however, are concerned only with classification accuracy. In many applications, system effectiveness must be evaluated in light of both accuracy and execution speed. We present here a number of techniques for decreasing computation. Using data from the Switchboard telephone speech corpus, we show that significant speed-ups can be obtained while sacrificing surprisingly little accuracy. We expect that these techniques, involving lowering model order as well as processing fewer speech frames, will apply equally well to other recognition systems.
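As an illustration of the kind of computation reduction the abstract refers to, the sketch below shows two generic GMM-UBM shortcuts: scoring only every k-th frame, and evaluating the target model only on the few UBM mixture components that dominate each frame. The data layout (weight/mean/variance arrays) and the top-C trick are assumptions made here for illustration, not a description of the paper's exact techniques; lowering model order composes directly with both.

```python
import numpy as np

def diag_gmm_loglikes(X, w, mu, var):
    """Per-frame, per-component log(weight * N(x; mu, var)) for a
    diagonal-covariance GMM. X: (T, D); w: (M,); mu, var: (M, D)."""
    D = X.shape[1]
    ll = -0.5 * (D * np.log(2 * np.pi)
                 + np.log(var).sum(axis=1)[None, :]
                 + (((X[:, None, :] - mu[None]) ** 2) / var[None]).sum(axis=2))
    return ll + np.log(w)[None, :]

def fast_gmm_ubm_score(X, ubm, target, decim=4, top_c=5):
    """Average log-likelihood ratio of a target GMM against a UBM, with two
    speed-ups: frame decimation and scoring the target only on the top-C
    UBM components per frame. `ubm` and `target` are (w, mu, var) tuples."""
    Xd = X[::decim]                                      # process fewer frames
    ubm_ll = diag_gmm_loglikes(Xd, *ubm)                 # (T', M)
    top = np.argsort(ubm_ll, axis=1)[:, -top_c:]         # dominant UBM components
    ubm_frame = np.logaddexp.reduce(np.take_along_axis(ubm_ll, top, axis=1), axis=1)
    tgt_ll = diag_gmm_loglikes(Xd, *target)
    tgt_frame = np.logaddexp.reduce(np.take_along_axis(tgt_ll, top, axis=1), axis=1)
    return np.mean(tgt_frame - ubm_frame)                # average LLR score
```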

Evaluation of confidence measures for language identification

Published in:
6th European Conf. on Speech Communication and Technology, EUROSPEECH, 5-9 September 1999.

Summary

In this paper we examine various ways to derive confidence measures for a language identification system, using phone recognition followed by language models, and describe the application of an evaluation metric for measuring the "goodness" of the different confidence measures. Experiments are conducted on the 1996 NIST Language Identification Evaluation corpus (derived from the Callfriend corpus of conversational telephone speech). The system is trained on the NIST 96 development data and evaluated on the NIST 96 evaluation data. Results indicate that we are able to predict the performance of a system and quantitatively evaluate how well the prediction holds on new data.
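The abstract does not spell out the confidence measures or the evaluation metric, so the sketch below pairs one common choice of each: a posterior-based confidence computed from per-language scores, and a normalized cross-entropy measure of how well the confidences predict correctness. Both are assumptions made here for illustration rather than the paper's definitions.

```python
import numpy as np

def posterior_confidence(lang_log_likelihoods):
    """Turn per-language log-likelihood scores into posteriors (uniform prior)
    and use the winning posterior as the confidence of the decision."""
    s = np.asarray(lang_log_likelihoods, dtype=float)
    post = np.exp(s - np.logaddexp.reduce(s))
    return float(post.max()), int(post.argmax())

def normalized_cross_entropy(conf, correct):
    """'Goodness' of a confidence measure: 1.0 means the confidences perfectly
    predict whether each trial was correct, 0.0 means they are no better than
    always predicting the overall accuracy, and negative values mean worse."""
    conf = np.clip(np.asarray(conf, dtype=float), 1e-6, 1 - 1e-6)
    correct = np.asarray(correct, dtype=bool)
    p = np.clip(correct.mean(), 1e-6, 1 - 1e-6)          # baseline accuracy
    h_base = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    h_conf = -np.mean(np.where(correct, np.log2(conf), np.log2(1 - conf)))
    return (h_base - h_conf) / h_base
```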

Speaker and language recognition using speech codec parameters

Summary

In this paper, we investigate the effect of speech coding on speaker and language recognition tasks. Three coders were selected to cover a wide range of quality and bit rates: GSM at 12.2 kb/s, G.729 at 8 kb/s, and G.723.1 at 5.3 kb/s. Our objective is to measure recognition performance from either the synthesized speech or directly from the coder parameters themselves. We show that using speech synthesized from the three codecs, GMM-based speaker verification and phone-based language recognition performance generally degrades with coder bit rate, i.e., from GSM to G.729 to G.723.1, relative to an uncoded baseline. In addition, speaker verification for all codecs shows a performance decrease as the degree of mismatch between training and testing conditions increases, while language recognition exhibits no decrease in performance. We also present initial results in determining the relative importance of codec system components in their direct use for recognition tasks. For the G.729 codec, it is shown that removal of the post-filter in the decoder helps speaker verification performance under the mismatched condition. On the other hand, with use of G.729 LSF-based mel-cepstra, performance decreases under all conditions, indicating the need for a residual contribution to the feature representation.

Modeling of the glottal flow derivative waveform with application to speaker identification

Published in:
IEEE Trans. Speech Audio Process., Vol. 7, No. 5, September 1999, pp. 569-586.

Summary

An automatic technique for estimating and modeling the glottal flow derivative source waveform from speech, and applying the model parameters to speaker identification, is presented. The estimate of the glottal flow derivative is decomposed into coarse structure, representing the general flow shape, and fine structure, comprising aspiration and other perturbations in the flow, from which model parameters are obtained. The glottal flow derivative is estimated using an inverse filter determined within a time interval of vocal-fold closure that is identified through differences in formant frequency modulation during the open and closed phases of the glottal cycle. This formant motion is predicted by Ananthapadmanabha and Fant to be a result of time-varying and nonlinear source/vocal tract coupling within a glottal cycle. The glottal flow derivative estimate is modeled using the Liljencrants-Fant model to capture its coarse structure, while the fine structure of the flow derivative is represented through energy and perturbation measures. The model parameters are used in a Gaussian mixture model speaker identification (SID) system. Both coarse- and fine-structure glottal features are shown to contain significant speaker-dependent information. For a large TIMIT database subset, averaging over male and female SID scores, the coarse-structure parameters achieve about 60% accuracy, the fine-structure parameters give about 40% accuracy, and their combination yields about 70% correct identification. Finally, in preliminary experiments on the counterpart telephone-degraded NTIMIT database, about a 5% error reduction in SID scores is obtained when source features are combined with traditional mel-cepstral measures.
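For reference, the coarse-structure model named above, the Liljencrants-Fant (LF) model, can be written down compactly. The sketch below generates one period of the LF flow derivative from timing parameters; it solves the return-phase constant by fixed-point iteration but, for brevity, leaves the open-phase growth factor alpha as a free parameter rather than solving the full zero-net-flow (area-balance) condition that a complete LF fit imposes. Parameter values in the example are illustrative, not taken from the paper.

```python
import numpy as np

def lf_flow_derivative(fs, T0, Tp, Te, Ta, Ee=1.0, alpha=0.0):
    """One period of the LF glottal flow derivative.
    Open phase:   E(t) = E0 * exp(alpha*t) * sin(pi*t/Tp),          0  <= t <= Te
    Return phase: E(t) = -(Ee/(eps*Ta)) * (exp(-eps*(t-Te)) - exp(-eps*(T0-Te))),
                                                                    Te <  t <= T0
    eps is fixed by the implicit constraint eps*Ta = 1 - exp(-eps*(T0-Te)),
    which also makes the two phases continuous (E(Te) = -Ee)."""
    t = np.arange(0.0, T0, 1.0 / fs)
    wg = np.pi / Tp
    eps = 1.0 / Ta
    for _ in range(100):                                 # fixed-point solve for eps
        eps = (1.0 - np.exp(-eps * (T0 - Te))) / Ta
    E0 = -Ee / (np.exp(alpha * Te) * np.sin(wg * Te))    # continuity at t = Te
    open_ph = E0 * np.exp(alpha * t) * np.sin(wg * t)
    ret_ph = -(Ee / (eps * Ta)) * (np.exp(-eps * (t - Te)) - np.exp(-eps * (T0 - Te)))
    return np.where(t <= Te, open_ph, ret_ph)

# Example: a 125 Hz cycle with typical timing ratios.
e = lf_flow_derivative(fs=16000, T0=0.008, Tp=0.0035, Te=0.0048, Ta=0.0004,
                       Ee=1.0, alpha=300.0)
```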

Understanding-based translingual information retrieval

Published in:
4th Int. Conf. on Applications of Natural Language to Information Systems, 17-19 June 1999, pp. 187-195.

Summary

This paper describes our preliminary research on an understanding-based translingual information retrieval system for which the input to the system is a query sentence in English, and the output of the system is a set of documents either in English or in Korean. The understanding module produces a meaning representation, called a semantic frame, of the input sentence, in which the predicate-argument structure and the question-type of the input are identified and each keyword is assigned its concept category. The translingual search module performs search on an English and Korean bilingual corpus tagged with concept categories. The results of our preliminary experiment, performed on a document set consisting of slides and notes from English and Korean briefings in a military domain, indicate that an understanding-based approach to information retrieval combined with a concept-based search technique improves both precision and recall compared with a keyword match technique without understanding, for both monolingual and translingual retrieval. Current work is directed at further development of the system and at preparation for tests on larger corpora.
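The concept-based search idea, matching on language-independent concept categories rather than on surface keywords, can be illustrated with a toy. Everything below (the tiny concept lexicon, the pre-tagged documents, the overlap scoring) is hypothetical; the actual system derives the categories and predicate-argument structure from its understanding module.

```python
# Hypothetical concept lexicon standing in for the understanding module's
# concept categorization of query keywords.
CONCEPTS = {
    "tank": "VEHICLE", "trucks": "VEHICLE",
    "attack": "ACTION", "defend": "ACTION",
    "hill": "LOCATION", "river": "LOCATION",
}

def tag_query(text):
    """Assign each recognized keyword in the query its concept category."""
    return {CONCEPTS[w] for w in text.lower().split() if w in CONCEPTS}

def concept_search(query, tagged_docs):
    """Rank documents by overlap of concept categories with the query.
    `tagged_docs` maps a document id (English or Korean) to the set of
    concept tags it was annotated with, so matching is language independent."""
    q = tag_query(query)
    ranked = sorted(tagged_docs.items(), key=lambda kv: len(q & kv[1]), reverse=True)
    return [doc_id for doc_id, tags in ranked if q & tags]

# Example: an English query retrieves a Korean-language briefing slide that
# shares the ACTION and LOCATION concepts.
docs = {"slide_en_03": {"VEHICLE", "LOCATION"},
        "slide_ko_17": {"ACTION", "LOCATION"}}
print(concept_search("attack the hill", docs))   # ['slide_ko_17', 'slide_en_03']
```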

Security implications of adaptive multimedia distribution

Published in:
Proc. IEEE Int. Conf. on Communications, Multimedia and Wireless, Vol. 3, 6-10 June 1999, pp. 1563-1567.

Summary

We discuss the security implications of different techniques used in adaptive audio and video distribution. Several sources of variability in the network make it necessary for applications to adapt. Ideally, each receiver should receive media quality commensurate with the capacity of the path leading to it from each sender. Several different techniques have been proposed to provide such adaptation. We discuss the implications of each technique for confidentiality, authentication, integrity, and anonymity. By coincidence, the techniques with better performance also have better security properties.

Automatic speaker clustering from multi-speaker utterances

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. II, 15-19 March 1999, pp. 817-820.

Summary

Blind clustering of multi-person utterances by speaker is complicated by the fact that each utterance has at least two talkers. In the case of a two-person conversation, one can simply split each conversation into its respective speaker halves, but this introduces error which ultimately hurts clustering. We propose a clustering algorithm which is capable of associating each conversation with two clusters (and therefore two speakers), obviating the need for splitting. Results are given for two-speaker conversations culled from the Switchboard corpus, and comparisons are made to results obtained on single-speaker utterances. We conclude that although the approach is promising, our technique for computing inter-conversation similarities prior to clustering needs improvement.
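The central idea, letting each conversation belong to two clusters at once, can be illustrated with a deliberately simple toy: a k-means-style loop in which every conversation vector is assigned to its two nearest speaker centroids. This is only a caricature of the idea; the paper's algorithm and its inter-conversation similarity computation differ, and every name and parameter below is illustrative.

```python
import numpy as np

def two_speaker_kmeans(conv_feats, n_speakers, n_iter=50, seed=0):
    """Toy clustering in which each conversation is linked to TWO speaker
    clusters: each conversation vector (e.g., an averaged acoustic feature of
    the whole two-person call) is assigned to its two nearest centroids, and
    centroids are re-estimated from the conversations assigned to them."""
    rng = np.random.default_rng(seed)
    X = np.asarray(conv_feats, dtype=float)
    centroids = X[rng.choice(len(X), n_speakers, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)  # (N, K)
        assign = np.argsort(d, axis=1)[:, :2]                        # two nearest
        for k in range(n_speakers):
            members = X[(assign == k).any(axis=1)]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return assign, centroids
```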

Corpora for the evaluation of speaker recognition systems

Published in:
ICASSP 1999, Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, 15-19 March 1999.

Summary

Using standard speech corpora for development and evaluation has proven to be very valuable in promoting progress in speech and speaker recognition research. In this paper, we present an overview of current publicly available corpora intended for speaker recognition research and evaluation. We outline the corpora's salient features with respect to their suitability for conducting speaker recognition experiments and evaluations. Links to these corpora, and to new corpora, will appear on the web at http://www.apl.jhu.edu/Classes/Notes/Campbell/SpkrRec/. We hope to increase the awareness and use of these standard corpora and corresponding evaluation procedures throughout the speaker recognition community.

Implications of glottal source for speaker and dialect identification

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. II, 15-19 March 1999, pp. 813-816.

Summary

In this paper we explore the importance of speaker specific information carried in the glottal source. We time align utterances of two speakers speaking the same sentence from the TIMIT database of American English. We then extract the glottal flow derivative from each speaker and interchange them. Through time alignment and this glottal flow transformation, we can make a speaker of a northern dialect sound more like his southern counterpart. We also time align the utterances of two speakers of Spanish dialects speaking the same sentence and then perform the glottal waveform transformation. Through these processes a Peruvian speaker is made to sound more Cuban-like. From these experiments we conclude that significant speaker and dialect specific information, such as noise, breathiness or aspiration, and vocalization, is carried in the glottal signal.
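The time-alignment step this transformation depends on is not detailed in the abstract; a standard choice is dynamic time warping (DTW) over frame-level spectral features, sketched below as an assumption. The returned path says which frames of one utterance correspond to which frames of the other, after which the glottal flow derivatives estimated from matched frames can be interchanged.

```python
import numpy as np

def dtw_align(A, B):
    """Dynamic time warping between two feature sequences A (T1, D) and
    B (T2, D), e.g., frame-level spectral vectors of two utterances of the
    same sentence. Returns the warping path as a list of (i, j) frame pairs."""
    T1, T2 = len(A), len(B)
    dist = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # local costs
    D = np.full((T1 + 1, T2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            D[i, j] = dist[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the lowest-cost path from the end of both sequences.
    path, i, j = [], T1, T2
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```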