Publications


Spoken language systems

Summary

Spoken language is the most natural and common form of human-human communication, whether face to face, over the telephone, or through various communication media such as radio and television. In contrast, human-machine interaction is currently achieved largely through keyboard strokes, pointing, or other mechanical means, using highly stylized languages. Communication, whether human-human or human-machine, suffers greatly when the two communicating agents do not "speak" the same language. The ultimate goal of work on spoken language systems is to overcome this language barrier by building systems that provide the necessary interpretive function between various languages, thus establishing spoken language as a versatile and natural communication medium between humans and machines and among humans speaking different languages.

A system for acoustic-phonetic analysis of continuous speech

Published in:
Proc. IEEE Symp. on Speech Recognition, 15-19 April 1974, pp. 54-67.

Summary

A system for acoustic-phonetic analysis of continuous speech is being developed to serve as part of an automatic speech understanding system. The acoustic system accepts the speech waveform as input and produces as output a string of phoneme-like units referred to as acoustic phonetic elements (APELs). This paper should be considered a progress report, since the system is still under active development. The initial phase of the acoustic analysis consists of signal processing and parameter extraction, and includes spectrum analysis via linear prediction, computation of a number of parameters of the spectrum, and fundamental frequency extraction. This is followed by a preliminary segmentation of the speech into a few broad acoustic categories and formant tracking during vowel-like segments. The next phase consists of more detailed segmentation and classification intended to meet the needs of subsequent linguistic analysis. The preliminary segmentation and segment classification yield the following categories: vowel-like sound; volume dip within vowel-like sound; fricative-like sound; stop consonants, including silence or voice bar, and associated burst. These categories are produced by a decision tree based upon energy measurements in selected frequency bands, derivatives and ratios of these measurements, a voicing detector, and a few editing rules.
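The broad-category decision tree described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the thresholds, band choices, and the `voiced` flag (standing in for the voicing detector) are all assumptions made for the example.

```python
def classify_frame(low_energy, high_energy, total_energy, voiced,
                   silence_thresh=0.01, hf_ratio_thresh=2.0):
    """Assign one analysis frame to a broad acoustic category.

    low_energy / high_energy: energy in low / high frequency bands
    total_energy: full-band energy
    voiced: output of a (hypothetical) voicing detector
    Thresholds are illustrative, not from the paper.
    """
    if total_energy < silence_thresh:
        # Near-silence: stop closure, or a voice bar if weakly voiced
        return "voice-bar" if voiced else "silence"
    if not voiced and high_energy / max(low_energy, 1e-9) > hf_ratio_thresh:
        # Strong unvoiced high-frequency energy suggests frication
        return "fricative-like"
    if voiced and low_energy > high_energy:
        # Voiced, low-frequency-dominant energy suggests a vowel-like sound
        return "vowel-like"
    return "other"
```

A real system would apply such decisions frame by frame and then smooth the label sequence with editing rules, as the abstract notes.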
The more detailed classification algorithms include: 1) detection and identification of some diphthongs, semivowels, and nasals, through analysis of formant motions, positions, and amplitudes; 2) a vowel identifier, which determines three ranked choices for each vowel based on a comparison of the formant positions in the detected vowel segment to stored formant positions in a speaker-normalized vowel table; 3) a fricative identifier, which employs measurement of relative spectral energies in several bands to group the fricative segments into phoneme-like categories; 4) stop consonant classification based on the properties of the plosive burst. The above algorithms have been tested on a substantial corpus of continuous speech data. Performance results, as well as detailed descriptions of the algorithms, are given.
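The vowel-identifier step (item 2 above) can be illustrated as a nearest-neighbor lookup against a formant table, returning three ranked choices. The table values below are rough textbook formant frequencies and the Euclidean distance metric is an assumption for the sketch; the paper's table is speaker-normalized.

```python
import math

# Illustrative (F1, F2) formant frequencies in Hz; not the paper's table.
VOWEL_TABLE = {
    "iy": (270, 2290),
    "ih": (390, 1990),
    "eh": (530, 1840),
    "ae": (660, 1720),
    "aa": (730, 1090),
    "uw": (300, 870),
}

def identify_vowel(f1, f2, n_choices=3):
    """Rank vowels by distance of measured (f1, f2) to table entries."""
    ranked = sorted(VOWEL_TABLE,
                    key=lambda v: math.hypot(f1 - VOWEL_TABLE[v][0],
                                             f2 - VOWEL_TABLE[v][1]))
    return ranked[:n_choices]
```

For example, measured formants near (280 Hz, 2250 Hz) would rank "iy" first, with "ih" and "eh" as the remaining candidates.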
