Publications


A block diagram compiler for a digital signal processing MIMD computer

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 4, 6-9 April 1987, pp. 1867-1870.

Summary

A Block Diagram Compiler (BDC) has been designed and implemented for converting graphic block diagram descriptions of signal processing tasks into source code to be executed on a Multiple Instruction Stream - Multiple Data Stream (MIMD) array computer. The compiler takes as input a block diagram of a real-time DSP application, entered on a graphics CAE workstation, and translates it into efficient real-time assembly language code for the target multiprocessor array, in much the same way that a good assembly language programmer would write it. The current implementation produces code for a rectangular grid of Texas Instruments TMS32010 signal processors built at Lincoln Laboratory, but the concept could be extended to other processors or other geometries. This report begins by examining the current implementation of the BDC, including relevant aspects of the target hardware. Next, we describe the task-assignment module, which uses a simulated annealing algorithm to assign the processing tasks of the DSP application to individual processors in the array. Finally, our experiences with the current version of the BDC software and hardware are reported.
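The simulated-annealing task assignment mentioned above can be illustrated with a short sketch. The traffic-weighted Manhattan-distance cost, the default 4x4 grid, and the geometric cooling schedule are illustrative assumptions, not the paper's actual formulation.

```python
import math
import random

def assign_tasks(tasks, comm, grid=(4, 4), steps=20000, t0=5.0, alpha=0.9995):
    """Toy simulated annealing: map each task to a processor on a
    rectangular grid so heavily communicating tasks land close together.
    comm[(a, b)] is the assumed traffic between tasks a and b."""
    procs = [(r, c) for r in range(grid[0]) for c in range(grid[1])]
    place = {t: random.choice(procs) for t in tasks}

    def cost(p):
        # Total communication cost: traffic times Manhattan grid distance.
        return sum(w * (abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]))
                   for (a, b), w in comm.items())

    cur = cost(place)
    temp = t0
    for _ in range(steps):
        task = random.choice(tasks)
        old = place[task]
        place[task] = random.choice(procs)   # propose moving one task
        new = cost(place)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if new <= cur or random.random() < math.exp((cur - new) / temp):
            cur = new
        else:
            place[task] = old                # reject: undo the move
        temp *= alpha                        # cool down
    return place, cur
```

In practice the proposal step and cost would reflect real interconnect constraints; here a single-task move suffices to show the accept/reject structure.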

Mixed-phase deconvolution of speech based on a sine-wave model

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 2, 6-9 April 1987, pp. 649-652.

Summary

This paper describes a new method of deconvolving the vocal cord excitation and vocal tract system response. The technique relies on a sine-wave representation of the speech waveform and forms the basis of an analysis-synthesis method which yields synthetic speech essentially indistinguishable from the original. Unlike an earlier sinusoidal analysis-synthesis technique that used a minimum-phase system estimate, the approach in this paper generates a "mixed-phase" system estimate and thus an improved decomposition of excitation and system components. Since a mixed-phase system estimate is removed from the speech waveform, the resulting excitation residual is less dispersed than the previous sinusoidal-based excitation estimate or the more commonly used linear prediction residual. A method of time-varying linear filtering is given as an alternative to sinusoidal reconstruction, similar to conventional time-domain synthesis used in certain vocoders, but without the requirement of pitch and voicing decisions. Finally, speech modification with a mixed-phase system estimate is shown to be capable of more closely preserving waveform shape in time-scale and pitch transformations than the earlier approach.

Multi-style training for robust isolated-word speech recognition

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 2, 6-9 April 1987, pp. 705-708.

Summary

A new training procedure called multi-style training has been developed to improve performance when a recognizer is used under stress or in high noise but cannot be trained in these conditions. Instead of speaking normally during training, talkers use different, easily produced, talking styles. This technique was tested using a speech data base that included stress speech produced during a workload task and when intense noise was presented through earphones. A continuous-distribution talker-dependent Hidden Markov Model (HMM) recognizer was trained both normally (5 normally spoken tokens) and with multi-style training (one token each from normal, fast, clear, loud, and question-pitch talking styles). The average error rate under stress and normal conditions fell by more than a factor of two with multi-style training, and the average error rate under conditions sampled during training fell by a factor of four.

Two-stage discriminant analysis for improved isolated-word recognition

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 2, 6-9 April 1987, pp. 709-712.

Summary

This paper describes a two-stage isolated-word speech recognition system that uses a Hidden Markov Model (HMM) recognizer in the first stage and a discriminant analysis system in the second stage. During recognition, when the first-stage recognizer is unable to clearly differentiate between acoustically similar words such as "go" and "no", the second-stage discriminator is used. The second-stage system focuses on those parts of the unknown token which are most effective at discriminating the confused words. The system was tested on a 35-word, 10,710-token stress-speech isolated-word data base created at Lincoln Laboratory. Adding the second-stage discriminating system produced the best results to date on this data base, reducing the overall error rate by more than a factor of two.

An introduction to computing with neural nets

Published in:
IEEE ASSP Mag., Vol. 4, No. 2, April 1987, pp. 4-22.

Summary

Artificial neural net models have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition. These models are composed of many nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural nets. Computational elements or nodes are connected via weights that are typically adapted during use to improve performance. There has been a recent resurgence in the field of artificial neural nets caused by new net topologies and algorithms, analog VLSI implementation techniques, and the belief that massive parallelism is essential for high performance speech and image recognition. This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification. These nets are highly parallel building blocks that illustrate neural net components and design principles and can be used to construct more complex systems. In addition to describing these nets, a major emphasis is placed on exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components. Single-layer nets can implement algorithms required by Gaussian maximum-likelihood classifiers and optimum minimum-error classifiers for binary patterns corrupted by noise. More generally, the decision regions required by any classification algorithm can be generated in a straightforward manner by three-layer feed-forward nets.
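As one concrete instance of the single-layer case described above, the optimum minimum-error classifier for binary (+1/-1) patterns corrupted by independent bit flips can be realized as a single layer of weighted sums followed by a maximum selector. The exemplar patterns and flip probability below are illustrative assumptions.

```python
import numpy as np

def binary_ml_classifier(exemplars, bit_flip_p=0.1):
    """Single-layer net implementing the optimum classifier for binary
    (+1/-1) patterns under independent bit flips with probability
    bit_flip_p: each output node computes a weighted sum of the input,
    and the largest output wins (equal class priors assumed)."""
    X = np.asarray(exemplars, dtype=float)            # one row per class
    # Log-likelihood ratio weights: each weight is +/- 0.5*ln((1-p)/p).
    w = 0.5 * np.log((1 - bit_flip_p) / bit_flip_p) * X

    def classify(x):
        # Weighted sums in parallel, then a maximum selector.
        return int(np.argmax(w @ np.asarray(x, dtype=float)))

    return classify
```

A noiseless exemplar correlates perfectly with its own weight row, so it is assigned to its own class; moderately corrupted patterns are assigned to the nearest exemplar in the Hamming sense.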

Speech transformations based on a sinusoidal representation

Published in:
IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-34, No. 6, December 1986, pp. 1449-1464.

Summary

In this paper a new speech analysis/synthesis technique is presented which provides the basis for a general class of speech transformations including time-scale modification, frequency scaling, and pitch modification. These modifications can be performed with a time-varying change, permitting continuous adjustment of a speaker's fundamental frequency and rate of articulation. The method is based on a sinusoidal representation of the speech production mechanism which has been shown to produce synthetic speech that preserves the waveform shape and is perceptually indistinguishable from the original. Although the analysis/synthesis system was originally designed for single-speaker signals, it is also capable of recovering and modifying non-speech signals such as music, multiple speakers, marine biologic sounds, and speakers in the presence of interference such as noise and musical backgrounds.

Speech analysis/synthesis based on a sinusoidal representation

Published in:
IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-34, No. 4, August 1986, pp. 744-754.

Summary

A sinusoidal model for the speech waveform is used to develop a new analysis/synthesis technique that is characterized by the amplitudes, frequencies, and phases of the component sine waves. These parameters are estimated from the short-time Fourier transform using a simple peak-picking algorithm. Rapid changes in the highly resolved spectral components are tracked using the concept of "birth" and "death" of the underlying sine waves. For a given frequency track a cubic function is used to unwrap and interpolate the phase such that the phase track is maximally smooth. This phase function is applied to a sine-wave generator, which is amplitude modulated and added to the other sine waves to give the final speech output. The resulting synthetic waveform preserves the general waveform shape and is essentially perceptually indistinguishable from the original speech. Furthermore, in the presence of noise the perceptual characteristics of the speech as well as the noise are maintained. In addition, it was found that the representation was sufficiently general that high-quality reproduction was obtained for a larger class of inputs including: two overlapping, superposed speech waveforms; music waveforms; speech in musical backgrounds; and certain marine biologic sounds. Finally, the analysis/synthesis system forms the basis for new approaches to the problems of speech transformations including time-scale and pitch-scale modification, and mid-rate speech coding.
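A minimal sketch of the peak-picking analysis and sine-wave synthesis described above, for a single frame only, with no birth/death tracking or cubic phase interpolation; the Hamming window and three-point peak test are assumed illustrative choices, not the paper's exact recipe.

```python
import numpy as np

def analyze_frame(x, fs, n_fft=1024):
    """Pick spectral peaks of one windowed frame: return (amplitude,
    frequency in Hz, phase) triples at local maxima of the magnitude
    of the zero-padded short-time Fourier transform."""
    w = np.hamming(len(x))
    X = np.fft.rfft(x * w, n_fft)
    mag = np.abs(X)
    peaks = [k for k in range(1, len(mag) - 1)
             if mag[k] > mag[k - 1] and mag[k] > mag[k + 1]]
    # Divide by half the window sum so a unit-amplitude cosine reads ~1.0.
    return [(2 * mag[k] / w.sum(), k * fs / n_fft, np.angle(X[k]))
            for k in peaks]

def synth_frame(params, fs, n):
    """Rebuild the frame as a sum of constant-amplitude sine waves."""
    t = np.arange(n) / fs
    return sum(a * np.cos(2 * np.pi * f * t + p) for a, f, p in params)
```

In the full system, frame-to-frame peak matching ("birth" and "death") and smooth phase interpolation replace the per-frame reconstruction shown here.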

Robust HMM-based techniques for recognition of speech produced under stress and in noise

Published in:
Proc. Speech Tech '86, 28-30 April 1986, pp. 241-249.

Summary

Substantial improvements in speech recognition performance on speech produced under stress and in noise have been achieved through the development of techniques for enhancing the robustness of a baseline isolated-word Hidden Markov Model recognizer. The baseline HMM is a continuous-observation system using mel-frequency cepstra as the observation parameters. Enhancement techniques which were developed and tested include: placing a lower limit on the estimated variances of the observations; addition of temporal difference parameters; improved duration modelling; use of fixed diagonal covariance distance functions, with variances adjusted according to perceptual considerations; cepstral domain stress compensation; and multi-style training, where the system is trained on speech spoken with a variety of talking styles. With perceptually-motivated covariance and a combination of normal (single-frame) and differential cepstral observations, average error rates over five simulated-stress conditions were reduced from 20% (baseline) to 2.5% on a simulated-stress data base (105-word vocabulary, eight talkers, five conditions). With variance limiting, normal plus differential observations, and multi-style training, an error rate of 1.8% was achieved. Additional tests were conducted on a data base including nine talkers, eight talking styles, with speech produced under two levels of motor-workload stress. Substantial reductions in error rate were demonstrated for the noise and workload conditions, when multiple talking styles, rather than only normal speech, were used in training. In experiments conducted in simulated fighter cockpit noise, it was shown that error rates could be reduced significantly by training under multiple noise exposure conditions.
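The variance-limiting idea listed above can be sketched as follows; the specific floor rule here (a fraction of the average variance per dimension) is an assumed illustrative choice, not the paper's.

```python
import numpy as np

def floor_variances(variances, floor_scale=0.1):
    """Variance limiting for a continuous-observation HMM with diagonal
    covariances: clamp each estimated variance to a floor so that states
    trained on too little data cannot become implausibly sharp.
    `variances` has shape (n_states, n_dims)."""
    v = np.asarray(variances, dtype=float)
    floor = floor_scale * v.mean(axis=0)   # per-dimension floor (assumed rule)
    return np.maximum(v, floor)
```

During recognition, an unfloored near-zero variance makes a state's likelihood dominate for any observation close to its mean, which is exactly the brittleness this guard prevents.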

A new application of adaptive noise cancellation

Published in:
IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-34, No. 1, February 1986, pp. 21-27.

Summary

A new application of Widrow's adaptive noise cancellation (ANC) is presented in this paper. Specifically, the method is applied to the case where an acoustic barrier exists between the primary and reference microphones. By updating the coefficients of the noise estimation filter only during silence, it is shown that ANC can provide substantial noise reduction with little speech distortion even when the acoustic barrier provides only moderate attenuation of acoustic signals. The use of the modified ANC method is evaluated using an oxygen facemask worn by fighter aircraft pilots. Experiments demonstrate that if a noise field is created using a single source, 11 dB signal-to-noise ratio improvements can be achieved by attaching a reference microphone to the exterior of the facemask. The length of the ANC filter required for this particular environment is only 50 points.
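A sketch of the modified ANC loop described above, with the filter weights frozen whenever speech is active; the normalized-LMS update and step size are illustrative choices rather than the paper's exact algorithm.

```python
import numpy as np

def anc_lms(primary, reference, speech_active, n_taps=50, mu=0.01):
    """Widrow-style adaptive noise canceller: an FIR filter estimates the
    noise reaching the primary microphone from the reference microphone,
    and the estimate is subtracted. Following the modification described
    above, the weights adapt only where speech_active[i] is False."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for i in range(n_taps, len(primary)):
        x = reference[i - n_taps:i][::-1]     # most recent sample first
        e = primary[i] - w @ x                # error signal = cleaned output
        out[i] = e
        if not speech_active[i]:              # freeze filter during speech
            w += mu * e * x / (x @ x + 1e-8)  # normalized LMS update (assumed)
    return out, w
```

With the oxygen-facemask setup in the paper, a 50-tap filter sufficed; the normalization term keeps the update stable when the reference signal level varies.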

Adaptive noise cancellation in a fighter cockpit environment

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, 19-21 March 1984.

Summary

In this paper we discuss some preliminary results on using Widrow's Adaptive Noise Cancelling (ANC) algorithm to reduce the background noise present in a fighter pilot's speech. With a dominant noise source present and with the pilot wearing an oxygen facemask, we demonstrate that good (>10 dB) cancellation of the additive noise and little speech distortion can be achieved by having the reference microphone attached to the outside of the facemask and by updating the filter coefficients only during silence intervals.