Publications


Evaluation of Boeing 747-400 performance during ATC-directed breakouts on final approach

Published in:
MIT Lincoln Laboratory Report ATC-263

Summary

The effects of three different levels of pilot training on the breakout response of pilots and the Boeing 747-400 aircraft were studied. The study examined response during ATC-directed breakouts on final approach and was conducted in three phases. Phase 1 tested performance during manual and autopilot-coupled approaches given current procedures and pilot training. Phase 2 tested the effect of increased pilot situational awareness and proposed ATC breakout phraseology on breakouts during manual and autopilot-coupled approaches. Phase 3 tested the effect of two B747-400-specific breakout procedures on breakouts during autopilot-coupled approaches. Pilot preferences regarding procedures and the tested training materials were also solicited.

Audio signal processing based on sinusoidal analysis/synthesis

Published in:
Chapter 9 in Applications of Digital Signal Processing to Audio and Acoustics, 1998, pp. 343-416.

Summary

Based on a sinusoidal model, an analysis/synthesis technique is developed that characterizes audio signals, such as speech and music, in terms of the amplitudes, frequencies, and phases of the component sine waves. These parameters are estimated by applying a peak-picking algorithm to the short-time Fourier transform of the input waveform. Rapid changes in the highly resolved spectral components are tracked by using a frequency-matching algorithm and the concept of "birth" and "death" of the underlying sine waves. For a given frequency track, a cubic phase function is applied to the sine-wave generator, whose output is amplitude-modulated and added to sines for other frequency tracks. The resulting synthesized signal preserves the general waveform shape and is nearly perceptually indistinguishable from the original, thus providing the basis for a variety of applications including signal modification, sound splicing, morphing and extrapolation, and estimation of sound characteristics such as vibrato. Although this sine-wave analysis/synthesis is applicable to arbitrary signals, tailoring the system to a specific sound class can improve performance. A source/filter phase model is introduced within the sine-wave representation to improve signal modification, as in time-scale and pitch change and dynamic range compression, by attaining phase coherence where sine-wave phase relations are preserved or controlled. A similar method of achieving phase coherence is also applied in revisiting the classical phase vocoder to improve modification of certain signal classes. A second refinement of the sine-wave analysis/synthesis invokes an additive deterministic/stochastic representation of sounds consisting of simultaneous harmonic and aharmonic contributions. A method of frequency tracking is given for the separation of these components, and is used in a number of applications.
The sine-wave model is also extended to two additively combined signals for the separation of simultaneous talkers or music duets. Finally, the use of sine-wave analysis/synthesis in providing insight for FM synthesis is described, and remaining challenges, such as an improved sine-wave representation of rapid attacks and other transient events, are presented.
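The parameter-estimation step named in the abstract (peak-picking on the short-time Fourier transform) can be sketched as follows. This is an illustrative approximation, not the chapter's implementation; the function name, the Hann window, and the magnitude floor are all assumptions.

```python
import numpy as np

def stft_peaks(frame, fs, window=None, min_amp=1e-3):
    """Estimate per-frame sine-wave parameters (amplitude, frequency, phase)
    by picking local maxima of the short-time spectrum of one analysis frame."""
    n = len(frame)
    if window is None:
        window = np.hanning(n)
    spec = np.fft.rfft(frame * window)
    mag = np.abs(spec)
    peaks = []
    for k in range(1, len(mag) - 1):
        # a spectral peak: magnitude above both neighbors and above a floor
        if mag[k] > mag[k - 1] and mag[k] > mag[k + 1] and mag[k] > min_amp:
            peaks.append((mag[k], k * fs / n, np.angle(spec[k])))
    return peaks  # list of (spectral magnitude, frequency in Hz, phase in rad)

# Usage: a 1 kHz tone sampled at 8 kHz should yield one dominant peak.
fs = 8000
t = np.arange(512) / fs
frame = np.cos(2 * np.pi * 1000 * t)
peaks = stft_peaks(frame, fs)
strongest = max(peaks)  # peak with the largest magnitude
```

In the full system each frame's peaks would then be linked across frames by the frequency-matching ("birth"/"death") tracker before resynthesis.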

The Lincoln Near-Earth Asteroid Research (LINEAR) Program

Published in:
Lincoln Laboratory Journal, Vol. 11, No. 1, 1998, pp. 27-40.

Summary

Lincoln Laboratory has been developing electro-optical space-surveillance technology to detect, characterize, and catalog satellites for more than forty years. Recent advances in highly sensitive, large-format charge-coupled devices (CCDs) allow this technology to be applied to detecting and cataloging asteroids, including near-Earth objects (NEOs). When equipped with a new Lincoln Laboratory focal-plane camera and signal processing technology, the 1-m U.S. Air Force ground-based electro-optical deep-space surveillance (GEODSS) telescopes can conduct sensitive large-coverage searches for Earth-crossing and main-belt asteroids. Field measurements indicate that these enhanced telescopes can achieve a limiting magnitude of 22 over a 2-deg² field of view with less than 100 sec of integration. This sensitivity rivals that of much larger telescopes equipped with commercial cameras. Working two years under U.S. Air Force sponsorship, we have developed technology for asteroid search operations at the Lincoln Laboratory Experimental Test Site near Socorro, New Mexico. By using a new large-format 2560 × 1960-pixel frame-transfer CCD camera, we have discovered over 10,000 asteroids, including 53 NEOs and 4 comets as designated by the Minor Planet Center (MPC). In March 1998, the Lincoln Near-Earth Asteroid Research (LINEAR) program provided over 150,000 observations of asteroids (nearly 90% of the world's asteroid observations that month) to the MPC, which resulted in the discovery of 13 NEOs and 1 comet. The MPC indicates that the LINEAR program outperforms all asteroid search programs operated to date.

The effects of compression-induced distortion of graphical weather images on pilot perception, acceptance, and performance

Published in:
MIT Lincoln Laboratory Report ATC-243

Summary

The Graphical Weather Service (GWS) is a data link application that will provide near-real-time graphical weather information to pilots in flight. To assess the effects of GWS, as well as to aid in the proper design, implementation, and certification of the use of GWS in aircraft, two human factors studies have been conducted. The second study conducted (Phase Two) is the topic of this report. Phase Two was conducted to determine the maximum level of compression-induced distortion that would be acceptable for transmission of weather images to the cockpit. To make this determination, the following data were collected and analyzed: pilot subjective ratings of the perceived amount of distortion of a compressed image, pilot subjective ratings of the acceptability of a compressed image for use in the flight task, and pilot route selections as a function of the amount of compression presented in an image. Results indicated that images of low to moderate compression levels were generally acceptable for transmission to the cockpit, while images that were highly compressed were generally unacceptable. In addition, computed measures of image quality have been identified to enable the establishment of a criterion for transmitting images to aircraft.

High-performance low-complexity wordspotting using neural networks

Published in:
IEEE Trans. Signal Process., Vol. 45, No. 11, November 1997, pp. 2864-2870.

Summary

A high-performance low-complexity neural network wordspotter was developed using radial basis function (RBF) neural networks in a hidden Markov model (HMM) framework. Two new complementary approaches substantially improve performance on the talker independent Switchboard corpus. Figure of Merit (FOM) training adapts wordspotter parameters to directly improve the FOM performance metric, and voice transformations generate additional training examples by warping the spectra of training data to mimic across-talker vocal tract variability.
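The radial-basis-function scoring at the core of such a wordspotter can be illustrated with a minimal sketch: a generic Gaussian-kernel RBF layer whose basis outputs are linearly combined into per-class scores. This is not the paper's trained system; the centers, widths, and weights below are toy values.

```python
import numpy as np

def rbf_scores(x, centers, widths, weights):
    """Score a feature vector with an RBF layer: Gaussian kernels around
    stored centers, linearly combined into one score per output class."""
    d2 = np.sum((centers - x) ** 2, axis=1)      # squared distance to each center
    hidden = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian basis activations
    return weights @ hidden                      # one score per class

# Toy usage: two centers, each keyed to its own output class.
centers = np.array([[0.0, 0.0], [3.0, 3.0]])
widths = np.array([1.0, 1.0])
weights = np.array([[1.0, 0.0], [0.0, 1.0]])
scores = rbf_scores(np.array([0.0, 0.0]), centers, widths, weights)
```

In the paper's framework these frame-level scores feed an HMM, and FOM training would adjust the weights against the Figure of Merit directly rather than a frame-level loss.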

The Weather-Huffman method of data compression of weather images

Published in:
MIT Lincoln Laboratory Report ATC-261

Summary

Providing an accurate picture of the weather conditions in the pilot's area of interest is a highly useful application for ground-to-air data links. The problem with using data links to transmit weather graphics is the large number of bits required to exactly specify the weather image. To make transmission of weather images practical, a means must be found to compress the data to a size compatible with a limited data link capacity. The Weather-Huffman (WH) algorithm developed in this report incorporates several subalgorithms in order to encode as faithfully as possible an input weather image within a specified data link bit limitation. The main algorithm component is the encoding of a version of the input image via the Weather-Huffman run-length code, a variant of the standard Huffman code tailored to the peculiarities of weather images. If possible, the input map itself is encoded. Generally, however, a resolution-reduced version of the map must be created prior to the encoding to meet the bit limitation. In that case, the output map will contain blocky regions, and higher weather level areas will tend to bloom in size. Two routines are included in WH to overcome these problems. The first is a Smoother Process, which corrects the blocky edges of weather regions. The second, more powerful routine is the Extra Bit Algorithm (EBA). EBA utilizes all bits remaining in the message after the Huffman encoding to correct pixels set at too high a weather level. Both size and shape of weather regions are adjusted by this algorithm. Pictorial examples of the operation of this algorithm on several severe weather images derived from NEXRAD are presented.
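The core run-length-plus-Huffman step can be illustrated with a generic sketch: collapse a quantized weather image into (level, run) pairs, then Huffman-code those pairs. This is standard run-length and Huffman coding, not the report's weather-tailored WH code; the single-symbol edge case (a uniform image) is ignored for brevity.

```python
import heapq
from collections import Counter
from itertools import groupby

def run_lengths(levels):
    """Collapse a row-major sequence of weather levels into (level, run) pairs."""
    return [(v, len(list(g))) for v, g in groupby(levels)]

def huffman_code(symbols):
    """Build a prefix-free Huffman code for the observed (level, run) symbols."""
    # heap entries: (total count, unique tiebreak index, {symbol: bitstring})
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n0, _, c0 = heapq.heappop(heap)   # two least-frequent subtrees
        n1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c0.items()}
        merged.update({s: "1" + b for s, b in c1.items()})
        heapq.heappush(heap, (n0 + n1, i, merged))
        i += 1
    return heap[0][2]

# Toy 4-level weather image flattened to one row.
image = [0, 0, 0, 0, 2, 2, 0, 0, 0, 3, 3, 3, 3, 3]
runs = run_lengths(image)
code = huffman_code(runs)
bits = "".join(code[r] for r in runs)
```

The report's WH variant additionally drops resolution, smooths block edges, and spends leftover bits (EBA) to stay within the data link budget.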

Noise reduction based on spectral change

Published in:
Proc. of the 1997 IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics, Session 8: Noise Reduction, 19-22 October 1997, 4 pages.

Summary

A noise reduction algorithm is designed for the aural enhancement of short-duration wideband signals. The signal of interest contains components possibly only a few milliseconds in duration and corrupted by a nonstationary noise background. The essence of the enhancement technique is a Wiener filter that uses a desired signal spectrum whose estimation adapts to the "degree of stationarity" of the measured signal. The degree of stationarity is derived from a short-time spectral derivative measurement, motivated by the sensitivity of biological systems to spectral change. Adaptive filter design tradeoffs are described, reflecting the accuracy of signal attack, background fidelity, and perceptual quality of the desired signal. Residual representations for binaural presentation are also considered.
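A minimal sketch of a Wiener gain whose signal-spectrum estimate adapts to a short-time spectral derivative might look like the following. The smoothing constants and the normalization of the "degree of stationarity" are assumptions for illustration, not the paper's design.

```python
import numpy as np

def adaptive_wiener_gains(power_frames, noise_psd, alpha_slow=0.9, alpha_fast=0.1):
    """Per-frame Wiener gains whose desired-signal spectrum estimate adapts to
    the degree of stationarity: stationary spectra are smoothed heavily,
    rapid spectral changes (signal attacks) are tracked quickly."""
    s_hat = np.zeros_like(noise_psd)
    prev = np.zeros_like(noise_psd)
    gains = []
    for p in power_frames:
        # spectral derivative, normalized to [0, 1]: 0 = stationary, 1 = changing
        change = np.abs(p - prev) / (p + prev + 1e-12)
        alpha = alpha_slow * (1 - change) + alpha_fast * change
        s_hat = alpha * s_hat + (1 - alpha) * np.maximum(p - noise_psd, 0.0)
        gains.append(s_hat / (s_hat + noise_psd))   # Wiener gain S/(S+N)
        prev = p
    return gains

# Usage: three noise-only frames, then a sudden wideband onset.
noise_psd = np.ones(4)
frames = [np.ones(4), np.ones(4), np.ones(4), np.full(4, 100.0)]
gains = adaptive_wiener_gains(frames, noise_psd)
```

The design tradeoff the paper describes shows up directly in the two time constants: a slower `alpha_fast` smears attacks, a faster one passes more background.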

Soft-x-ray CCD imagers for AXAF

Published in:
IEEE Trans. Electron Devices, Vol. 44, No. 10, October 1997, pp. 1633-1642.

Summary

We describe the key features and performance data of a 1024 × 1026-pixel frame-transfer imager for use as a soft-x-ray detector on the NASA X-ray observatory Advanced X-ray Astrophysics Facility (AXAF). The four-port device features a floating-diffusion output circuit with a responsivity of 20 µV/e⁻ and noise of about 2 e⁻ at a 100-kHz data rate. Techniques for achieving the low sense-node capacitance of 5 fF are described. The CCD is fabricated on high-resistivity p-type silicon for deep depletion and includes narrow potential troughs for transfer inefficiencies of around 10⁻⁷. To achieve good sensitivity at energies below 1 keV, we have developed a back-illumination process that features low recombination losses at the back surface and has produced efficiencies of about 0.7 at 277 eV (carbon Kα).
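As a sanity check on the quoted figures, the charge-to-voltage conversion at a 5-fF sense node follows from dV = q/C:

```python
# Charge-to-voltage gain at the floating-diffusion sense node: dV = q / C.
q = 1.602e-19          # electron charge, coulombs
c_node = 5e-15         # sense-node capacitance quoted in the abstract, farads
dv_node = q / c_node   # volts per electron at the node, ~32 µV/e⁻
```

The node itself converts at about 32 µV/e⁻; the reported 20 µV/e⁻ output responsivity would then imply a downstream (e.g., source-follower) gain near 0.6, which is an inference here, not a figure from the paper.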

Comparison of background normalization methods for text-independent speaker verification

Published in:
5th European Conf. on Speech Communication and Technology, EUROSPEECH, 22-25 September 1997.

Summary

This paper compares two approaches to background model representation for a text-independent speaker verification task using Gaussian mixture models. We compare speaker-dependent background speaker sets to the use of a universal, speaker-independent background model (UBM). For the UBM, we describe how Bayesian adaptation can be used to derive claimant speaker models, providing a structure leading to significant computational savings during recognition. Experiments are conducted on the 1996 NIST Speaker Recognition Evaluation corpus, and it is clearly shown that a system using a UBM and Bayesian adaptation of claimant models produces superior performance compared to speaker-dependent background sets or the UBM with independent claimant models. In addition, the creation and use of a telephone handset-type detector and a procedure called hnorm are also described, which show further, large improvements in verification performance, especially under the difficult mismatched-handset conditions. This is believed to be the first use of a handset-type detector and explicit handset-type normalization for the speaker verification task.
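Relevance-MAP adaptation of UBM component means, the kind of Bayesian adaptation described above, can be sketched as follows for a diagonal-covariance mixture. The relevance factor r = 16 and the toy two-component UBM are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def map_adapt_means(features, ubm_means, ubm_covars, ubm_weights, r=16.0):
    """Adapt UBM component means toward a claimant's enrollment features,
    weighted by how much data each component is responsible for."""
    # posterior responsibility of each UBM component for each frame
    log_p = []
    for m, c, w in zip(ubm_means, ubm_covars, ubm_weights):
        diff = features - m
        ll = -0.5 * np.sum(diff ** 2 / c + np.log(2 * np.pi * c), axis=1)
        log_p.append(np.log(w) + ll)
    log_p = np.array(log_p)                    # shape (components, frames)
    gamma = np.exp(log_p - log_p.max(axis=0))
    gamma /= gamma.sum(axis=0)
    n = gamma.sum(axis=1)                      # soft count per component
    ex = gamma @ features / n[:, None]         # data mean per component
    alpha = n / (n + r)                        # data-driven adaptation weight
    return alpha[:, None] * ex + (1 - alpha[:, None]) * ubm_means

# Toy usage: enrollment data sits near component 0, so only it moves.
ubm_means = np.array([[0.0, 0.0], [5.0, 5.0]])
ubm_covars = np.ones((2, 2))
ubm_weights = np.array([0.5, 0.5])
features = np.full((100, 2), 0.5)
adapted = map_adapt_means(features, ubm_means, ubm_covars, ubm_weights)
```

The computational saving the paper notes comes from this coupling: at test time only the UBM's top-scoring components need re-scoring against the adapted model.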

Predicting, diagnosing, and improving automatic language identification performance

Published in:
5th European Conf. on Speech Communication and Technology, EUROSPEECH, 22-25 September 1997.

Summary

Language-identification (LID) techniques that use multiple single-language phoneme recognizers followed by n-gram language models have consistently yielded top performance at NIST evaluations. In our study of such systems, we have recently cut our LID error rate by modeling the output of n-gram language models more carefully. Additionally, we are now able to produce meaningful confidence scores along with our LID hypotheses. Finally, we have developed some diagnostic measures that can predict performance of our LID algorithms.
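The back-end scoring such a system performs, per-language n-gram models over phone sequences combined into a hypothesis with a softmax-style confidence, can be sketched as follows. The add-k smoothed bigram and the two-phone toy inventory are illustrative assumptions, not the authors' models.

```python
import math
from collections import defaultdict

def train_bigram(sequences, vocab_size, k=0.5):
    """Add-k smoothed bigram model over phone labels for one language;
    returns a scoring function for new phone sequences."""
    counts, totals = defaultdict(float), defaultdict(float)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    def logprob(seq):
        return sum(math.log((counts[(a, b)] + k) / (totals[a] + k * vocab_size))
                   for a, b in zip(seq, seq[1:]))
    return logprob

def identify(seq, models):
    """Hypothesized language plus a softmax confidence over model scores."""
    scores = {lang: lm(seq) for lang, lm in models.items()}
    mx = max(scores.values())
    z = sum(math.exp(s - mx) for s in scores.values())
    best = max(scores, key=scores.get)
    return best, math.exp(scores[best] - mx) / z

# Toy usage: two "languages" with opposite preferred phone bigrams.
models = {"lang_a": train_bigram([list("ababab")] * 3, vocab_size=2),
          "lang_b": train_bigram([list("bababa")] * 3, vocab_size=2)}
lang, confidence = identify(list("abab"), models)
```

In the real system the phone sequences come from multiple single-language phoneme recognizers, and the abstract's confidence scores come from more careful modeling of the n-gram score distributions than this raw softmax.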