Publications

Parallel VSIPL++: an open standard software library for high-performance parallel signal processing

Published in:
Proc. IEEE, Vol. 93, No. 2, February 2005, pp. 313-330.

Summary

Real-time signal processing consumes the majority of the world's computing power. Increasingly, programmable parallel processors are used to address a wide variety of signal processing applications (e.g., scientific, video, wireless, medical, communication, encoding, radar, sonar, and imaging). In programmable systems, the major challenge is no longer hardware but software. Specifically, the key technical hurdle lies in allowing the user to write programs at a high level, while still achieving performance and preserving the portability of the code across parallel computing hardware platforms. The Parallel Vector, Signal, and Image Processing Library (Parallel VSIPL++) addresses this hurdle by providing high-level C++ array constructs, a simple mechanism for mapping data and functions onto parallel hardware, and a community-defined portable interface. This paper presents an overview of the Parallel VSIPL++ standard as well as a deeper description of the technical foundations and expected performance of the library. Parallel VSIPL++ supports adaptive optimization at many levels. The C++ arrays are designed to support automatic hardware specialization by the compiler. The computation objects (e.g., fast Fourier transforms) are built with explicit setup and run stages to allow for runtime optimization. Parallel arrays and functions in Parallel VSIPL++ also support explicit setup and run stages, which are used to accelerate communication operations. The parallel mapping mechanism provides an external interface that allows optimal mappings to be generated offline and read into the system at runtime. Finally, the standard has been developed in collaboration with high-performance embedded computing vendors and is compatible with their proprietary approaches to achieving performance.
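
As a rough illustration of the setup/run split described above, the sketch below pays the expensive planning and allocation cost once so that each run call stays cheap. It is a minimal Python analogue with a hypothetical FftPlan class, not the VSIPL++ C++ API.

import numpy as np

class FftPlan:
    """Hypothetical illustration of the setup/run pattern: expensive
    decisions are made once at setup time, and the run stage only
    executes the precomputed plan."""

    def __init__(self, length):
        # Setup stage: fix the transform size and allocate the output
        # buffer once, so repeated calls do no allocation or planning.
        self.length = length
        self.out = np.empty(length, dtype=np.complex128)

    def run(self, x):
        # Run stage: execute the transform into the preallocated buffer.
        assert len(x) == self.length
        self.out[:] = np.fft.fft(x)
        return self.out

plan = FftPlan(1024)                                # setup once
for _ in range(100):
    spectrum = plan.run(np.random.randn(1024))      # run many times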

Application of a Relative Development Time Productivity Metric to Parallel Software Development

Published in:
SE-HPCS '05: Proceedings of the second international workshop on Software engineering for high performance computing system applications

Summary

Evaluation of High Performance Computing (HPC) systems should take into account software development time productivity in addition to hardware performance, cost, and other factors. We propose a new metric for HPC software development time productivity, defined as the ratio of relative runtime performance to relative programmer effort. This formula has been used to analyze several HPC benchmark codes and classroom programming assignments. The results of this analysis show consistent trends for various programming models. This method enables a high-level evaluation of development time productivity for a given code implementation, which is essential to the task of estimating cost associated with HPC software development.
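
Since the metric is defined as a simple ratio, it can be computed directly. The sketch below is a minimal illustration; the speedup and effort figures in the example are placeholders, not results from the paper.

def relative_productivity(speedup, relative_effort):
    """Relative development time productivity: the ratio of relative
    runtime performance (speedup over a baseline implementation) to
    relative programmer effort (effort over the same baseline)."""
    return speedup / relative_effort

# Hypothetical example: a parallel code that runs 8x faster than the
# serial baseline but took 2.5x the effort to develop.
print(relative_productivity(speedup=8.0, relative_effort=2.5))  # 3.2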

An analysis of wake vortex lidar measurements at LaGuardia Airport

Published in:
Project Report ATC-318, MIT Lincoln Laboratory

Summary

The majority of research into the wake vortex hazard has concentrated on the in-trail encounter scenario for arrivals. At LaGuardia Airport, wake vortex spacings are applied to arrivals on runway 22 following a heavy departure on the intersecting runway 31, resulting in delay and increased workload for controllers. Previous analysis of this problem led to a recommendation for a measurement campaign to collect data on the behavior of wake vortices generated by departing heavy aircraft. In April of 2004, MIT Lincoln Laboratory deployed its wake vortex lidar system to measure such wakes at LaGuardia. Additionally, wind speed and turbulence data were collected with the hope of correlating wake behavior with the local atmospheric conditions. Analysis of the lidar data indicates that the system was able to acquire and track vortices from departures, a task not proven prior to this deployment. Further, vortices were seen to transport toward the threshold of runway 22, verifying an assumption based on analysis of the winds that wake transport is not a solution in this case. The quantity and type of data collected were insufficient to formulate a clear relationship between atmospheric turbulence and vortex decay. However, it may be possible to develop such a model by exploiting the data gathered during previous lidar deployments.

The MIT Lincoln Laboratory RT-04F diarization systems: applications to broadcast audio and telephone conversations

Published in:
NIST Rich Transcription Workshop, 8-11 November 2004.

Summary

Audio diarization is the process of annotating an input audio channel with information that attributes (possibly overlapping) temporal regions of signal energy to their specific sources. These sources can include particular speakers, music, background noise sources, and other signal source/channel characteristics. Diarization has utility in making automatic transcripts more readable and in searching and indexing audio archives. In this paper we describe the systems developed by MITLL and used in the DARPA EARS Rich Transcription Fall 2004 (RT-04F) speaker diarization evaluation. The primary system is based on a new proxy speaker model approach, and the secondary system follows a more standard BIC-based clustering approach. We present experiments analyzing the performance of the systems and present a cross-cluster recombination approach that significantly improves performance. In addition, we present results from applying our system to a telephone-speech, summed-channel speaker detection task.
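
A minimal sketch of the delta-BIC merge criterion underlying a standard BIC-based clustering stage is given below. This is the generic textbook form with single full-covariance Gaussian cluster models, not the MITLL implementation.

import numpy as np

def delta_bic(x, y, lam=1.0):
    """Delta-BIC between two feature segments x and y (frames x dims),
    each modeled by a single full-covariance Gaussian. Positive values
    favor keeping the segments separate; negative values favor merging."""
    n1, d = x.shape
    n2, _ = y.shape
    n = n1 + n2
    logdet = lambda s: np.linalg.slogdet(np.cov(s, rowvar=False))[1]
    r = 0.5 * (n * logdet(np.vstack([x, y]))
               - n1 * logdet(x) - n2 * logdet(y))
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
    return r - penalty

# Agglomerative clustering then repeatedly merges the pair of clusters
# with the most negative delta-BIC until no negative pair remains.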

Two new experimental protocols for measuring speech transcript readability for timed question-answering tasks

Published in:
Proc. DARPA EARS Rich Transcription Workshop, 8-11 November 2004.

Summary

This paper reports results from two recent psycholinguistic experiments that measure the readability of four types of speech transcripts for the DARPA EARS Program. The two key questions in these experiments are (1) how much speech transcript cleanup aids readability and (2) how much the type of cleanup matters. We employ two variants of the four-part readability figure of merit defined at the RT02 workshop and described in our Eurospeech 2003 paper [4], namely: accuracy of answers to comprehension questions, reaction time for passage reading, reaction time for question answering, and a subjective rating of passage difficulty. The first protocol employs a question-answering task under time pressure. The second employs a self-paced line-by-line paradigm. Both protocols yield similar results: all three types of cleanup in the experiment improve readability by 5-10%, but the self-paced reading protocol needs far fewer subjects for statistical significance.

MIMO radar theory and experimental results

Published in:
38th Asilomar Conf. on Signals, Systems and Computers, Vol. 2, 7-10 November 2004, pp. 300-304.

Summary

The continuing progress of Moore's law has enabled the development of radar systems that simultaneously transmit and receive multiple coded waveforms from multiple phase centers and process them in ways that have been unavailable in the past. The signals available for processing from these Multiple-Input Multiple-Output (MIMO) radar systems appear as spatial samples corresponding to the convolution of the transmit and receive aperture phase centers. The samples provide the ability to excite and measure the channel that consists of the transmit/receive propagation paths, the target, and incidental scattering or clutter. These signals may be processed and combined to form an adaptive coherent transmit beam, or to search an extended area with high resolution in a single dwell. Adaptively combining the received data provides the effect of adaptively controlling the transmit beamshape, and the spatial extent provides improved track-while-scan accuracy. This paper describes the theory behind the improved surveillance radar performance and illustrates this with measurements from experimental MIMO radars.
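
The "convolution of the transmit and receive aperture phase centers" can be made concrete with a small numerical sketch; the element positions below are illustrative, not those of the experimental systems.

import numpy as np

# 1-D example, positions in half-wavelengths: the virtual array seen by
# a MIMO radar is every pairwise sum of a transmit and a receive
# phase-center position (the convolution of the two apertures).
tx = np.array([0.0, 4.0, 8.0])          # sparse transmit phase centers
rx = np.array([0.0, 1.0, 2.0, 3.0])     # filled receive phase centers

virtual = (tx[:, None] + rx[None, :]).ravel()
print(np.sort(virtual))
# 3 x 4 = 12 virtual samples spanning a filled 12-element aperture,
# larger than either physical aperture alone.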

Robust collaborative multicast service for airborne command and control environment

Summary

RCM (Robust Collaborative Multicast) is a communication service designed to support collaborative applications operating in dynamic, mission-critical environments. RCM implements a set of well-specified message ordering and reliability properties that balance two conflicting goals: a) providing a low-latency, highly available, reliable communication service, and b) guaranteeing global consistency in how different participants perceive their communication. Both of these goals are important for collaborative applications. In this paper, we describe RCM, its modular and flexible design, and a collection of simple, lightweight protocols that implement it. We also report on several experiments with an RCM prototype in a test-bed environment.

Testing static analysis tools using exploitable buffer overflows from open source code

Published in:
Proc. 12th Int. Symp. on Foundations of Software Engineering, ACM SIGSOFT, 31 October - 6 November 2004, pp. 97-106.

Summary

Five modern static analysis tools (ARCHER, BOON, PolySpace C Verifier, Splint, and UNO) were evaluated using source code examples containing 14 exploitable buffer overflow vulnerabilities found in various versions of Sendmail, BIND, and WU-FTPD. Each code example included a "BAD" case with and an "OK" case without buffer overflows. Buffer overflows varied and included stack, heap, bss, and data buffers; access above and below buffer bounds; access using pointers, indices, and functions; and scope differences between buffer creation and use. Detection rates for the "BAD" examples were low except for PolySpace and Splint, which had average detection rates of 87% and 57%, respectively. However, average false alarm rates were high, roughly 50%, for these two tools. On patched programs, these two tools produced one warning for every 12 to 46 lines of source code, and neither tool accurately distinguished between vulnerable and patched code.
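
A sketch of the scoring implied by these figures is shown below, assuming each vulnerability contributes one BAD and one OK case and that a tool either warns on it or not. The tallies in the example are placeholders, not the paper's results.

def detection_rate(flags_on_bad):
    """Fraction of BAD (vulnerable) cases on which the tool warned."""
    return sum(flags_on_bad) / len(flags_on_bad)

def false_alarm_rate(flags_on_ok):
    """Fraction of OK (patched) cases the tool still warned about."""
    return sum(flags_on_ok) / len(flags_on_ok)

# Hypothetical tallies for one tool over the 14 model programs.
flags_on_bad = [True] * 12 + [False] * 2   # 12 of 14 overflows detected
flags_on_ok  = [True] * 7  + [False] * 7   # 7 of 14 patched cases flagged
print(detection_rate(flags_on_bad), false_alarm_rate(flags_on_ok))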

Compact solid-state sources and their applications

Published in:
SPIE Vol. 5620, Solid State Laser Technologies and Femtosecond Phenomena, 25-28 October 2004, pp. 155-169.

Summary

Coherent solid-state optical sources based on Nd:YAG/Cr4+:YAG passively Q-switched microchip lasers cover the spectral range from 5000 to 200 nm, producing multikilohertz pulse trains with pulse durations as short as 100 ps and peak powers up to 1 MW. The wavelength diversity is achieved through harmonic conversion, parametric conversion, Raman conversion, and microchip-laser-pumped miniature gain-switched lasers. In all cases, the optical heads have been packaged in a volume of less than 0.5 liters. These compact, robust devices have the proven capability to take what were complicated laser-based experiments out of the laboratory and into the field, enabling applications in diverse areas. The short pulses are useful for high-precision ranging using time-of-flight techniques, with applications in 3-dimensional imaging, target identification, and robotics. The short pulse durations and ideal mode properties are also useful for material characterization. The high peak powers can be focused to photoablate material, with applications in laser-induced breakdown spectroscopy and micromachining. Ultraviolet systems have been used to perform fluorescence spectroscopy for applications including environmental monitoring and the detection of biological aerosols. Systems based on passively Q-switched microchip lasers, like the lasers themselves, are small, robust, and potentially low cost, making them ideally suited for field applications.
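
The time-of-flight ranging mentioned above reduces to a one-line relation between round-trip time and range; a minimal sketch of the arithmetic, using the 100 ps pulse duration quoted above to indicate the scale of the achievable range resolution:

C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_seconds):
    """One-way range implied by a round-trip time-of-flight measurement."""
    return C * round_trip_seconds / 2.0

# A 100 ps round-trip interval corresponds to about 1.5 cm of range,
# which sets the scale of the resolution achievable with 100 ps pulses.
print(range_from_tof(100e-12))   # ~0.015 m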

Remotely piloted vehicles in civil airspace: requirements and analysis methods for the traffic alert and collision avoidance system (TCAS) and see-and-avoid systems

Published in:
Proc. of the 23rd Digital Avionics Systems Conf., DASC, Vol. 2, 24-28 October 2004, pp. 12.D.1-1 - 12.D.1-14.

Summary

The integration of Remotely Piloted Vehicles (RPVs) into civil airspace will require new methods of ensuring aircraft separation. This paper discusses issues affecting requirements for RPV traffic avoidance systems and for performing the safety evaluations that will be necessary to certify such systems. The paper outlines current ways in which traffic avoidance is assured depending on the type of airspace and type of traffic that is encountered. Alternative methods for RPVs to perform traffic avoidance are discussed, including the potential use of new see-and-avoid sensors or the Traffic Alert and Collision Avoidance System (TCAS). Finally, the paper outlines an established safety evaluation process that can be adapted to assure regulatory authorities that RPVs meet level-of-safety requirements.