Publications

Automatic parallelization with pMapper

Published in:
2005 IEEE Int. Conf. on Cluster Computing, 27-30 September 2005, pp. 46-51.

Summary

Algorithm implementation efficiency is key to delivering high-performance computing capabilities to demanding, high-throughput signal and image processing applications and simulations. Significant progress has been made in optimization of serial programs, but many applications require parallel processing, which brings with it the difficult task of determining efficient mappings of algorithms. The pMapper infrastructure addresses the problem of performance optimization of multistage MATLAB applications on parallel architectures. pMapper is an automatic performance tuning library written as a layer on top of the pMatlab parallel MATLAB toolbox. While pMatlab abstracts the message-passing interface, the responsibility of mapping numerical arrays falls on the user. Choosing the best mapping for a set of numerical arrays is a nontrivial task that requires significant knowledge of programming languages, parallel computing, and processor architecture. pMapper automates the task of map generation. This abstract addresses the design details of pMapper and presents preliminary results.
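
As a rough illustration of the mapping problem pMapper automates, the sketch below (in Python, as a hypothetical stand-in rather than the pMapper or pMatlab API) enumerates candidate processor grids for a 2-D array and selects the one with the lowest score under a toy cost model.

```python
# Conceptual sketch of automatic map selection, loosely inspired by pMapper.
# The function names and the cost model are illustrative assumptions.

def candidate_grids(n_procs):
    """Enumerate 2-D processor grids (rows x cols) that use all processors."""
    return [(r, n_procs // r) for r in range(1, n_procs + 1) if n_procs % r == 0]

def estimated_cost(shape, grid, flops_per_elem=1.0, comm_per_halo_elem=10.0):
    """Toy cost model: per-processor compute plus boundary-exchange communication."""
    rows, cols = shape
    pr, pc = grid
    local_rows, local_cols = rows / pr, cols / pc
    compute = local_rows * local_cols * flops_per_elem
    comm = 2 * (local_rows + local_cols) * comm_per_halo_elem
    return compute + comm

def choose_map(shape, n_procs):
    """Return the candidate processor grid with the lowest estimated cost."""
    return min(candidate_grids(n_procs), key=lambda g: estimated_cost(shape, g))

if __name__ == "__main__":
    # For a 4096 x 1024 array on 8 processors, pick a distribution automatically.
    print(choose_map((4096, 1024), 8))
```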

Parallel out-of-core Matlab for extreme virtual memory (Abstract)

Published in:
2005 IEEE Int. Conf. on Cluster Computing, 27-30 September 2005, p. 482 [abstract only].

Summary

Large data sets that cannot fit in memory can be addressed with out-of-core methods, which use memory as a "window" to view a section of the data stored on disk at a time. The Parallel Matlab for eXtreme Virtual Memory (pMatlab XVM) library adds out-of-core extensions to the Parallel Matlab (pMatlab) library. We have applied pMatlab XVM to the DARPA High Productivity Computing Systems' HPCchallenge FFT benchmark. The benchmark was run using several different implementations: C+MPI, pMatlab, pMatlab hand coded for out-of-core, and pMatlab XVM. These experiments found that 1) the performance of the C+MPI and pMatlab versions was comparable; 2) the out-of-core versions deliver 80% of the performance of the in-core versions; 3) the out-of-core versions were able to perform a 1 terabyte (64 billion point) FFT; and 4) the pMatlab XVM program was smaller, easier to implement and verify, and more efficient than its hand-coded equivalent. We are transitioning this technology to several DoD signal processing applications and plan to apply pMatlab XVM to the full HPCchallenge benchmark suite. Using next-generation hardware, problem sizes 100 to 1000 times larger should be feasible.
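
To make the out-of-core "window" idea concrete, here is a minimal Python sketch using a disk-backed array. This is not pMatlab XVM; the file name, sizes, and the single blocked-FFT stage are illustrative assumptions.

```python
# Illustrative out-of-core "window" processing (not pMatlab XVM).
# A large complex vector lives on disk; only one block is in memory at a time.
import numpy as np

N = 1 << 22          # total points (kept small for a demo)
BLOCK = 1 << 18      # size of the in-core window

# Create a disk-backed array and fill it one window at a time.
data = np.memmap("xvm_demo.dat", dtype=np.complex64, mode="w+", shape=(N,))
rng = np.random.default_rng(0)
for start in range(0, N, BLOCK):
    data[start:start + BLOCK] = rng.standard_normal(BLOCK).astype(np.complex64)

# First stage of a blocked FFT: transform each in-core window independently.
# (A full out-of-core FFT would add a twiddle/transpose stage between passes.)
for start in range(0, N, BLOCK):
    window = np.asarray(data[start:start + BLOCK])   # bring one block in core
    data[start:start + BLOCK] = np.fft.fft(window).astype(np.complex64)

data.flush()
```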

Introduction to parallel programming and pMatlab v2.0

Published in:
Lincoln Laboratory external web site, [2005].

Summary

The computational demands of software continue to outpace the capacities of processor and memory technologies, especially in scientific and engineering programs. One option to improve performance is parallel processing. However, despite decades of research and development, writing parallel programs continues to be difficult. This is especially the case for scientists and engineers who have limited backgrounds in computer science. MATLAB®, due to its ease of use compared to other programming languages like C and Fortran, is one of the most popular languages for implementing numerical computations, thus making it an excellent platform for developing an accessible parallel computing framework. MIT Lincoln Laboratory has developed two libraries, pMatlab and MatlabMPI, that enable parallel programming with MATLAB in a simple fashion accessible to non-computer scientists. This document overviews basic concepts in parallel programming and introduces pMatlab.

Writing parallel parameter sweep applications with pMATLAB

Published in:
Lincoln Laboratory external web site, [2005].

Summary

Parameter sweep applications execute the same piece of code multiple times with unique sets of input parameters. This type of application is extremely amenable to parallelization. This document describes how to parallelize parameter sweep applications with pMATLAB by introducing a simple serial parameter sweep application written in MATLAB, then parallelizing the application using pMATLAB.
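
As a language-neutral illustration of this pattern, the Python sketch below spreads a toy parameter sweep across a pool of worker processes; the simulate() function and parameter values are hypothetical placeholders, not the pMATLAB example from the document.

```python
# Illustrative parallel parameter sweep (a Python stand-in for the pMATLAB pattern).
from itertools import product
from multiprocessing import Pool

def simulate(params):
    """Run one independent trial for a single (gain, threshold) pair."""
    gain, threshold = params
    return gain * threshold  # placeholder for the real computation

if __name__ == "__main__":
    gains = [0.5, 1.0, 2.0]
    thresholds = [0.1, 0.2, 0.4, 0.8]
    sweep = list(product(gains, thresholds))   # every unique input combination

    # Each worker processes a disjoint subset of the sweep, mirroring how a
    # distributed index range is split among parallel MATLAB processes.
    with Pool(processes=4) as pool:
        results = pool.map(simulate, sweep)

    for params, result in zip(sweep, results):
        print(params, result)
```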

Assessment of aviation delay reduction benefits for nowcasts and short term forecasts

Published in:
World Weather Research Program Symp. on Nowcasting and Very Short Term Forecasts, 5-9 September 2005.

Summary

This paper investigates methods for quantifying aviation convective weather delay reduction benefits for nowcasts and short term forecasts.

FAA tactical weather forecasting in the United States National Airspace

Published in:
World Weather Research Program Symp. on Nowcasting and Very Short Term Forecasts, 5-9 September 2005.

Summary

This paper describes the Tactical 0-2 hour Convective Weather Forecast (CWF) algorithm developed by MIT Lincoln Laboratory for the FAA. We describe the algorithm, focus on the key scientific developments, and discuss future directions.

Using a diagnostic corpus of C programs to evaluate buffer overflow detection by static analysis tools

Published in:
10th European Software Engineering Conf., 5-9 September 2005.

Summary

A corpus of 291 small C-program test cases was developed to evaluate static and dynamic analysis tools designed to detect buffer overflows. The corpus was designed and labeled using a new, comprehensive buffer overflow taxonomy. It provides a benchmark to measure detection, false alarm, and confusion rates of tools, and also suggests areas for tool enhancement. Experiments with five tools demonstrate that some modern static analysis tools can accurately detect overflows in simple test cases but that others have serious limitations. For example, PolySpace demonstrated a superior detection rate, missing only one detection. Its performance could be enhanced if extremely long run times were reduced, and false alarms were eliminated for some C library functions. ARCHER performed well with no false alarms whatsoever. It could be enhanced by improving inter-procedural analysis and handling of C library functions. Splint detected significantly fewer overflows and exhibited the highest false alarm rate. Improvements in loop handling and reductions in false alarm rate would make it a much more useful tool. UNO had no false alarms, but missed overflows in roughly half of all test cases. It would need improvement in many areas to become a useful tool. BOON provided the worst performance. It did not detect overflows well in string functions, even though this was a design goal.
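
For a sense of how detection and false alarm rates over such a corpus can be tabulated, the Python sketch below scores a tool against labeled test cases; the data layout, case names, and line-number labels are assumptions for illustration, not the paper's actual scoring harness.

```python
# Sketch of scoring an analysis tool against a labeled corpus (layout assumed).
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    true_overflow_lines: set   # ground-truth buffer overflow locations
    reported_lines: set        # lines flagged by the analysis tool

def score(cases):
    """Return (detection rate, false alarm rate) over the corpus."""
    detections = sum(1 for c in cases if c.true_overflow_lines & c.reported_lines)
    false_alarms = sum(1 for c in cases if c.reported_lines - c.true_overflow_lines)
    return detections / len(cases), false_alarms / len(cases)

if __name__ == "__main__":
    corpus = [
        TestCase("strcpy_basic", {12}, {12}),   # correct detection
        TestCase("off_by_one",   {20}, set()),  # missed overflow
        TestCase("safe_loop",    set(), {7}),   # false alarm
    ]
    print(score(corpus))
```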

Two experiments comparing reading with listening for human processing of conversational telephone speech

Published in:
6th Annual Conf. of the Int. Speech Communication Association, INTERSPEECH 2005, 4-8 September 2005.

Summary

We report on results of two experiments designed to compare subjects' ability to extract information from audio recordings of conversational telephone speech (CTS) with their ability to extract information from text transcripts of these conversations, with and without the ability to hear the audio recordings. Although progress in machine processing of CTS speech is well documented, human processing of these materials has not been as well studied. These experiments compare subjects' processing time and comprehension of widely available CTS data in audio and written formats -- one experiment involves careful reading and one involves visual scanning for information. We observed a very modest improvement using transcripts compared with the audio-only condition for the careful reading task (a speed-up by a factor of 1.2) and a much more dramatic improvement using transcripts in the visual scanning task (a speed-up by a factor of 2.9). The implications of the experiments are twofold: (1) we expect to see similar gains in human productivity for comparable applications outside the laboratory environment and (2) the gains can vary widely, depending on the specific tasks involved.

Automated extraction of weather variables from camera imagery

Published in:
Proc. of 2005 Mid-Continent Transportation Research Symp., 18-19 August 2005.

Summary

Thousands of traffic and safety monitoring cameras are deployed or are being deployed all across the country and throughout the world. These cameras serve a wide range of uses, from monitoring building access to adjusting timing cycles of traffic lights at clogged intersections. Currently, these images are typically viewed on a wall of monitors in a traffic operations or security center, where observers manually monitor potentially hazardous or congested conditions and notify the appropriate authorities. However, the proliferation of camera imagery taxes the ability of the manual observer to track and respond to all incidents. In addition, the images contain a wealth of information, including visibility, precipitation type, road conditions, camera outages, etc., that often goes unreported because these variables are not always critical or go undetected. Camera deployments continue to expand, and the corresponding rapid increases in both the volume and complexity of camera imagery demand that automated algorithms be developed to condense the discernible information into a form that can be easily used operationally. MIT Lincoln Laboratory (MIT/LL), under funding from the Federal Highway Administration (FHWA), is investigating new techniques to extract weather and road condition parameters from standard traffic camera imagery. To date, work has focused on developing an algorithm to measure atmospheric visibility and proving the algorithm concept. The initial algorithm examines the natural edges within the image (the horizon, tree lines, roadways, permanent buildings, etc.) and performs a comparison of each image with a historical composite image. This comparison enables the system to determine the visibility in the direction of the sensor by detecting which edges are visible and which are not. A primary goal of the automated camera imagery feature extraction system is to ingest digital imagery with limited site-specific information such as location, height, angle, and visual extent, thereby making the system easier for users to implement. There are, of course, many challenges in providing a reliable automated estimate of the visibility under all conditions (camera blockage/movement, dirt/raindrops on the lens, etc.), and the system attempts to compensate for these situations. This paper details the work to date on the visibility algorithm and defines a path for further development of the overall system.
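
A minimal sketch of the edge-comparison idea, assuming a fixed camera and a precomputed clear-weather composite, is given below in Python with OpenCV; the file names and Canny thresholds are arbitrary assumptions, and this is not the MIT/LL algorithm itself.

```python
# Conceptual sketch of edge-based visibility estimation (not the MIT/LL algorithm).
import cv2
import numpy as np

def edge_map(image_path, lo=50, hi=150):
    """Return a boolean edge mask for a grayscale image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Canny(gray, lo, hi) > 0

# Composite of edges present in clear-weather imagery from the same camera.
reference_edges = edge_map("clear_day_composite.png")

# Current frame: measure what fraction of the expected edges are still visible.
current_edges = edge_map("current_frame.png")
visible_fraction = (np.count_nonzero(reference_edges & current_edges)
                    / max(1, np.count_nonzero(reference_edges)))

# A low surviving-edge fraction suggests reduced atmospheric visibility.
print(f"fraction of reference edges visible: {visible_fraction:.2f}")
```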

Dynamic buffer overflow detection

Published in:
Workshop on Defining the State of the Art in Security Software Tools, 10-11 August 2005.

Summary

The capabilities of seven dynamic buffer overflow detection tools (Chaperon, Valgrind, CCured, CRED, Insure++, ProPolice, and TinyCC) are evaluated in this paper. These tools employ different approaches to runtime buffer overflow detection and range from commercial products to open-source gcc enhancements. A comprehensive test suite was developed, consisting of specifically designed test cases and model programs containing real-world vulnerabilities. Insure++, CCured, and CRED provide the highest buffer overflow detection rates, but only CRED provides an open-source, extensible, and scalable solution to detecting buffer overflows. Other tools did not detect off-by-one errors, did not scale to large programs, or performed poorly on complex programs.