Publications


Using leader-based communication to improve the scalability of single-round group membership algorithms

Published in:
IPDPS 2005: 19th Int. Parallel and Distributed Processing Symp., 4-8 April 2005, pp. 280-287.

Summary

Sigma, the first single-round group membership (GM) algorithm, was recently introduced and demonstrated to operate consistently with theoretical expectations in a simulated WAN environment. Sigma achieved a quality of membership configurations similar to that of existing algorithms but required fewer message-exchange rounds. We now consider Sigma in terms of scalability. Sigma involves all-to-all (A2A) communication among members. A2A protocols have been shown to perform worse than leader-based (LB) protocols in certain networks, owing to greater message overhead and a higher likelihood of message loss. Thus, although LB protocols often involve additional communication steps, they can be more efficient in practice, particularly in fault-prone networks with large numbers of participating nodes. In this paper, we present Leader-Based Sigma, which transforms the original all-to-all version into a more scalable centralized communication scheme, and we discuss the rounds-vs.-messages tradeoff involved in optimizing GM algorithms for deployment in large-scale, fault-prone, dynamic network environments.
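
The scalability argument reduces to a simple message count. A minimal sketch (illustrative only, not from the paper) comparing per-round message overhead for n members:

# Illustrative message-count comparison (not from the paper): per-round
# overhead of all-to-all vs. leader-based communication among n members.
def a2a_messages(n):
    # Every member sends to every other member: O(n^2) messages per round.
    return n * (n - 1)

def leader_based_messages(n):
    # Members report to the leader and the leader replies with a decision:
    # O(n) messages per round, at the cost of an extra communication step.
    return 2 * (n - 1)

for n in (10, 100, 1000):
    print(n, a2a_messages(n), leader_based_messages(n))

For n = 1000, A2A needs 999,000 messages per round versus 1,998 for the leader-based scheme, which is why the extra communication step can pay off in large, fault-prone networks.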

An annotated review of past papers on attack graphs

Published in:
MIT Lincoln Laboratory Report IA-1

Summary

This report reviews past research papers that describe how to construct attack graphs, how to use them to improve security of computer networks, and how to use them to analyze alerts from intrusion detection systems. Two commercial systems are described [1, 2], and a summary table compares important characteristics of past research studies. For each study, information is provided on the number of attacker goals, how graphs are constructed, sizes of networks analyzed, how well the approach scales to larger networks, and the general approach. Although research has made significant progress in the past few years, no system has analyzed networks with more than 20 hosts, and computation for most approaches scales poorly and would be impractical for networks with even a few hundred hosts. Current approaches are also limited because many require extensive and difficult-to-obtain details on attacks, many assume that host-to-host reachability information between all hosts is already available, and many produce an attack graph but do not automatically generate recommendations from that graph. Researchers have suggested promising approaches to alleviate some of these limitations, including grouping hosts to improve scaling, using worst-case default values for unknown attack details, and symbolically analyzing attack graphs to generate recommendations that improve security for critical hosts. Future research should explore these and other approaches to develop attack graph construction and analysis algorithms that can be applied to large enterprise networks.
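
To make the construction idea concrete, here is a minimal attack-graph sketch; the hosts, edges, and exploit data are hypothetical and stand in for the reachability and vulnerability details the reviewed systems consume:

# Minimal attack-graph sketch (illustrative; not from any reviewed system).
# Hosts are nodes; an edge (a, b) means an attacker on host a can exploit
# some vulnerability to gain control of host b.
from collections import deque

exploits = {
    "internet": ["web_server"],               # hypothetical data
    "web_server": ["db_server", "file_server"],
    "file_server": ["admin_host"],
}

def attack_paths(start, goal):
    # Enumerate simple attack paths from start to goal, breadth-first.
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            paths.append(path)
            continue
        for nxt in exploits.get(path[-1], []):
            if nxt not in path:               # avoid cycles
                queue.append(path + [nxt])
    return paths

print(attack_paths("internet", "admin_host"))
# [['internet', 'web_server', 'file_server', 'admin_host']]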

Speaker adaptive cohort selection for Tnorm in text-independent speaker verification

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 1, 19-23 March 2005, pp. I-741 - I-744.

Summary

In this paper we discuss an extension to the widely used score normalization technique of test normalization (Tnorm) for text-independent speaker verification. We present a new method, speaker Adaptive-Tnorm, that offers advantages over standard Tnorm by adapting the cohort speaker set to the target model. Examples of this improvement using the 2004 NIST SRE data are also presented.
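
A hedged sketch of the underlying idea, assuming the usual Tnorm formulation; the distance-based cohort selection below is an assumption and may differ from the paper's exact criterion:

# Hedged sketch of Tnorm and a speaker-adaptive cohort variant.
import numpy as np

def tnorm(raw_score, cohort_scores):
    # Standard Tnorm: normalize the raw score by the mean and standard
    # deviation of the test utterance's scores against the cohort models.
    mu, sigma = np.mean(cohort_scores), np.std(cohort_scores)
    return (raw_score - mu) / sigma

def adaptive_cohort(target_model, cohort_models, k):
    # Adaptive-Tnorm idea: keep only the k cohort models closest to the
    # target model (model vectors and the distance metric are hypothetical).
    d = [np.linalg.norm(target_model - m) for m in cohort_models]
    return [cohort_models[i] for i in np.argsort(d)[:k]]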

Measuring human readability of machine generated text: three case studies in speech recognition and machine translation

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 5, 19-23 March 2005, pp. V-1009 - V-1012.

Summary

We present highlights from three experiments that test the readability of current state-of-the-art system output from (1) an automated English speech-to-text (STT) system, (2) a text-based Arabic-to-English machine translation (MT) system, and (3) an audio-based Arabic-to-English MT process. We measure readability in terms of reaction time and passage comprehension in each case, applying standard psycholinguistic testing procedures and a modified version of the standard Defense Language Proficiency Test for Arabic called the DLPT*. We learned that: (1) subjects are slowed down about 25% when reading system STT output, (2) text-based MT systems enable an English speaker to pass Arabic Level 2 on the DLPT*, and (3) audio-based MT systems do not enable English speakers to pass Arabic Level 2. We intend for these generic measures of readability to predict performance on more application-specific tasks.
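
The reaction-time measure reduces to a relative slowdown. A minimal sketch; the 25% figure is the paper's finding, but the raw reading times below are made-up placeholders:

# Sketch of the reaction-time readability measure.
def percent_slowdown(rt_system, rt_reference):
    # Relative slowdown reading system output vs. a reference transcript.
    return 100.0 * (rt_system - rt_reference) / rt_reference

print(percent_slowdown(rt_system=150.0, rt_reference=120.0))  # 25.0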

The 2004 MIT Lincoln Laboratory speaker recognition system

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 1, 19-23 March 2005, pp. I-177 - I-180.

Summary

The MIT Lincoln Laboratory submission for the 2004 NIST Speaker Recognition Evaluation (SRE) was built upon seven core systems using speaker information from short-term acoustics, pitch and duration prosodic behavior, and phoneme and word usage. These different levels of information were modeled and classified using Gaussian Mixture Models, Support Vector Machines, and N-gram language models, and were combined using a single-layer perceptron fuser. The 2004 SRE used a new multi-lingual, multi-channel speech corpus that provided a challenging speaker detection task for the above systems. In this paper we describe the core systems used and provide an overview of their performance on the 2004 SRE detection tasks.
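
A single-layer perceptron fuser is a weighted sum of the per-system scores passed through a sigmoid. A hedged sketch; the weights and scores below are placeholders, as the actual fuser was trained on development data:

# Hedged sketch of a single-layer perceptron fuser over seven system scores.
import numpy as np

def fuse(system_scores, weights, bias):
    # One detection score from the seven core-system scores.
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, system_scores) + bias)))

scores = np.array([0.3, 1.2, -0.5, 0.8, 0.1, 0.9, -0.2])  # one per core system
w = np.ones(7) / 7.0                                       # placeholder weights
print(fuse(scores, w, bias=0.0))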

Design considerations and results for an overlapped subarray radar antenna

Summary

Overlapped subarray networks produce flat-topped sector patterns with low sidelobes that suppress grating lobes outside the main beam of the subarray pattern. They are typically used in limited-scan applications, where it is desired to minimize the number of controls required to steer the beam. However, the architecture of an overlapped subarray antenna includes many signal crossovers and a wide variation in splitting/combining ratios, which make it difficult to maintain required error tolerances. This paper presents the design considerations and results for an overlapped subarray radar antenna, including a custom subarray weighting function and the corresponding circuit design and fabrication. Measured pattern results for a prototype design are shown and compared with the desired patterns.
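
As a rough illustration of where flat-topped patterns come from, a sinc amplitude taper across a subarray yields an approximately flat-topped sector pattern; the sketch below is a generic textbook example, not the paper's custom weighting function:

# Generic illustration (not the paper's design): a sinc amplitude taper
# across a subarray produces an approximately flat-topped sector pattern.
import numpy as np

M, d = 32, 0.5                    # elements per subarray; spacing in wavelengths
n = np.arange(M) - (M - 1) / 2    # element indices centered on the subarray
width = 0.25                      # sector half-width in u = sin(theta), for d = 0.5
w = np.sinc(width * n)            # sinc taper -> approximately flat-top pattern

u = np.linspace(-1.0, 1.0, 1001)
af = np.abs([(w * np.exp(2j * np.pi * d * n * ui)).sum() for ui in u])
af_db = 20.0 * np.log10(af / af.max())   # flat top over roughly |u| < 0.25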

Evaluating static analysis tools for detecting buffer overflows in C code

Published in:
Thesis (MLA)--Harvard University, 2005.

Summary

This project evaluated five static analysis tools using a diagnostic test suite to determine their strengths and weaknesses in detecting a variety of buffer overflow flaws in C code. Detection, false alarm, and confusion rates were measured, along with execution time. PolySpace demonstrated a superior detection rate on the basic test suite, missing only one out of a possible 291 detections. It may benefit from improving its treatment of signal handlers and reducing both its false alarm rate (particularly for C library functions) and its execution time. ARCHER performed quite well with no false alarms whatsoever; a few key enhancements, such as in its interprocedural analysis and handling of C library functions, would boost its detection rate and should improve its performance on real-world code. Splint detected significantly fewer overflows and exhibited the highest false alarm rate; improvements in its loop handling and reductions in its false alarm rate would make it a much more useful tool. UNO had no false alarms but missed a broad variety of overflows, amounting to nearly half of the possible detections in the test suite; it would need improvement in many areas to become a very useful tool. BOON was clearly at the back of the pack, failing to perform well even on the subset of test cases where it could have been expected to function. The project also provides a buffer overflow taxonomy, along with a test suite generator and other tools, that can be used by others to evaluate code analysis tools with respect to buffer overflow detection.
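
The core evaluation metrics reduce to two ratios. A minimal sketch; only the 290-of-291 PolySpace figure comes from the text, and the false-alarm and clean-case counts are hypothetical:

# Minimal scoring sketch for comparing the tools.
def rates(detected, total_flaws, false_alarms, clean_cases):
    # Detection rate over seeded flaws; false alarm rate over flaw-free cases.
    return detected / total_flaws, false_alarms / clean_cases

det, fa = rates(detected=290, total_flaws=291, false_alarms=12, clean_cases=291)
print(f"detection rate {det:.3f}, false alarm rate {fa:.3f}")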

Nanocomposite approaches toward pellicles for 157-nm lithography

Published in:
J. Microlith., Microfab., Microsyst., Vol. 4, No. 1, January-March 2005, pp. 013004-1 - 013004-6.

Summary

Pellicle materials for use at 157 nm must display sufficient transparency at this wavelength and adequate lifetimes to be useful. We blended a leading candidate fluoropolymer with silica nanoparticles to examine the effect on both the transparency and the lifetime of the pellicle. It is anticipated that these composite materials may increase the lifetime, perhaps by quenching reactive species and/or by dilution, without severely decreasing the 157-nm transmission. Particles surface-modified with fluorinated moieties are also investigated. The additives are introduced as stable nanoparticle dispersions to casting solutions of the fluoropolymers. The properties of these solutions and films, and the radiation-induced darkening rates, are reported. The latter are reduced in proportion to the dilution of the polymer, but there is no evidence that the nanoparticles act as radical scavengers.

Technology requirements for supporting on-demand interactive grid computing

Summary

It is increasingly being recognized that a large pool of High Performance Computing (HPC) users requires interactive, on-demand access to HPC resources. How to provide these resources is a significant technical challenge that can be addressed from two directions. The first approach is to adapt existing batch-queue-based HPC systems to make them more interactive. The second approach is to start with existing interactive desktop environments (e.g., MATLAB) and design a system from the ground up that allows interactive parallel computing. The Lincoln Laboratory Grid (LLGrid) project has taken the latter approach. The LLGrid system has been operational for over a year with a few hundred processors and roughly 70 users, having run over 13,000 interactive jobs and consumed approximately 10,000 processor-days of computation. This paper compares the on-demand and interactive computing features of four prominent batch queuing systems: openPBS, Sun GridEngine, Condor, and LSF. It then briefly describes the LLGrid system and how interactive, on-demand computing was achieved on it by binding to a resource management system. Finally, usage characteristics of the LLGrid system are discussed.
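
A hedged sketch of the on-demand pattern described above; the Scheduler interface is hypothetical, standing in for the binding layer between the interactive environment and the resource management system:

# Hypothetical on-demand dispatch loop (not the LLGrid implementation).
import time

def run_interactive(scheduler, job, nprocs, timeout_s=30.0):
    # Interactive use tolerates a short wait for an allocation, not a queue.
    grant = scheduler.request(nprocs)
    deadline = time.monotonic() + timeout_s
    while not grant.ready():
        if time.monotonic() > deadline:
            scheduler.cancel(grant)
            raise TimeoutError("no on-demand capacity within interactive limits")
        time.sleep(0.5)
    return scheduler.launch(grant, job)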

Parallel VSIPL++: an open standard software library for high-performance parallel signal processing

Published in:
Proc. IEEE, Vol. 93, No. 2 , February 2005, pp. 313-330.

Summary

Real-time signal processing consumes the majority of the world's computing power. Increasingly, programmable parallel processors are used to address a wide variety of signal processing applications (e.g., scientific, video, wireless, medical, communication, encoding, radar, sonar, and imaging). In programmable systems, the major challenge is no longer hardware but software. Specifically, the key technical hurdle lies in allowing the user to write programs at a high level while still achieving performance and preserving the portability of the code across parallel computing hardware platforms. The Parallel Vector, Signal, and Image Processing Library (Parallel VSIPL++) addresses this hurdle by providing high-level C++ array constructs, a simple mechanism for mapping data and functions onto parallel hardware, and a community-defined portable interface. This paper presents an overview of the Parallel VSIPL++ standard as well as a deeper description of the technical foundations and expected performance of the library. Parallel VSIPL++ supports adaptive optimization at many levels. The C++ arrays are designed to support automatic hardware specialization by the compiler. The computation objects (e.g., fast Fourier transforms) are built with explicit setup and run stages to allow for runtime optimization. Parallel arrays and functions in Parallel VSIPL++ also support explicit setup and run stages, which are used to accelerate communication operations. The parallel mapping mechanism provides an external interface that allows optimal mappings to be generated offline and read into the system at runtime. Finally, the standard has been developed in collaboration with high-performance embedded computing vendors and is compatible with their proprietary approaches to achieving performance.
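
The setup/run split can be illustrated outside C++. A Python analogy (not the VSIPL++ API): the expensive planning happens once at setup, and the run stage only applies the plan:

# Analogy for setup/run computation objects (not the VSIPL++ C++ API).
import numpy as np

class Fft:
    def __init__(self, n):
        # Setup stage: precompute the DFT matrix (stands in for plan
        # selection, memory layout, and other one-time optimization).
        k = np.arange(n)
        self.w = np.exp(-2j * np.pi * np.outer(k, k) / n)

    def __call__(self, x):
        # Run stage: apply the precomputed plan at low per-call cost.
        return self.w @ x

fft = Fft(256)                       # setup once
y = fft(np.random.randn(256))        # run many times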