Publications

Design considerations for space-based radar phased arrays

Published in:
2005 IEEE MTT-S Int. Microwave Symp. Digest, 12-17 June 2005, pp. 1631-1634.

Summary

Space Based Radar (SBR) is being considered as a means to provide persistent global surveillance. In order to be effective, the SBR system must be capable of high area coverage rates, low minimum detectable velocities (MDV), accurate geolocation, high range resolution, and robustness against electronic interference. These objectives will impose challenging requirements on the antenna array, including wide-angle electronic scanning, wide instantaneous bandwidth, large power-aperture product, low sidelobe radiation patterns, lightweight deployable structures, multiple array phase centers, and adaptive pattern synthesis. This paper will discuss key enabling technologies for low earth orbit (LEO) SBR arrays including high efficiency transmit/receive modules and multilayer tile architectures, and the parametric influence of array design variables on the SBR system.

Dynamic buffer overflow detection

Published in:
Workshop on the Evaluation of Software Defect Detection Tools, 10 June 2005.

Summary

The capabilities of seven dynamic buffer overflow detection tools (Chaperon, Valgrind, CCured, CRED, Insure++, ProPolice and TinyCC) are evaluated in this paper. These tools employ different approaches to runtime buffer overflow detection and range from commercial products to open source gcc-enhancements. A comprehensive test suite was developed consisting of specifically-designed test cases and model programs containing real-world vulnerabilities. Insure++, CCured and CRED provide the highest buffer overflow detection rates, but only CRED provides an open-source, extensible and scalable solution to detecting buffer overflows. Other tools did not detect off-by-one errors, did not scale to large programs, or performed poorly on complex programs.

Application of a development time productivity metric to parallel software development

Published in:
SE-HPCS '05, 2nd Int. Workshop on Software Engineering for High Performance Computing System Applications, 15 May 2005, pp. 8-12.

Summary

Evaluation of High Performance Computing (HPC) systems should take into account software development time productivity in addition to hardware performance, cost, and other factors. We propose a new metric for HPC software development time productivity, defined as the ratio of relative runtime performance to relative programmer effort. This formula has been used to analyze several HPC benchmark codes and classroom programming assignments. The results of this analysis show consistent trends for various programming models. This method enables a high-level evaluation of development time productivity for a given code implementation, which is essential to the task of estimating cost associated with HPC software development.
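As a rough illustration (a hypothetical sketch, not the authors' code), the proposed metric can be expressed as the ratio of relative runtime performance (speedup) to relative programmer effort:

```python
def development_time_productivity(serial_runtime, parallel_runtime,
                                  serial_effort_hours, parallel_effort_hours):
    """Sketch of the proposed metric: relative runtime performance
    (speedup over the serial baseline) divided by relative programmer
    effort (parallel effort over serial effort). Argument names are
    illustrative assumptions, not from the paper."""
    relative_performance = serial_runtime / parallel_runtime      # speedup
    relative_effort = parallel_effort_hours / serial_effort_hours
    return relative_performance / relative_effort

# e.g., an 8x speedup achieved with 2x the programming effort
# yields a productivity of 4.0
print(development_time_productivity(80.0, 10.0, 10.0, 20.0))
```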

Measuring translation quality by testing English speakers with a new Defense Language Proficiency Test for Arabic

Published in:
Int. Conf. on Intelligence Analysis, 2-5 May 2005.

Summary

We present results from an experiment in which educated English-native speakers answered questions from a machine translated version of a standardized Arabic language test. We compare the machine translation (MT) results with professional reference translations as a baseline for the purpose of determining the level of Arabic reading comprehension that current machine translation technology enables an English speaker to achieve. Furthermore, we explore the relationship between the current, broadly accepted automatic measures of performance for machine translation and the Defense Language Proficiency Test, a broadly accepted measure of effectiveness for evaluating foreign language proficiency. In doing so, we intend to help translate MT system performance into terms that are meaningful for satisfying Government foreign language processing requirements. The results of this experiment suggest that machine translation may enable Interagency Language Roundtable Level 2 performance, but is not yet adequate to achieve ILR Level 3. Our results are based on 69 human subjects reading 68 documents and answering 173 questions, giving a total of 4,692 timed document trials and 7,950 question trials. We propose Level 3 as a reasonable near-term target for machine translation research and development.

Laser beam combining for high-power, high-radiance sources

Published in:
IEEE J. Sel. Top. Quantum Electron., Vol. 11, No. 3, May/June 2005, pp. 567-577.

Summary

Beam combining of laser arrays with high efficiency and good beam quality for power and radiance (brightness) scaling is a long-standing problem in laser technology. Recently, significant progress has been made using wavelength (spectral) techniques and coherent (phased array) techniques, which has led to the demonstration of beam combining of a large semiconductor diode laser array (100 array elements) with near-diffraction-limited output (M2 ~ 1.3) at significant power (35 W). This paper provides an overview of progress in beam combining and highlights some of the tradeoffs among beam-combining techniques.

Multi-PRI signal processing for the terminal Doppler weather radar, part I: clutter filtering

Published in:
J. Atmos. Ocean. Technol., Vol. 22, May 2005, pp. 575-582.

Summary

Multiple pulse repetition interval (multi-PRI) transmission is part of an adaptive signal transmission and processing algorithm being developed to aggressively combat range-velocity ambiguity in weather radars. In the past, operational use of multi-PRI pulse trains has been hampered due to the difficulty in clutter filtering. This paper presents finite impulse response clutter filter designs for multi-PRI signals with excellent magnitude and phase responses. These filters provide strong suppression for use on low-elevation scans and yield low biases of velocity estimates so that accurate velocity dealiasing is possible. Specifically, the filters are designed for use in the Terminal Doppler Weather Radar (TDWR) and are shown to meet base data bias requirements equivalent to the Federal Aviation Administration's specifications for the current TDWR clutter filters. Also an adaptive filter selection algorithm is proposed that bases its decision on clutter power estimated during an initial long-PRI surveillance scan. Simulations show that this adaptive algorithm yields satisfactory biases for reflectivity, velocity, and spectral width. Implementation of such a scheme would enable automatic elimination of anomalous propagation signals and constant adjustment to evolving ground clutter conditions, an improvement over the current TDWR clutter filtering system.

Using leader-based communication to improve the scalability of single-round group membership algorithms

Published in:
IPDPS 2005: 19th Int. Parallel and Distributed Processing Symp., 4-8 April 2005, pp. 280-287.

Summary

Sigma, the first single-round group membership (GM) algorithm, was recently introduced and demonstrated to operate consistently with theoretical expectations in a simulated WAN environment. Sigma achieved similar quality of membership configurations as existing algorithms but required fewer message exchange rounds. We now consider Sigma in terms of scalability. Sigma involves all-to-all (A2A) type of communication among members. A2A protocols have been shown to perform worse than leader-based (LB) protocols in certain networks, due to greater message overhead and higher likelihood of message loss. Thus, although LB protocols often involve additional communication steps, they can be more efficient in practice, particularly in fault-prone networks with large numbers of participating nodes. In this paper, we present Leader-Based Sigma, which transforms the original all-to-all version into a more scalable centralized communication scheme, and discuss the rounds vs. messages tradeoff involved in optimizing GM algorithms for deployment in large-scale, fault-prone dynamic network environments.

An annotated review of past papers on attack graphs

Published in:
MIT Lincoln Laboratory Report IA-1

Summary

This report reviews past research papers that describe how to construct attack graphs, how to use them to improve security of computer networks, and how to use them to analyze alerts from intrusion detection systems. Two commercial systems are described [1, 2], and a summary table compares important characteristics of past research studies. For each study, information is provided on the number of attacker goals, how graphs are constructed, sizes of networks analyzed, how well the approach scales to larger networks, and the general approach. Although research has made significant progress in the past few years, no system has analyzed networks with more than 20 hosts, and computation for most approaches scales poorly and would be impractical for networks with more than even a few hundred hosts. Current approaches also are limited because many require extensive and difficult-to-obtain details on attacks, many assume that host-to-host reachability information between all hosts is already available, and many produce an attack graph but do not automatically generate recommendations from that graph. Researchers have suggested promising approaches to alleviate some of these limitations, including grouping hosts to improve scaling, using worst-case default values for unknown attack details, and symbolically analyzing attack graphs to generate recommendations that improve security for critical hosts. Future research should explore these and other approaches to develop attack graph construction and analysis algorithms that can be applied to large enterprise networks.

Speaker adaptive cohort selection for Tnorm in text-independent speaker verification

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 1, 19-23 March 2005, pp. I-741 - I-744.

Summary

In this paper we discuss an extension to the widely used score normalization technique of test normalization (Tnorm) for text-independent speaker verification. A new method of speaker Adaptive-Tnorm that offers advantages over the standard Tnorm by adjusting the speaker set to the target model is presented. Examples of this improvement using the 2004 NIST SRE data are also presented.
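For context, standard Tnorm normalizes a trial's raw score by the mean and standard deviation of scores obtained by scoring the same test utterance against a cohort of impostor models. A minimal sketch of the baseline technique (assuming a plain list of cohort scores, not the paper's adaptive, target-matched cohort selection):

```python
import statistics

def tnorm(raw_score, cohort_scores):
    """Test normalization (Tnorm): normalize a verification score using
    the mean and standard deviation of the test utterance's scores
    against a cohort of impostor models. Variable names are
    illustrative, not from the paper."""
    mu = statistics.mean(cohort_scores)
    sigma = statistics.stdev(cohort_scores)
    return (raw_score - mu) / sigma
```

The paper's Adaptive-Tnorm differs in how the cohort is chosen: rather than a fixed cohort, it selects impostor models matched to the target model before applying the same normalization.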

Measuring human readability of machine generated text: three case studies in speech recognition and machine translation

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Vol. 5, ICASSP, 19-23 March 2005, pp. V-1009 - V-1012.

Summary

We present highlights from three experiments that test the readability of current state-of-the-art system output from (1) an automated English speech-to-text system, (2) a text-based Arabic-to-English machine translation system, and (3) an audio-based Arabic-to-English MT process. We measure readability in terms of reaction time and passage comprehension in each case, applying standard psycholinguistic testing procedures and a modified version of the standard Defense Language Proficiency Test for Arabic called the DLPT*. We learned that: (1) subjects are slowed down about 25% when reading system STT output, (2) text-based MT systems enable an English speaker to pass Arabic Level 2 on the DLPT* and (3) audio-based MT systems do not enable English speakers to pass Arabic Level 2. We intend for these generic measures of readability to predict performance of more application-specific tasks.