Publications


Beam combining of ytterbium fiber amplifiers (invited)

Published in:
J. Opt. Soc. Am. B, Vol. 24, No. 8, August 2007, pp. 1707-1715.

Summary

Fiber lasers are well suited to scaling to high average power using beam-combining techniques. For coherent combining, optical phase-noise characterization of a ytterbium fiber amplifier is required to perform a critical evaluation of various approaches to coherent combining. For wavelength beam combining, we demonstrate good beam quality from the combination of three fiber amplifiers, and we discuss system scaling and design trades between laser linewidth, beam width, grating dispersion, and beam quality.
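The linewidth/beam-quality trade mentioned above can be summarized with a standard back-of-envelope wavelength-beam-combining scaling (a generic textbook relation, not a formula taken from this paper):

```latex
% Grating dispersion maps residual laser linewidth into angular spread:
\Delta\theta \;\approx\; \frac{d\theta}{d\lambda}\,\Delta\lambda
% Beam quality degrades once this spread becomes comparable to the
% diffraction divergence \theta_0 = \lambda/(\pi w_0) of a beam of
% waist w_0, roughly as
M^2 \;\approx\; \sqrt{1 + \left(\frac{\Delta\theta}{\theta_0}\right)^{2}}
```

This is why narrower amplifier linewidths, larger beam widths, or higher grating dispersion can each be traded against one another at fixed combined beam quality.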

Orthodox etching of HVPE-grown GaN

Published in:
J. Crystal Growth, Vol. 305, No. 2, July 15, 2007, pp. 384-392 (Proc. of the 4th Int. Workshop on Bulk Nitride Semiconductors IV, 16-22 October 2006).

Summary

Orthodox etching of HVPE-grown GaN in a molten KOH+NaOH eutectic (E etch) and in hot sulfuric and phosphoric acids (HH etch) is discussed in detail. Three size grades of pits are formed by preferential E etching at the outcrops of threading dislocations on the Ga-polar surface of GaN. Using transmission electron microscopy (TEM) as the calibration tool, it is shown that the largest pits form on screw dislocations, intermediate pits on mixed dislocations, and the smallest pits on edge dislocations. This size sequence does not follow the sequence of the Burgers vector magnitudes (and thus the magnitude of the elastic energy) of the corresponding dislocations. The discrepancy is explained by taking into account the decoration of dislocations, the degree of which is expected to differ with the lattice deformation around the dislocations, i.e., with the edge component of the Burgers vector. It is argued that the large scatter of optimal etching temperatures required for revealing all three types of dislocations in HVPE-grown samples from different sources also depends upon the energetic status of the dislocations. The role of kinetics in the reliability of both etches is discussed, and an approach to optimizing the etching parameters is presented.

Macroscopic workload model for estimating en route sector capacity

Published in:
USA/Europe ATM Seminar, 2-5 July 2007.

Summary

Under ideal weather conditions, each en route sector in an air traffic management (ATM) system has a certain maximum operational traffic density that its controller team can safely handle with nominal traffic flow. We call this the design capacity of the sector. Bad weather and altered flow often reduce sector capacity by increasing controller workload. We refer to sector capacity that is reduced by such conditions as dynamic capacity. When operational conditions cause workload to exceed the capability of a sector's controllers, air traffic managers can respond either by reducing demand or by increasing design capacity. Reducing demand can increase aircraft operating costs and impose delays. Increasing design capacity is usually accomplished by assigning more control resources to the airspace. This increases the cost of ATM. To ensure full utilization of the dynamic capacity and efficient use of the workforce, it is important to accurately characterize the capacity of each sector. Airspace designers often estimate sector capacity using microscopic workload simulations that model each task imposed by each aircraft. However, the complexities of those detailed models limit their real-time operational use, particularly in situations in which sector volumes or flow directions must adapt to changing conditions. To represent design capacity operationally in the United States, traffic flow managers define an acceptable peak traffic count for each sector based on practical experience. These subjective thresholds, while usable in decision-making, do not always reflect the complexity and geometry of the sectors, nor the direction of the traffic flow. We have developed a general macroscopic workload model to quantify the workload impact of traffic density, sector geometry, flow direction, and air-to-air conflict rates. This model provides an objective basis for estimating design capacity.
Unlike simulation models, this analytical approach easily extrapolates to new conditions and allows parameter validation by fitting to observed sector traffic counts. The model quantifies coordination and conflict workload as well as observed relationships between sector volume and controller efficiency. The model can support real-time prediction of changes in design capacity when traffic is diverted from nominal routes. It can be used to estimate residual airspace capacity when weather partially blocks a sector. Its ability to identify dominant manual workload factors can also help define the benefits and effectiveness of alternative concepts for automating labor-intensive tasks.
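The abstract does not give the model's functional form. Purely as an illustration of what a macroscopic (aggregate, non-per-task) workload model looks like, the toy sketch below combines monitoring, coordination, and conflict terms; every function name and coefficient here is hypothetical, not from the paper:

```python
def sector_workload(traffic_count, sector_volume_nm3, conflict_rate_per_hr,
                    c_monitor=1.0, c_coord=0.5, c_conflict=2.0):
    """Toy macroscopic workload model (illustrative only, not the paper's).

    Workload grows with monitoring load (proportional to aircraft count),
    coordination load (handoffs per unit time, proxied here by count over
    a sector length scale, volume^(1/3)), and conflict-resolution load.
    """
    monitoring = c_monitor * traffic_count
    coordination = c_coord * traffic_count / sector_volume_nm3 ** (1.0 / 3.0)
    conflicts = c_conflict * conflict_rate_per_hr
    return monitoring + coordination + conflicts


def design_capacity(max_workload, sector_volume_nm3, conflict_rate_per_hr):
    """Largest traffic count whose workload stays within max_workload."""
    n = 0
    while sector_workload(n + 1, sector_volume_nm3,
                          conflict_rate_per_hr) <= max_workload:
        n += 1
    return n
```

Because such a model is analytic, its coefficients can be fit to observed sector traffic counts and extrapolated to new geometries or flows, which is the advantage over microscopic simulation the abstract describes.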

Arrays of InP-based avalanche photodiodes for photon counting

Summary

Arrays of InP-based avalanche photodiodes (APDs) with InGaAsP absorber regions have been fabricated and characterized in the Geiger mode for photon-counting applications. Measurements of APDs with InGaAsP absorbers optimized for 1.06 μm wavelength show dark count rates (DCRs)

Robust speaker recognition in noisy conditions

Published in:
IEEE Trans. Speech Audio Process., Vol. 15, No. 5, July 2007, pp. 1711-1723.

Summary

This paper investigates the problem of speaker identification and verification in noisy conditions, assuming that speech signals are corrupted by environmental noise, but knowledge about the noise characteristics is not available. This research is motivated in part by the potential application of speaker recognition technologies on handheld devices or the Internet. While the technologies promise an additional biometric layer of security to protect the user, the practical implementation of such systems faces many challenges. One of these is environmental noise. Due to the mobile nature of such systems, the noise sources can be highly time-varying and potentially unknown. This raises the requirement for noise robustness in the absence of information about the noise. This paper describes a method that combines multicondition model training and missing-feature theory to model noise with unknown temporal-spectral characteristics. Multicondition training is conducted using simulated noisy data with limited noise variation, providing a coarse compensation for the noise, and missing-feature theory is applied to refine the compensation by ignoring noise variation outside the given training conditions, thereby reducing the training and testing mismatch. This paper is focused on several issues relating to the implementation of the new model for real-world applications. These include the generation of multicondition training data to model noisy speech, the combination of different training data to optimize the recognition performance, and the reduction of the model's complexity. The new algorithm was tested using two databases with simulated and realistic noisy speech data. The first database is a redevelopment of the TIMIT database by rerecording the data in the presence of various noise types, used to test the model for speaker identification with a focus on the varieties of noise. 
The second database is a handheld-device database collected in realistic noisy conditions, used to further validate the model for real-world speaker verification. The new model is compared to baseline systems and is found to achieve lower error rates.
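As background for the missing-feature step described above: one common formulation scores each frame against a speaker's diagonal-covariance GMM using only the feature dimensions judged reliable, marginalizing out the rest. The sketch below is a generic illustration of that idea; the masking policy, names, and model sizes are ours, not the paper's:

```python
import numpy as np

def gmm_loglik_marginal(x, reliable, weights, means, variances):
    """Frame log-likelihood under a diagonal-covariance GMM, marginalizing
    out (i.e., simply omitting) feature dimensions judged unreliable.

    x         : (D,) feature frame
    reliable  : (D,) boolean mask; False = dimension judged noise-dominated
    weights   : (M,) mixture weights, summing to 1
    means     : (M, D) component means
    variances : (M, D) per-dimension variances
    """
    xr = x[reliable]
    mu = means[:, reliable]
    var = variances[:, reliable]
    # Per-component Gaussian log-density over the reliable dimensions only
    log_comp = -0.5 * (np.log(2 * np.pi * var) + (xr - mu) ** 2 / var).sum(axis=1)
    log_comp += np.log(weights)
    # Stable log-sum-exp over components
    m = log_comp.max()
    return m + np.log(np.exp(log_comp - m).sum())
```

A dimension swamped by noise outside the training conditions would otherwise dominate the score; dropping it reduces the train/test mismatch, which is the role missing-feature theory plays alongside multicondition training in the abstract.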

PANEMOTO: network visualization of security situational awareness through passive analysis

Summary

To maintain effective security situational awareness, administrators require tools that present up-to-date information on the state of the network in the form of 'at-a-glance' displays, and that enable rapid assessment and investigation of relevant security concerns through drill-down analysis capability. In this paper, we present a passive network monitoring tool we have developed to address these important requirements, known as Panemoto (PAssive NEtwork MOnitoring TOol). We show how Panemoto enumerates, describes, and characterizes all network components, including devices and connected networks, and delivers an accurate representation of the function of devices and logical connectivity of networks. We provide examples of Panemoto's output in which the network information is presented in two distinct but related formats: as a clickable network diagram (through the use of NetViz, a commercially available graphical display environment) and as statically linked HTML pages, viewable in any standard web browser. Together, these presentation techniques enable a more complete understanding of the security situation of the network than either does individually.

Benchmarking the MIT LL HPCMP DHPI system

Published in:
Annual High Performance Computer Modernization Program Users Group Conf., 19-21 June 2007.

Summary

The Massachusetts Institute of Technology Lincoln Laboratory (MIT LL) High Performance Computing Modernization Program (HPCMP) Dedicated High Performance Computing Project Investment (DHPI) system was designed to address interactive algorithm development for Department of Defense (DoD) sensor processing systems. The results of the system acceptance test provide a clear quantitative picture of the capabilities of the system. The system acceptance test for the MIT LL HPCMP DHPI hardware involved an array of benchmarks that exercised each of the components of the memory hierarchy, the scheduler, and the disk arrays. These benchmarks isolated the components to verify the functionality and performance of the system, and several system issues were discovered and rectified by using these benchmarks. The memory hierarchy was evaluated using the HPC Challenge benchmark suite, which comprises the following benchmarks: High Performance Linpack (HPL, also known as Top 500), Fast Fourier Transform (FFT), STREAM, RandomAccess, and Effective Bandwidth. The compute nodes' Redundant Array of Independent Disks (RAID) arrays were evaluated with the Iozone benchmark. Finally, the scheduler and the reliability of the entire system were tested using both the HPC Challenge suite and the Iozone benchmark. For example, executing the HPC Challenge benchmark suite on 416 processors, the system achieved 1.42 TFlops (HPL), 34.7 GFlops (FFT), 1.24 TBytes/sec (STREAM Triad), and 0.16 GUPS (RandomAccess). This paper describes the components of the MIT Lincoln Laboratory HPCMP DHPI system, including its memory hierarchy. We present the HPC Challenge benchmark suite and the Iozone benchmark and describe how each of the component benchmarks stresses various components of the TX-2500 system. The results of the benchmarks and their implications for system performance are discussed. We conclude with a presentation of the findings.
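To illustrate what the STREAM Triad figure measures, here is a minimal NumPy rendition of the triad kernel and its bandwidth arithmetic. The real benchmark is compiled C/Fortran with OpenMP; this sketch only mirrors the a[i] = b[i] + s*c[i] access pattern and STREAM's convention of counting three arrays of 8-byte elements per iteration:

```python
import time
import numpy as np

def stream_triad(n=10_000_000, scalar=3.0):
    """Minimal STREAM 'triad' kernel: a[i] = b[i] + scalar * c[i].

    Returns measured bandwidth in GB/s, counting 3 * 8 * n bytes moved
    (two reads, one write), as the STREAM rules do for triad.
    """
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty(n)
    t0 = time.perf_counter()
    np.multiply(c, scalar, out=a)   # a = scalar * c
    np.add(a, b, out=a)             # a = b + scalar * c
    elapsed = time.perf_counter() - t0
    assert np.allclose(a, b + scalar * c)  # correctness check
    return (3 * 8 * n) / elapsed / 1e9
```

For scale, the quoted 1.42 TFlops HPL result over 416 processors works out to roughly 3.4 GFlops per processor.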

Technical challenges of supporting interactive HPC

Published in:
Ann. High Performance Computer Modernization Program Users Group Conf., 19-21 June 2007.

Summary

Users' demand for interactive, on-demand access to a large pool of high performance computing (HPC) resources is increasing. The majority of users at Massachusetts Institute of Technology Lincoln Laboratory (MIT LL) are involved in the interactive development of sensor processing algorithms. This development often requires a large amount of computation due to the complexity of the algorithms being explored and/or the size of the data set being analyzed. These researchers also require rapid turnaround of their jobs because each iteration directly influences code changes made for the following iteration. Historically, batch queue systems have not been a good match for this kind of user. The Lincoln Laboratory Grid (LLGrid) system at MIT LL is the largest dedicated interactive, on-demand HPC system in the world. While the system also accommodates some batch queue jobs, the vast majority of jobs submitted are interactive, on-demand jobs. Choosing between running a system with a batch queue or in an interactive, on-demand manner involves tradeoffs. This paper discusses the tradeoffs between operating a cluster as a batch system, an interactive, on-demand system, or a hybrid system. The LLGrid system has been operational for over three years and now serves over 200 users from across Lincoln. The system has run over 100,000 interactive jobs and has become an integral part of many researchers' algorithm development workflows. For instance, in batch queue systems, an individual user commonly can gain access to 25% of the processors in the system after the job has waited in the queue; in our experience with on-demand, interactive operation, individual users often can also gain access to 20-25% of the cluster processors. This paper shares a variety of new data on our experiences with running an interactive, on-demand system that also provides some batch queue access.
Keywords: grid computing, on-demand, interactive high performance computing, cluster computing, parallel MATLAB.

Automatic language identification

Published in:
Wiley Encyclopedia of Electrical and Electronics Engineering, Vol. 2, pp. 104-109, 2007.

Summary

Automatic language identification is the process by which the language of digitized spoken words is recognized by a computer. It is one of several processes in which information is extracted automatically from a speech signal.

Integrated compensation network for low mutual coupling of planar microstrip antenna arrays

Published in:
IEEE Antennas and Propagation Society Int. Symp., 2007 Digest, 9-15 June 2007, pp. 1273-1276.

Summary

The unavoidable presence of mutual coupling of antenna elements in an array limits the ability to transmit and receive signals concurrently [1]. In the absence of mutual coupling, it is conceivable, although still difficult, to transmit and receive at the same frequency at the same time, e.g., in FM-CW radars. The reflection from the antenna, leakage through the circulator, and any other possible deleterious paths from the high power amplifier to the low noise amplifier must be cancelled or compensated for in some manner to keep the receiver linear. With a single antenna, the signal and noise paths are correlated, and therefore cancellation of the signal inherently eliminates the noise. However, in an array environment the mutual coupling of antenna elements causes noise from neighboring high power amplifiers to couple into each channel's receiver. While the signal coupling is coherent, the noise is uncorrelated to a degree that depends on the amplifier gain and noise figure. The use of a low mutual coupling antenna array is a critical element in operating systems in this manner.
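The coherent-signal versus uncorrelated-noise distinction can be made concrete with a small numerical sketch (an idealized model of our own, not the paper's): N neighboring channels each couple into one receiver with coupling coefficient c. Identical (coherent) signals add in amplitude, so their power scales as (N*c)^2, while independent amplifier noise adds in power, scaling as N*c^2:

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_power(n_channels=16, coupling=0.05, n_samples=200_000):
    """Power coupled into one receiver from n_channels neighbors.

    Coherent case: every channel carries the same unit-amplitude signal,
    so amplitudes add -> power = (n_channels * coupling)**2.
    Incoherent case: each channel carries independent unit-variance noise,
    so powers add -> expected power = n_channels * coupling**2.
    """
    coherent = abs(n_channels * coupling * 1.0) ** 2
    noise = rng.standard_normal((n_channels, n_samples))
    incoherent = np.mean((coupling * noise.sum(axis=0)) ** 2)
    return coherent, incoherent
```

With the default values, the coherent coupled power is (16 * 0.05)^2 = 0.64 while the incoherent noise power is about 16 * 0.05^2 = 0.04, which is why signal cancellation alone cannot remove the coupled amplifier noise in an array the way it can for a single antenna.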