Publications


Variable projection and unfolding in compressed sensing

Published in:
Proc. 14th IEEE/SP Workshop on Statistical Signal Processing, 26-28 August 2007, pp. 358-362.

Summary

The performance of linear programming techniques that are applied in the signal identification and reconstruction process in compressed sensing (CS) is governed by both the number of measurements taken and the number of nonzero coefficients in the discrete basis used to represent the signal. To enhance the capabilities of CS, we have developed a technique called Variable Projection and Unfolding (VPU). VPU extends the identification and reconstruction capability of linear programming techniques to signals with a much greater number of nonzero coefficients in the basis in which the signals are compressible, while achieving significantly lower reconstruction error.
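
The VPU algorithm itself is not detailed in this summary; for context, here is a minimal sketch of the baseline linear-programming reconstruction (basis pursuit) that the abstract refers to, assuming random Gaussian measurements and an identity sparsity basis:

```python
# Minimal basis-pursuit sketch: recover a sparse x from y = A @ x by
# linear programming (min ||x||_1 subject to A x = y). This illustrates
# the baseline CS reconstruction the abstract refers to, not VPU itself.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 48, 5          # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```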

Multifocal multiphoton microscopy (MMM) at a frame rate beyond 600 Hz

Published in:
Opt. Express, Vol. 15, No. 17, 20 August 2007, pp. 10998-11005.

Summary

We introduce a multiphoton microscope for high-speed three-dimensional (3D) fluorescence imaging. The system combines parallel illumination by a multifocal multiphoton microscope (MMM) with parallel detection via a segmented high-sensitivity charge-coupled device (CCD) camera. The instrument consists of a Ti:sapphire laser illuminating a microlens array that projects 36 foci onto the focal plane. The foci are scanned using a resonance scanner and imaged with a custom-made CCD camera. The MMM increases the imaging speed by parallelizing the illumination; the CCD camera can operate at a frame rate of 1428 Hz while maintaining a low read noise of 11 electrons per pixel by dividing its chip into 16 independent segments for parallelized readout. We image fluorescent specimens at a frame rate of 640 Hz. The calcium wave of fluo-3-labeled cardiac myocytes is measured by imaging the spontaneous contraction of the cells in a 0.625-second sequence of 400 single images.
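
The quoted acquisition figures are internally consistent; a quick arithmetic check:

```python
# Consistency check of the acquisition numbers quoted above.
frames, duration_s = 400, 0.625
print(frames / duration_s, "Hz")   # 640.0 Hz effective imaging frame rate
# The full-chip rate of 1428 Hz is reached by reading 16 chip segments
# in parallel while keeping read noise at ~11 electrons per pixel.
```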

Analysis of ground surveillance assets to support Global Hawk airspace access at Beale Air Force Base

Summary

This study, performed from May 2006 to January 2007 by MIT Lincoln Laboratory, investigated the feasibility of providing ground-sensor-based traffic data directly to Global Hawk operators at Beale AFB. The system concept involves detecting and producing tracks for all cooperative (transponder-equipped) and non-cooperative aircraft from the surface to 18,000 ft MSL, extending from the Beale AFB Class C airspace cylinder northward to the China Military Operations Area (MOA). Data from multiple sensors can be fused to create a comprehensive air surveillance picture, with the altitudes of non-cooperative targets estimated by fusing returns from all available sensors. Such a capability, if accepted by the FAA, could reduce the need for Temporary Flight Restrictions (TFRs) to satisfy Certificate of Waiver or Authorization (COA) requirements. There are no existing specifications for ground-sensor-based Unmanned Aerial Systems (UAS) traffic avoidance procedures, nor is it yet known how precisely altitude needs to be estimated. It may be possible to avoid traffic laterally, in which case traffic altitude need not be known accurately. If, however, it is also necessary to avoid traffic vertically, then altitudes will need to be estimated to some (as yet undefined) level of accuracy.
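
The study summary does not specify a fusion algorithm; as a hypothetical illustration of how altitude estimates from several sensors might be combined, here is an inverse-variance weighting sketch (a standard estimation technique, not the study's method):

```python
# Hypothetical sketch of one standard way to fuse altitude estimates from
# several sensors: inverse-variance weighting. The study does not describe
# its fusion algorithm; this only illustrates the general idea.
def fuse_altitudes(estimates):
    """estimates: list of (altitude_ft, variance_ft2) pairs, one per sensor."""
    weights = [1.0 / var for _, var in estimates]
    alt = sum(w * a for (a, _), w in zip(estimates, weights)) / sum(weights)
    var = 1.0 / sum(weights)
    return alt, var

# e.g. a radar height estimate and a coarser model-based estimate
print(fuse_altitudes([(9500.0, 250_000.0), (10200.0, 640_000.0)]))
```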

Beam combining of ytterbium fiber amplifiers (invited)

Published in:
J. Opt. Soc. Am. B, Vol. 24, No. 8, August 2007, pp. 1707-1715.

Summary

Fiber lasers are well suited to scaling to high average power using beam-combining techniques. For coherent combining, optical phase-noise characterization of a ytterbium fiber amplifier is required to perform a critical evaluation of various approaches to coherent combining. For wavelength beam combining, we demonstrate good beam quality from the combination of three fiber amplifiers, and we discuss system scaling and design trades between laser linewidth, beam width, grating dispersion, and beam quality.
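
The abstract names the design trade but not its form; a simplified first-order estimate (an assumption for illustration, not the paper's analysis) is that a linewidth passed through a grating with angular dispersion acquires an angular spread that degrades beam quality once it rivals the diffraction angle:

```python
# Simplified, first-order estimate (assumed form, not the paper's model) of
# how laser linewidth and grating dispersion trade against output beam
# quality in wavelength beam combining.
import math

wavelength = 1.064e-6        # m, in the ytterbium band
w0 = 1.0e-3                  # m, beam radius at the grating (assumed)
dispersion = 0.5e-3 / 1e-9   # rad/m: 0.5 mrad of steering per nm (assumed)
linewidth = 0.1e-9           # m, i.e. 0.1 nm laser linewidth (assumed)

theta_diff = wavelength / (math.pi * w0)   # diffraction half-angle
theta_disp = dispersion * linewidth        # dispersive angular spread
m2 = math.sqrt(1.0 + (theta_disp / theta_diff) ** 2)
print(f"estimated beam-quality factor M^2 ~ {m2:.2f}")
```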

Orthodox etching of HVPE-grown GaN

Published in:
J. Crystal Growth, Vol. 305, No. 2, July 15, 2007, pp. 384-392 (Proc. of the 4th Int. Workshop on Bulk Nitride Semiconductors IV, 16-22 October 2006).

Summary

Orthodox etching of HVPE-grown GaN in a molten KOH+NaOH eutectic (E etch) and in hot sulfuric and phosphoric acids (HH etch) is discussed in detail. Three size grades of pits are formed by preferential E etching at the outcrops of threading dislocations on the Ga-polar surface of GaN. Using transmission electron microscopy (TEM) as the calibration tool, it is shown that the largest pits form on screw dislocations, intermediate ones on mixed dislocations, and the smallest on edge dislocations. This size sequence does not follow the sequence of Burgers vector magnitudes (and thus the magnitude of the elastic energy) of the corresponding dislocations. The discrepancy is explained by taking into account the decoration of dislocations, the degree of which is expected to differ with the lattice deformation around the dislocations, i.e., with the edge component of the Burgers vector. It is argued that the large scatter in the optimal etching temperatures required to reveal all three types of dislocations in HVPE-grown samples from different sources also depends on the energetic status of the dislocations. The role of kinetics in the reliability of both etches is discussed, and an approach to optimizing the etching parameters is presented.

Macroscopic workload model for estimating en route sector capacity

Published in:
USA/Europe ATM Seminar, 2-5 July 2007.

Summary

Under ideal weather conditions, each en route sector in an air traffic management (ATM) system has a certain maximum operational traffic density that its controller team can safely handle with nominal traffic flow. We call this the design capacity of the sector. Bad weather and altered flow often reduce sector capacity by increasing controller workload. We refer to sector capacity that is reduced by such conditions as dynamic capacity. When operational conditions cause workload to exceed the capability of a sector's controllers, air traffic managers can respond either by reducing demand or by increasing design capacity. Reducing demand can increase aircraft operating costs and impose delays. Increasing design capacity is usually accomplished by assigning more control resources to the airspace, which increases the cost of ATM. To ensure full utilization of the dynamic capacity and efficient use of the workforce, it is important to accurately characterize the capacity of each sector. Airspace designers often estimate sector capacity using microscopic workload simulations that model each task imposed by each aircraft. However, the complexity of those detailed models limits their real-time operational use, particularly in situations in which sector volumes or flow directions must adapt to changing conditions. To represent design capacity operationally in the United States, traffic flow managers define an acceptable peak traffic count for each sector based on practical experience. These subjective thresholds, while usable in decision-making, do not always reflect the complexity and geometry of the sectors, nor the direction of the traffic flow. We have developed a general macroscopic workload model to quantify the workload impact of traffic density, sector geometry, flow direction, and air-to-air conflict rates. This model provides an objective basis for estimating design capacity. Unlike simulation models, this analytical approach easily extrapolates to new conditions and allows parameter validation by fitting to observed sector traffic counts. The model quantifies coordination and conflict workload as well as observed relationships between sector volume and controller efficiency. It can support real-time prediction of changes in design capacity when traffic is diverted from nominal routes, and it can estimate residual airspace capacity when weather partially blocks a sector. Its ability to identify dominant manual workload factors can also help define the benefits and effectiveness of alternative concepts for automating labor-intensive tasks.
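
The abstract does not give the model's functional form; a stylized sketch of a macroscopic workload model of this general kind, with hypothetical terms and coefficients, illustrates how such a model can back out a design capacity:

```python
# Stylized sketch of a macroscopic sector-workload model of the general
# kind the abstract describes. The functional form and coefficients are
# hypothetical illustrations, not the paper's fitted model.
def sector_workload(n_aircraft, sector_volume_nm3, flow_factor, conflict_rate):
    monitoring   = 1.0 * n_aircraft                    # per-aircraft tasks
    coordination = 0.5 * n_aircraft                    # handoffs scale with count
    conflicts    = 4.0 * conflict_rate                 # conflict-resolution tasks
    geometry     = (1.0e5 / sector_volume_nm3) ** 0.5  # small sectors work harder
    return (monitoring + coordination + conflicts) * geometry * flow_factor

# Design capacity: the largest traffic count keeping workload under a threshold.
def design_capacity(threshold, **kw):
    n = 0
    while sector_workload(n + 1, conflict_rate=0.1 * (n + 1), **kw) <= threshold:
        n += 1
    return n

print(design_capacity(30.0, sector_volume_nm3=2.0e5, flow_factor=1.0))
```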

Arrays of InP-based avalanche photodiodes for photon counting

Summary

Arrays of InP-based avalanche photodiodes (APDs) with InGaAsP absorber regions have been fabricated and characterized in the Geiger mode for photon-counting applications. Measurements of APDs with InGaAsP absorbers optimized for 1.06 μm wavelength show dark count rates (DCRs)

Robust speaker recognition in noisy conditions

Published in:
IEEE Trans. Audio, Speech, Lang. Process., Vol. 15, No. 5, July 2007, pp. 1711-1723.

Summary

This paper investigates the problem of speaker identification and verification in noisy conditions, assuming that speech signals are corrupted by environmental noise, but knowledge about the noise characteristics is not available. This research is motivated in part by the potential application of speaker recognition technologies on handheld devices or the Internet. While the technologies promise an additional biometric layer of security to protect the user, the practical implementation of such systems faces many challenges. One of these is environmental noise. Due to the mobile nature of such systems, the noise sources can be highly time-varying and potentially unknown. This raises the requirement for noise robustness in the absence of information about the noise. This paper describes a method that combines multicondition model training and missing-feature theory to model noise with unknown temporal-spectral characteristics. Multicondition training is conducted using simulated noisy data with limited noise variation, providing a coarse compensation for the noise, and missing-feature theory is applied to refine the compensation by ignoring noise variation outside the given training conditions, thereby reducing the training and testing mismatch. This paper focuses on several issues relating to the implementation of the new model for real-world applications. These include the generation of multicondition training data to model noisy speech, the combination of different training data to optimize the recognition performance, and the reduction of the model's complexity. The new algorithm was tested using two databases with simulated and realistic noisy speech data. The first database is a redevelopment of the TIMIT database by rerecording the data in the presence of various noise types, used to test the model for speaker identification with a focus on the varieties of noise. The second database is a handheld-device database collected in realistic noisy conditions, used to further validate the model for real-world speaker verification. The new model is compared to baseline systems and achieves lower error rates.
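
As a simplified illustration of the missing-feature idea described above (the specific SNR estimator and thresholds here are assumptions, not the paper's), unreliable spectral features can be masked out and marginalized from a diagonal-covariance GMM score:

```python
# Simplified sketch of missing-feature scoring: keep only spectral features
# judged reliable (local SNR above a threshold) and marginalize the rest out
# of a diagonal-covariance GMM log-likelihood. Estimator and threshold
# choices are illustrative assumptions.
import numpy as np

def reliability_mask(noisy_power, noise_power, snr_floor_db=0.0):
    snr_db = 10 * np.log10(noisy_power / np.maximum(noise_power, 1e-12))
    return snr_db > snr_floor_db              # True = reliable feature

def masked_gmm_loglik(x, mask, weights, means, variances):
    # Marginalizing a dimension of a diagonal Gaussian = dropping it.
    d = x[mask]
    ll = []
    for w, mu, var in zip(weights, means, variances):
        m, v = mu[mask], var[mask]
        ll.append(np.log(w) - 0.5 * np.sum(np.log(2 * np.pi * v)
                                           + (d - m) ** 2 / v))
    return np.logaddexp.reduce(ll)

# Toy usage: 2-component GMM over 4 spectral features, one feature masked.
x = np.array([1.0, 5.0, 0.2, 3.0])
mask = reliability_mask(np.array([4.0, 9.0, 0.5, 2.0]), np.ones(4))
print(masked_gmm_loglik(x, mask, [0.5, 0.5],
                        [np.zeros(4), np.ones(4)],
                        [np.ones(4), np.ones(4)]))
```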

PANEMOTO: network visualization of security situational awareness through passive analysis

Summary

To maintain effective security situational awareness, administrators require tools that present up-to-date information on the state of the network in the form of 'at-a-glance' displays, and that enable rapid assessment and investigation of relevant security concerns through drill-down analysis capability. In this paper, we present a passive network monitoring tool we have developed to address these important requirements, known as Panemoto (PAssive NEtwork MOnitoring TOol). We show how Panemoto enumerates, describes, and characterizes all network components, including devices and connected networks, and delivers an accurate representation of the function of devices and the logical connectivity of networks. We provide examples of Panemoto's output in which the network information is presented in two distinct but related formats: as a clickable network diagram (through the use of NetViz, a commercially available graphical display environment) and as statically linked HTML pages, viewable in any standard web browser. Together, these presentation techniques enable a more complete understanding of the security situation of the network than either does individually.
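
Panemoto's implementation is not described at this level of detail; a toy sketch in the same spirit shows how device records can be accumulated purely from observed traffic, without sending any probes:

```python
# Toy illustration of passive enumeration in the spirit described above:
# accumulate device records from observed (src MAC, src IP, src port)
# tuples without generating any traffic. This is an assumed, simplified
# model, not Panemoto's actual implementation.
from collections import defaultdict

WELL_KNOWN = {80: "web server", 53: "DNS server", 25: "mail server"}

devices = defaultdict(lambda: {"ips": set(), "roles": set()})

def observe(src_mac, src_ip, src_port):
    rec = devices[src_mac]
    rec["ips"].add(src_ip)
    if src_port in WELL_KNOWN:          # infer a role from the serving port
        rec["roles"].add(WELL_KNOWN[src_port])

observe("00:11:22:33:44:55", "10.0.0.5", 80)
observe("00:11:22:33:44:55", "10.0.0.5", 53)
print(dict(devices["00:11:22:33:44:55"]))
```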

Benchmarking the MIT LL HPCMP DHPI system

Published in:
Annual High Performance Computer Modernization Program Users Group Conf., 19-21 June 2007.

Summary

The Massachusetts Institute of Technology Lincoln Laboratory (MIT LL) High Performance Computing Modernization Program (HPCMP) Dedicated High Performance Computing Project Investment (DHPI) system was designed to address interactive algorithm development for Department of Defense (DoD) sensor processing systems. The results of the system acceptance test provide a clear quantitative picture of the capabilities of the system. The system acceptance test for the MIT LL HPCMP DHPI hardware involved an array of benchmarks that exercised each of the components of the memory hierarchy, the scheduler, and the disk arrays. These benchmarks isolated the components to verify the functionality and performance of the system, and several system issues were discovered and rectified by using them. The memory hierarchy was evaluated using the HPC Challenge benchmark suite, which comprises the following benchmarks: High Performance Linpack (HPL, also known as Top 500), Fast Fourier Transform (FFT), STREAM, RandomAccess, and Effective Bandwidth. The compute nodes' Redundant Array of Independent Disks (RAID) arrays were evaluated with the Iozone benchmark. Finally, the scheduler and the reliability of the entire system were tested using both the HPC Challenge suite and the Iozone benchmark. For example, executing the HPC Challenge benchmark suite on 416 processors, the system achieved 1.42 TFlops (HPL), 34.7 GFlops (FFT), 1.24 TBytes/sec (STREAM Triad), and 0.16 GUPS (RandomAccess). This paper describes the components of the MIT Lincoln Laboratory HPCMP DHPI system, including its memory hierarchy. We present the HPC Challenge benchmark suite and the Iozone benchmark and describe how each of the component benchmarks stresses various components of the TX-2500 system. The results of the benchmarks are discussed, along with their implications for the performance of the system. We conclude with a presentation of the findings.
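
For reference, the STREAM Triad kernel cited in the results is a = b + q*c, and the benchmark reports the sustained memory bandwidth it achieves. A small numpy rendition (illustrative only; the official benchmark is a tuned C/Fortran code):

```python
# What the STREAM Triad result quoted above measures: sustained memory
# bandwidth for the kernel a = b + q*c. A small numpy rendition for
# illustration, not the official benchmark.
import time
import numpy as np

n = 10_000_000                 # ~80 MB per double-precision array
b = np.random.rand(n)
c = np.random.rand(n)
a = np.empty(n)
q = 3.0

t0 = time.perf_counter()
np.multiply(c, q, out=a)       # a = q*c
np.add(a, b, out=a)            # a = b + q*c, in place to avoid temporaries
dt = time.perf_counter() - t0

bytes_moved = 3 * n * 8        # read b, read c, write a (8-byte doubles)
print(f"triad bandwidth ~ {bytes_moved / dt / 1e9:.1f} GB/s")
```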