Publications


New CCD imagers for adaptive optics wavefront sensors

Published in:
SPIE, Vol. 9148, Adaptive Optics Systems IV, 22 June 2014, 91485O.

Summary

We report on two recently developed charge-coupled devices (CCDs) for adaptive optics wavefront sensing, both designed to provide exceptional sensitivity (low noise and high quantum efficiency) in high-frame-rate, low-latency readout applications. The first imager, the CCID75, is a back-illuminated 16-port 160x160-pixel CCD that has been demonstrated to operate at frame rates above 1,300 fps with noise of...

Active hyperspectral imaging using a quantum cascade laser (QCL) array and digital-pixel focal plane array (DFPA) camera

Summary

We demonstrate active hyperspectral imaging using a quantum-cascade laser (QCL) array as the illumination source and a digital-pixel focal-plane-array (DFPA) camera as the receiver. The multi-wavelength QCL array used in this work comprises 15 individually addressable QCLs in which the beams from all lasers are spatially overlapped using wavelength beam combining (WBC). The DFPA camera was configured to integrate the laser light reflected from the sample and to perform on-chip subtraction of the passive thermal background. A 27-frame hyperspectral image was acquired of a liquid contaminant on a diffuse gold surface at a range of 5 meters. The measured spectral reflectance closely matches the calculated reflectance. Furthermore, the high-speed capabilities of the system were demonstrated by capturing differential reflectance images of sand and KClO3 particles that were moving at speeds of up to 10 m/s.
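The background-subtraction and reflectance-estimation steps described above can be sketched with synthetic frames. This is a minimal illustration of the idea, not the paper's processing chain; the frame sizes, signal levels, and noise model are assumptions made for the demo.

```python
import numpy as np

# Synthetic stand-ins for the DFPA frames (shapes and levels illustrative):
# one passive frame (thermal background only) and one active frame per
# laser wavelength (background plus reflected laser light).
rng = np.random.default_rng(0)
n_wavelengths, h, w = 15, 64, 64

background = rng.poisson(100, size=(h, w)).astype(float)
reflectance_true = rng.uniform(0.1, 0.9, size=n_wavelengths)
active = np.stack([background + r * 50.0 for r in reflectance_true])

# On-chip differencing: subtract the passive thermal background from each
# active frame, leaving only the laser return.
laser_return = active - background[None, :, :]

# Estimate relative spectral reflectance from the mean laser return,
# normalized to the strongest band.
spectrum = laser_return.mean(axis=(1, 2))
spectrum /= spectrum.max()
```

In a real system the passive frame would be accumulated with the laser off (or with opposite-sign weighting on-chip), so the subtraction happens in the pixel counter rather than in software as shown here.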

Simultaneous dynamic pupil coding with on-chip coded aperture temporal imaging

Published in:
SRS 2014: Signal Recovery and Synthesis Conf., 13-17 June 2014.

Summary

We describe a new sensor that combines dynamic pupil coding with a digital readout integrated circuit (DROIC) capable of modulating a scene with a global or per-pixel time-varying, pseudo-random, duo-binary signal (+1, -1, 0).
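The per-pixel temporal coding idea can be sketched numerically: modulate a time-varying signal with pseudo-random ternary (+1, -1, 0) codes and recover it afterward. Everything below (scene model, code length, least-squares recovery) is an illustrative assumption, not the paper's sensor or reconstruction algorithm.

```python
import numpy as np

# Illustrative sketch of temporal coding with pseudo-random ternary codes.
rng = np.random.default_rng(2)
T = 64            # temporal samples per frame (illustrative)
M = T + 16        # number of coded measurements (overdetermined)

# One pixel's time-varying brightness within a frame.
t = np.arange(T)
scene = 5.0 + np.sin(2 * np.pi * t / 16)

# Each measurement integrates the scene against a ternary code, as a
# per-pixel modulator accumulating (+1, -1, 0)-weighted charge would.
C = rng.choice([1.0, -1.0, 0.0], size=(M, T))
y = C @ scene

# With enough codes, the temporal signal is recoverable by least squares.
scene_hat = np.linalg.lstsq(C, y, rcond=None)[0]
```

With fewer measurements than temporal samples, the same measurement model supports compressive recovery, which is one motivation for coded temporal imaging.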

Application of the Fornasini-Marchesini first model to data collected on a complex target model

Summary

This work describes the computation of scatterers that lie on the body of a real target and are depicted in radar images. A novelty of the approach is that the target echoes collected by the radar are formulated into the first Fornasini-Marchesini (F-M) state space model to compute poles that give rise to the scatterer locations in the two-dimensional (2-D) space. Singular value decomposition carried out on the data provides state matrices that capture the dynamics of the target. Furthermore, eigenvalues computed from the state transition matrices provide range and cross-range locations of the scatterers that exhibit the target silhouette in 2-D space. The maximum likelihood function is formulated with the state matrices to obtain an iterative expression for the Fisher information matrix (FIM), from which posterior Cramer-Rao bounds associated with the various scatterers are derived. Effectiveness of the 2-D state-space technique is tested on static range data collected on a complex conical target model; its accuracy in extracting target length is judged against the physical measurements. Validity of the proposed 2-D state-space technique and the Cramer-Rao bounds is demonstrated through data collected on the target model.
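The SVD/eigenvalue step above can be illustrated in its 1-D analogue: synthetic echoes from point scatterers are modeled as complex exponentials, a Hankel matrix of the data is factored by SVD, a state transition matrix is estimated from the shift-invariance of the signal subspace, and its eigenvalues give the scatterer poles. The 2-D F-M formulation in the paper applies this machinery jointly in range and cross-range; the pole angles, model order, and matrix sizes below are illustrative assumptions.

```python
import numpy as np

# Noiseless echoes from two point scatterers as complex exponentials;
# the pole angle encodes scatterer position (values illustrative).
poles_true = np.exp(1j * np.array([0.4, 1.1]))
n = np.arange(40)
y = sum(p ** n for p in poles_true)

# Hankel matrix of the data, then SVD to extract the signal subspace.
L = 20
H = np.array([y[i:i + L] for i in range(len(y) - L)])
U, s, Vh = np.linalg.svd(H)
r = 2                      # model order = number of scatterers
Ur = U[:, :r]

# Shift-invariance: solve Ur[:-1] @ A = Ur[1:] for the state transition
# matrix A; its eigenvalues are the signal poles.
A = np.linalg.lstsq(Ur[:-1], Ur[1:], rcond=None)[0]
poles_est = np.linalg.eigvals(A)
angles = np.sort(np.angle(poles_est))
```

In the 2-D case, a second state transition matrix along the cross-range dimension yields a paired set of eigenvalues, and the matched pairs locate each scatterer in the range/cross-range plane.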

A new multiple choice comprehension test for MT

Published in:
Automatic and Manual Metrics for Operational Translation Evaluation Workshop, 9th Int. Conf. on Language Resources and Evaluation (LREC 2014), 26 May 2014.

Summary

We present results from a new machine translation comprehension test, similar to those developed in previous work (Jones et al., 2007). This test has documents in four conditions: (1) original English documents; (2) human translations of the documents into Arabic; conditions (3) and (4) are machine translations of the Arabic documents into English from two different MT systems. We created two forms of the test: Form A has the original English documents and output from the two Arabic-to-English MT systems. Form B has English, Arabic, and one of the MT system outputs. We administered the comprehension test to three subject types recruited in the greater Boston area: (1) native English speakers with no Arabic skills, (2) Arabic language learners, and (3) native Arabic speakers who also have English language skills. There were 36 native English speakers, 13 Arabic learners, and 11 native Arabic speakers with English skills. Subjects needed an average of 3.8 hours to complete the test, which had 191 questions and 59 documents. Native English speakers with no Arabic skills saw Form A. Arabic learners and native Arabic speakers saw Form B.

Standardized ILR-based and task-based speech-to-speech MT evaluation

Published in:
Automatic and Manual Metrics for Operational Translation Evaluation Workshop, 9th Int. Conf. on Language Resources and Evaluation (LREC 2014), 26 May 2014.

Summary

This paper describes a new method for task-based speech-to-speech machine translation evaluation, in which tasks are defined and assessed according to independent published standards, both for the military tasks performed and for the foreign language skill levels used. We analyze task success rates and automatic MT evaluation scores (BLEU and METEOR) for 220 role-play dialogs. Each role-play team consisted of one native English-speaking soldier role player, one native Pashto-speaking local national role player, and one Pashto/English interpreter. The overall PASS score, averaged over all of the MT dialogs, was 44%. The average PASS rate for human translation (HT) was 95%, which is important because a PASS requires that the role-players know the tasks. Without a high PASS rate in the HT condition, we could not be sure that the MT condition was not being unfairly penalized. We learned that success rates depended as much on task simplicity as on the translation condition: 67% of simple, base-case scenarios were successfully completed using MT, whereas only 35% of contrasting scenarios with even minor obstacles received passing scores. We observed that MT had the greatest chance of success when the task was simple and the language complexity needs were low.

Wind information requirements for NextGen applications - phase 2 final report - framework refinement and application to four-dimensional trajectory based operations (4D-TBO) and interval management (IM)

Published in:
MIT Lincoln Laboratory Report ATC-418

Summary

Accurate wind information is of fundamental importance to some of the critical future air traffic concepts under the FAA's Next Generation Air Transportation System (NextGen) initiative. Concepts involving time elements, such as Four-Dimensional Trajectory Based Operations (4D-TBO) and Interval Management (IM), are especially sensitive to wind information accuracy. There is a growing need to establish appropriate concepts of operation and target performance requirements accounting for wind information accuracy for these types of procedures, and meeting these needs is the purpose of this project. In the first phase of this work, a Wind Information Analysis Framework was developed to help explore the relationship of wind information to NextGen application performance. A refined version of the framework has been developed for the Phase 2 work that highlights the role stakeholders play in defining Air Traffic Control (ATC) scenarios, distinguishes wind scenarios into benign, moderate, severe, and extreme categories, and more clearly identifies what wind requirements recommendations are developed from the performance assessment trade-spaces, and how. This report documents how this refined analysis framework has been used in Phase 2 of the work in terms of:
- Refined wind information metrics and a wind scenario selection process applicable to a broader range of NextGen applications, with particular focus on 4D-TBO and IM.
- Expanded and refined studies of 4D-TBO applications with current Flight Management Systems (FMS) (with MITRE collaboration) to identify more accurate trade-spaces using operational FMS capabilities with higher-fidelity aircraft models.
- Expansion of the 4D-TBO study using incremental enhancements possible in future FMSs (with Honeywell collaboration), specifically in the area of wind-blending algorithms, to quantify the performance improvement potential from near-term avionics refinements.
- Demonstration of the adaptability of the Wind Information Analysis Framework by using it to identify initial wind information needs for IM applications.

Unmanned aircraft sense and avoid radar: surrogate flight testing performance evaluation

Summary

Unmanned aircraft systems (UAS) have proven to have distinct advantages compared to manned aircraft for a variety of tasks. Current airspace regulations require a capability to sense and avoid other aircraft to replace the ability of a pilot to see and avoid other traffic. A prototype phased-array radar was developed and tested to demonstrate a capability to support the sense and avoid (SAA) requirement and to validate radar performance models. Validated radar models enable evaluation of other radar systems in simulation. This paper provides an overview of the unique radar technology and focuses on radar performance and model validation as demonstrated through a flight testing campaign. Performance results demonstrate that the prototype SAA radar system can provide sufficient accuracy to sense and avoid non-cooperative aircraft.

Development and use of a comprehensive humanitarian assessment tool in post-earthquake Haiti

Summary

This paper describes a comprehensive humanitarian assessment tool designed and used following the January 2010 Haiti earthquake. The tool was developed under Joint Task Force -- Haiti coordination using indicators of humanitarian needs to support decision making by the United States Government, agencies of the United Nations, and various non-governmental organizations. A set of questions and data collection methodology were developed by a collaborative process involving a broad segment of the Haiti humanitarian relief community and used to conduct surveys in internally displaced person settlements and surrounding communities for a four-month period starting on 15 March 2010. Key considerations in the development of the assessment tool and data collection methodology, representative analysis results, and observations from the operational use of the tool for decision making are reported. The paper concludes with lessons learned and recommendations for design and use of similar tools in the future.

Robust keys from physical unclonable functions

Published in:
Proc. 2014 IEEE Int. Symp. on Hardware-Oriented Security and Trust, HOST, 6-7 May 2014.

Summary

Weak physical unclonable functions (PUFs) can instantiate read-proof hardware tokens (Tuyls et al. 2006, CHES) where benign variation, such as changing temperature, yields a consistent key, but invasive attempts to learn the key destroy it. Previous approaches evaluate security by measuring how much an invasive attack changes the derived key (Pappu et al. 2002, Science). If some attack insufficiently changes the derived key, an expert must redesign the hardware. An unexplored alternative uses software to enhance token response to known physical attacks. Our approach draws on machine learning. We propose a variant of linear discriminant analysis (LDA), called PUF LDA, which reduces noise levels in PUF instances while enhancing changes from known attacks. We compare PUF LDA with standard techniques using an optical coating PUF and the following feature types: raw pixels, fast Fourier transform, short-time Fourier transform, and wavelets. We measure the true positive rate for valid detection at a 0% false positive rate (no mistakes on samples taken after an attack). PUF LDA improves the true positive rate from 50% on average (with a large variance across PUFs) to near 100%. While a well-designed physical process is irreplaceable, PUF LDA enables system designers to improve the PUF reliability-security tradeoff by incorporating attacks without redesigning the hardware token.
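The standard Fisher LDA step that PUF LDA builds on can be sketched with synthetic readouts: each class is one PUF instance, samples are repeated noisy readouts, and the discriminant axis maximizes between-class scatter over within-class scatter. The feature dimensions, class separation, and noise model below are illustrative assumptions, not the paper's optical-PUF data or its PUF LDA variant.

```python
import numpy as np

# Synthetic stand-in for PUF response features: two PUF instances (classes),
# each read out repeatedly with additive noise (all parameters illustrative).
rng = np.random.default_rng(1)
n_samples, d = 50, 8
mean0 = rng.normal(0, 1, size=d)
mean1 = mean0 + 2.0          # fixed per-dimension class separation
X = np.concatenate([mean0 + rng.normal(0, 1, size=(n_samples, d)),
                    mean1 + rng.normal(0, 1, size=(n_samples, d))])
y = np.repeat([0, 1], n_samples)

# Fisher LDA: within-class scatter Sw and between-class scatter Sb.
overall = X.mean(axis=0)
Sw = np.zeros((d, d))
Sb = np.zeros((d, d))
for c in (0, 1):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    diff = (mc - overall)[:, None]
    Sb += len(Xc) * diff @ diff.T

# Leading eigenvector of Sw^-1 Sb is the discriminant axis.
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
w = np.real(evecs[:, np.argmax(np.real(evals))])

# Projected readouts from the same instance cluster tightly; a physical
# attack that shifts the features moves its projections off-cluster.
proj = X @ w
```

PUF LDA, as described above, additionally incorporates samples taken after known attacks so the learned projection amplifies attack-induced changes rather than treating them as noise.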