Publications

Gigahertz (GHz) hard X-ray imaging using fast scintillators

Summary

Gigahertz (GHz) imaging technology will be needed at high-luminosity X-ray and charged particle sources. It is plausible to combine fast scintillators with the latest picosecond detectors and GHz electronics for multi-frame hard X-ray imaging and achieve an inter-frame time of less than 10 ns. The time responses and light yields of LYSO, LaBr3, BaF2, and ZnO are measured using an MCP-PMT detector. Zinc oxide (ZnO) is an attractive material for fast hard X-ray imaging based on GEANT4 simulations and previous studies, but the measured light yield from the samples is much lower than expected.
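
For context, the "time response" measured here is conventionally characterized by fitting the scintillation pulse to a simple exponential decay; this is a standard textbook model, not a fit reported in this work:

    I(t) = \frac{N_{\mathrm{ph}}}{\tau_d} \, e^{-t/\tau_d}

where N_ph is the total photon yield and tau_d is the decay time. Reaching inter-frame times below 10 ns requires decay times of no more than a few nanoseconds, which is what makes materials such as ZnO and the fast component of BaF2 attractive candidates.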

Wind-shear detection performance analysis for MPAR risk reduction

Published in:
36th Conf. on Radar Meteorology, 16 September 2013.

Summary

Multifunction phased array radars (MPARs) of the future that may replace the current terminal wind-shear detection systems will need to meet the Federal Aviation Administration's (FAA) detection requirements. Detection performance issues related to on-airport siting of MPAR, its broader antenna beamwidth relative to the Terminal Doppler Weather Radar (TDWR), and the change in operational frequency from C band to S band are analyzed. Results from the 2012 MPAR Wind-Shear Experiment are presented, with microburst and gust-front detection statistics for the Oklahoma City TDWR and the National Weather Radar Testbed (NWRT) phased array radar, which are located 6 km apart. The NWRT has sensitivity and beamwidth similar to a conceptual terminal MPAR (TMPAR), which is a scaled-down version of a full-size MPAR. The microburst results show both the TDWR probability of detection (POD) and the estimated NWRT POD exceeding the 90% requirement. For gust fronts, however, the overall estimated NWRT POD was more than 10% lower than the TDWR POD. NWRT data are also used to demonstrate that rapid-scan phased array radar has the potential to enhance microburst prediction capability.
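
For reference, the probability of detection and related scores quoted above are standard contingency-table statistics. The Python sketch below shows how such scores are conventionally tallied; the counts are hypothetical, and the paper's actual microburst and gust-front scoring rules may differ in detail.

import math

def detection_scores(hits: int, misses: int, false_alarms: int):
    """Standard contingency-table scores: probability of detection (POD)
    and false-alarm ratio (FAR), computed from event counts."""
    pod = hits / (hits + misses) if (hits + misses) else math.nan
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else math.nan
    return pod, far

# Hypothetical example: 93 of 100 truth microbursts detected, 4 false alarms.
pod, far = detection_scores(hits=93, misses=7, false_alarms=4)
print(f"POD = {pod:.2f}, FAR = {far:.2f}")  # POD = 0.93 exceeds a 90% requirement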

Validation of NEXRAD radar differential reflectivity in snowstorms with airborne microphysical measurements: evidence for hexagonal flat plate crystals

Summary

This study is concerned with the use of cloud microphysical aircraft measurements (the Convair 580) to verify the origin of differential reflectivity (ZDR) measured with a ground-based radar (the WSR-88D KBUF radar in Buffalo, New York). The underlying goal is to make use of the radar measurements to infer the presence or absence of supercooled water, which may pose an icing hazard to aircraft. The context of these measurements is the investment by the Federal Aviation Administration in the use of NEXRAD polarimetric radar and is addressed in the companion paper by Smalley et al. (2013, this Conference). The highlight of the measurements on February 28, 2013 was the finding of sustained populations of hexagonal flat plate crystals over a large area northwest of the KBUF radar, in conditions of dilute and intermittent supercooled water concentration. Some background discussion is in order prior to the discussion of the aircraft/radar observations that form the main body of this study. The anisotropy of hydrometeors, the role of humidity and temperature in crystal shape, and the common presence of hexagonal flat plate crystals in the laboratory cold box experiment are all discussed in turn.
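
For readers unfamiliar with the observable, differential reflectivity is conventionally defined from the reflectivity factors at horizontal and vertical polarization; this is the standard definition, not specific to this study:

    Z_{DR} = 10 \log_{10}\!\left( Z_H / Z_V \right) \ \mathrm{dB}

Horizontally oriented oblate particles such as hexagonal flat plate crystals return more power at horizontal polarization and therefore produce distinctly positive ZDR, while quasi-spherical or tumbling particles produce values near zero; this is why sustained plate populations are a natural target for the aircraft validation described here.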

D4M 2.0 Schema: a general purpose high performance schema for the Accumulo database

Summary

Non-traditional, relaxed consistency, triple store databases are the backbone of many web companies (e.g., Google Big Table, Amazon Dynamo, and Facebook Cassandra). The Apache Accumulo database is a high performance open source relaxed consistency database that is widely used for government applications. Obtaining the full benefits of Accumulo requires using novel schemas. The Dynamic Distributed Dimensional Data Model (D4M) [http://www.mit.edu/~kepner/D4M] provides a uniform mathematical framework based on associative arrays that encompasses both traditional (i.e., SQL) and non-traditional databases. For non-traditional databases D4M naturally leads to a general purpose schema that can be used to fully index and rapidly query every unique string in a dataset. The D4M 2.0 Schema has been applied with little or no customization to cyber, bioinformatics, scientific citation, free text, and social media data. The D4M 2.0 Schema is simple, requires minimal parsing, and achieves the highest published Accumulo ingest rates. The benefits of the D4M 2.0 Schema are independent of the D4M interface. Any interface to Accumulo can achieve these benefits by using the D4M 2.0 Schema.
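
As a rough illustration of the schema idea, the Python sketch below turns one dense record into sparse triples whose column keys are field|value strings, so every unique string becomes directly indexable. The function name, record fields, and row-key convention are illustrative assumptions, not the authors' code.

def explode(record_id: str, record: dict) -> list:
    """Turn one dense record into sparse (row, column, value) triples;
    each unique field|value string becomes its own column key."""
    return [(record_id, f"{field}|{value}", "1")
            for field, value in record.items()]

# Hypothetical network-log record.
triples = explode("20130901-0001", {"src_ip": "128.0.0.1", "domain": "example.com"})
# -> [("20130901-0001", "src_ip|128.0.0.1", "1"),
#     ("20130901-0001", "domain|example.com", "1")]

Storing these triples in a table and its transpose lets any unique string be looked up as either a row or a column key, which is what enables the full indexing and rapid query of every unique string described above.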

LLSuperCloud: sharing HPC systems for diverse rapid prototyping

Summary

The supercomputing and enterprise computing arenas come from very different lineages. However, the advent of commodity computing servers has brought the two arenas closer than they have ever been. Within enterprise computing, commodity computing servers have resulted in the development of a wide range of new cloud capabilities: elastic computing, virtualization, and data hosting. Similarly, the supercomputing community has developed new capabilities in heterogeneous, massively parallel hardware and software. Merging the benefits of enterprise clouds and supercomputing has been a challenging goal. Significant effort has been expended in trying to deploy supercomputing capabilities on cloud computing systems. These efforts have resulted in unreliable, low-performance solutions that require enormous expertise to maintain. LLSuperCloud provides a novel solution to the problem of merging enterprise cloud and supercomputing technology. More specifically, LLSuperCloud reverses the traditional paradigm of attempting to deploy supercomputing capabilities on a cloud and instead deploys cloud capabilities on a supercomputer. The result is a system that can handle heterogeneous, massively parallel workloads while also providing high performance elastic computing, virtualization, and databases. The benefits of LLSuperCloud are highlighted using a mixed workload of C MPI, parallel MATLAB, Java, databases, and virtualized web services.

Slab-coupled optical waveguide (SCOW) devices and photonic integrated circuits (PICs)

Summary

We review recent advances in the development of slab-coupled optical waveguide (SCOW) devices, progress toward a flexible photonic integration platform containing both conventional high-confinement and SCOW ultra-low confinement devices, and applications of this technology.

Multifunction Phased Array Radar (MPAR): achieving Next Generation Surveillance and Weather Radar Capability

Published in:
J. Air Traffic Control, Vol. 55, No. 3, Fall 2013, pp. 40-47.

Summary

Within the Department of Transportation (DOT), the FAA has initiated an effort known as the NextGen Surveillance and Weather Radar Capability (NSWRC) to analyze the need for a next-generation radar replacement and assess viable implementation alternatives. One concept under analysis is multifunction radar using phased-array technology, known as Multifunction Phased Array Radar (MPAR).

Pixel-processing imager development for directed energy applications

Summary

Tactical high-energy laser (HEL) systems face a range of imaging-related challenges in wavefront sensing, acquiring and tracking targets, selecting the HEL aimpoint, and assessing lethality. Accomplishing these functions in a timely fashion may be limited by competing requirements on total field of regard, target resolution, signal-to-noise ratio, and focal plane readout bandwidth. In this paper, we explore the applicability of an emerging pixel-processing imager (PPI) technology to these challenges. The on-focal-plane signal processing capabilities of the MIT Lincoln Laboratory PPI technology have recently been extended in support of directed energy applications. We describe this work as well as early results from a new PPI-based short-wave-infrared focal plane readout capable of supporting diverse applications such as low-latency Shack-Hartmann wavefront sensing, centroid computation, and Fitts correlation tracking.
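
As a point of reference for the centroid computation mentioned above, the conventional approach in Shack-Hartmann sensing is an intensity-weighted center of mass over each subaperture. The numpy sketch below illustrates the arithmetic only; the PPI's on-focal-plane implementation, window sizes, and any thresholding are not reproduced here.

import numpy as np

def centroid(spot: np.ndarray):
    """Intensity-weighted center of mass of one subaperture image; the
    offset from the subaperture center estimates the local wavefront slope."""
    total = spot.sum()
    if total == 0:
        return (np.nan, np.nan)
    rows, cols = np.indices(spot.shape)
    return (rows * spot).sum() / total, (cols * spot).sum() / total

# Hypothetical 8x8 subaperture with a single displaced spot.
spot = np.zeros((8, 8))
spot[5, 2] = 1.0
print(centroid(spot))  # (5.0, 2.0)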

Very large graphs for information extraction (VLG) - summary of first-year proof-of-concept study

Summary

In numerous application domains relevant to the Department of Defense and the Intelligence Community, data of interest take the form of entities and the relationships between them, and these data are commonly represented as graphs. Under the Very Large Graphs for Information Extraction effort, a one-year proof-of-concept study, MIT LL developed novel techniques for anomalous subgraph detection, building on tools in the signal processing research literature. This report documents the technical results of this effort. Two datasets, a snapshot of Thomson Reuters' Web of Science database and a stream of web proxy logs, were parsed, and graphs were constructed from the raw data. From the phenomena in these datasets, several algorithms were developed to model the dynamic graph behavior, including a preferential attachment mechanism with memory, a streaming filter that models a graph as a weighted average of its past connections, and a generalized linear model for graphs in which connection probabilities are determined by additional side information or metadata. A set of metrics was also constructed to facilitate comparison of techniques. The study culminated in a demonstration of the algorithms on the datasets of interest, in addition to simulated data. Performance in terms of detection, estimation, and computational burden was measured according to the metrics. Among the highlights of this demonstration were the detection of emerging coauthor clusters in the Web of Science data, detection of botnet activity in the web proxy data after 15 minutes (activity that took 10 days to detect using state-of-the-practice techniques), and demonstration of the core algorithm on a simulated 1-billion-vertex graph using a commodity computing cluster.
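
As an illustration of the streaming filter described above, which models the graph as a weighted average of its past connections, the Python sketch below applies an exponentially weighted update to a dense adjacency matrix. The decay constant, the dense representation, and the residual-based anomaly scoring are illustrative assumptions, not the report's actual parameterization.

import numpy as np

def streaming_update(expected: np.ndarray, observed: np.ndarray, alpha: float = 0.1):
    """One filter step: `expected` is an exponentially weighted average of
    past adjacency matrices; the residual is where anomalies stand out."""
    residual = observed - expected
    expected = (1.0 - alpha) * expected + alpha * observed
    return expected, residual

# Toy 4-vertex graph stream with a hypothetical edge probability of 0.2.
rng = np.random.default_rng(0)
expected = np.zeros((4, 4))
for _ in range(20):
    observed = (rng.random((4, 4)) < 0.2).astype(float)
    expected, residual = streaming_update(expected, observed)
# Large entries of `residual` flag connections the weighted-average model
# did not expect, the starting point for anomalous subgraph detection.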