Publications


Air traffic decision analysis during convective weather events in arrival airspace

Published in:
12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conf. and 14th AIAA/ISSM, 17-19 September 2012.

Summary

Decision making during convective weather events in the terminal area is shared among pilots and air traffic management, and uninformed decisions can result in widespread, cascading delays with high-level impacts. Future traffic management systems capable of predicting terminal impacts will mitigate these unnecessary delays; however, to realize this vision, it is important to understand the decision mechanisms behind convective weather avoidance. This paper utilizes an arrival adaptation of the Convective Weather Avoidance Model (CWAM) to investigate the catalysts for arrival traffic management decision making. The analysis is broken down by category of terminal airspace structure in addition to the type of decision. The results show that pilot behavior in convective weather is heavily dependent on the terminal airspace structure. In addition, pilot and air traffic management decisions in convective weather can be discriminated with large-scale weather features.

Measurements of the 1030 and 1090 MHz environments at JFK International Airport

Summary

Measurements of signals in the 1030 and 1090 MHz frequency bands have been made by MIT Lincoln Laboratory in the last several years, previously in the Boston area and most recently in April 2011, at JFK International Airport near New York City. This JFK measurement activity was performed as a part of the Lincoln Laboratory Traffic Alert and Collision Avoidance System (TCAS) work for the Federal Aviation Administration (FAA) and is the subject of this report. This report includes: 1) Overall characteristics of the 1030/1090 MHz environments, 2) Analysis of the TCAS air-to-air coordination process, 3) Examination of 1090 MHz Extended Squitter transmissions for use in TCAS, 4) Assessment of the extent and impact of TCAS operation on the airport surface.

High dynamic range suppressed-bias microwave photonic links using unamplified semiconductor laser source

Published in:
AVFOP 2012: IEEE Avionics, Fiber-Optics and Photonics Tech. Conf., 11-13 September 2012, pp. 28-29.

Summary

Microwave photonic (MWP) links with a low noise figure and high dynamic range are required for antenna remoting, radio-over-fiber (RoF), and other advanced applications. MWP links have recently been demonstrated with noise figures approaching 3 dB, without any electrical preamplification, by using low-noise high-power laser sources in conjunction with efficient optical intensity modulators and high-power photodetectors. An alternative approach to noise-figure reduction, suitable for sub-octave links, is based on using a high-power laser source and shifting the bias point of an external optical intensity modulator to reduce the average photocurrent and suppress excess link noise. Here, we report the performance of a novel slab-coupled optical waveguide external-cavity laser (SCOWECL) in a suppressed-bias MWP link. We compare the performance of this link with a suppressed-bias link using a source comprising a commercial-off-the-shelf (COTS) laser and erbium-doped fiber amplifier (EDFA) and show that MWP links built using SCOW-based emitter technology offer superior performance due to the small-form-factor, high-efficiency, low-noise, high-power laser source.
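The bias-shifting idea can be illustrated with the ideal Mach-Zehnder modulator transfer function. The sketch below is illustrative only: the input power, bias angles, and RF drive are assumptions, not the paper's parameters. It shows how moving the bias from quadrature toward the transmission null lowers the average detected power, which suppresses the shot-noise and laser-RIN contributions that scale with DC photocurrent:

```python
import numpy as np

def mzm_output(p_in, phi_bias, phi_rf):
    """Ideal Mach-Zehnder intensity-modulator transfer function:
    P_out = (P_in / 2) * (1 + cos(phi_bias + phi_rf))."""
    return 0.5 * p_in * (1.0 + np.cos(phi_bias + phi_rf))

p_in = 100e-3                                # 100 mW optical input (illustrative)
t = np.linspace(0, 1e-9, 1000)               # 1 ns observation window
phi_rf = 0.1 * np.sin(2 * np.pi * 5e9 * t)   # small 5 GHz RF phase drive

# Quadrature bias (pi/2): maximum linearity, but on average half the
# light reaches the photodetector.
dc_quad = mzm_output(p_in, np.pi / 2, phi_rf).mean()

# Bias shifted toward the null (here 0.9*pi): the average (DC) detected
# power -- and with it the shot noise and laser RIN that ride on the DC
# photocurrent -- drops sharply.
dc_low = mzm_output(p_in, 0.9 * np.pi, phi_rf).mean()

print(dc_quad, dc_low)  # dc_low is far below dc_quad
```

The trade-off, as the abstract notes, is that operation near the null generates even-order distortion, which is why the technique is restricted to sub-octave links.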

Large scale network situational awareness via 3D gaming technology

Published in:
HPEC 2012: IEEE Conf. on High Performance Extreme Computing, 10-12 September 2012.

Summary

Obtaining situational awareness of network activity across an enterprise presents unique visualization challenges. IT analysts are required to quickly gather and correlate large volumes of disparate data to identify the existence of anomalous behavior. This paper will show how the MIT Lincoln Laboratory LLGrid Team has approached obtaining network situational awareness utilizing the Unity 3D video game engine. We have developed a 3D environment of the physical plant in the format of a networked multiplayer First Person Shooter (FPS) to demonstrate a virtual depiction of the current state of the network and the machines operating on the network. Within the game or virtual world, an analyst or player can gather critical information on all network assets as well as perform physical system actions on machines in question. 3D gaming technology provides tools to create an environment that is both visually familiar to the player and able to display immense amounts of system data in a meaningful and easy-to-absorb format. Our prototype system was able to monitor and display 5,000 assets in roughly 10% of the duration of our network time window.

Benchmarking parallel eigen decomposition for residuals analysis of very large graphs

Published in:
HPEC 2012: IEEE Conf. on High Performance Extreme Computing, 10-12 September 2012.

Summary

Graph analysis is used in many domains, from the social sciences to physics and engineering. The computational driver for one important class of graph analysis algorithms is the computation of leading eigenvectors of matrix representations of a graph. This paper explores the computational implications of performing an eigen decomposition of a directed graph's symmetrized modularity matrix using commodity cluster hardware and freely available eigensolver software, for graphs with 1 million to 1 billion vertices, and 8 million to 8 billion edges. Working with graphs of these sizes, parallel eigensolvers are of particular interest. Our results suggest that graph analysis approaches based on eigenspace analysis of graph residuals are feasible even for graphs of these sizes.
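At toy scale, the computation being benchmarked can be sketched with a dense matrix and an off-the-shelf Lanczos solver. This assumes the standard directed-modularity definition B = A - k_out k_in^T / m, symmetrized as (B + B^T)/2; the random graph and the solver choice here are illustrative stand-ins for the paper's billion-vertex runs:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
n = 200
A = (rng.random((n, n)) < 0.05).astype(float)  # random directed adjacency matrix
np.fill_diagonal(A, 0.0)

k_out = A.sum(axis=1)   # out-degrees
k_in = A.sum(axis=0)    # in-degrees
m = A.sum()             # total number of directed edges

# Directed modularity (residuals) matrix and its symmetrization.
B = A - np.outer(k_out, k_in) / m
B_sym = 0.5 * (B + B.T)

# Leading eigenpairs of the symmetric residuals matrix via Lanczos
# iteration; at the paper's scale this step is what the parallel
# eigensolvers carry out across the cluster.
vals, vecs = eigsh(B_sym, k=4, which='LA')
print(vals)
```

At a billion vertices the matrix is never formed densely; the eigensolver only needs matrix-vector products, which is what makes sparse, distributed implementations feasible.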

Scalable cryptographic authentication for high performance computing

Summary

High performance computing (HPC) uses supercomputers and computing clusters to solve large computational problems. Frequently, HPC resources are shared systems, and access to restricted data sets or resources must be authenticated. These authentication needs can take multiple forms, both internal and external to the HPC cluster. A computational stack that uses web services among nodes in the HPC may need to perform authentication between nodes of the same job, or a job may need to reach out to data sources outside the HPC. Traditional authentication mechanisms such as passwords or digital certificates encounter issues with the distributed and potentially disconnected nature of HPC systems. Distributing and storing plain-text passwords or cryptographic keys among nodes in an HPC system without special protection is a poor security practice. Systems that reach back to the user's terminal for access to the authenticator are possible, but only in fully interactive supercomputing where connectivity to the user's terminal can be guaranteed. Point solutions can be enabled for these use cases, such as software-based role or self-signed certificates; however, they require significant expertise in digital certificates to configure. A more general solution is called for that is both secure and easy to use. This paper presents an overview of a solution implemented on the interactive, on-demand LLGrid computing system at MIT Lincoln Laboratory and its use to solve one such authentication problem.

HPC-VMs: virtual machines in high performance computing systems

Published in:
HPEC 2012: IEEE Conf. on High Performance Extreme Computing, 10-12 September 2012.

Summary

The concept of virtual machines dates back to the 1960s. Both IBM and MIT developed operating system features that enabled user and peripheral time sharing, the underpinnings of which were early virtual machines. Modern virtual machines present a translation layer of system devices between a guest operating system and the host operating system executing on a computer system, while isolating each of the guest operating systems from each other. In the past several years, enterprise computing has embraced virtual machines to deploy a wide variety of capabilities, from business management systems to email server farms. Those who have adopted virtual deployment environments have capitalized on a variety of advantages, including server consolidation, service migration, and higher service reliability. But they have also ended up with some challenges, including a sacrifice in performance and more complex system management. Some of these advantages and challenges also apply to HPC in virtualized environments. In this paper, we analyze the effectiveness of using virtual machines in a high performance computing (HPC) environment. We propose adding some virtual machine capability to already robust HPC environments for specific scenarios where the productivity gained outweighs the performance lost for using virtual machines. Finally, we discuss an implementation of augmenting virtual machines into the software stack of an HPC cluster, and we analyze the effect of this implementation on job launch time.

Driving big data with big compute

Summary

Big Data (as embodied by Hadoop clusters) and Big Compute (as embodied by MPI clusters) provide unique capabilities for storing and processing large volumes of data. Hadoop clusters make distributed computing readily accessible to the Java community, and MPI clusters provide high parallel efficiency for compute-intensive workloads. Bringing the big data and big compute communities together is an active area of research. The LLGrid team has developed and deployed a number of technologies that aim to provide the best of both worlds. LLGrid MapReduce allows the map/reduce parallel programming model to be used quickly and efficiently in any language on any compute cluster. D4M (Dynamic Distributed Dimensional Data Model) provides a high-level distributed-array interface to the Apache Accumulo database. The accessibility of these technologies is assessed by measuring the effort required to use them, which is typically a few lines of code. The performance is assessed by measuring the insert rate into the Accumulo database. Using these tools, a database insert rate of 4M inserts/second has been achieved on an 8-node cluster.
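The map/reduce programming model that LLGrid MapReduce schedules across cluster nodes can be illustrated with a toy word count. This is a hypothetical stand-in, not the actual LLGrid API: each map task processes one input chunk independently (and so can run on any node), and a single reduce step merges the partial results:

```python
from collections import Counter

def map_chunk(text):
    """Map step: one independent task per input chunk."""
    return Counter(text.split())

def reduce_counts(partials):
    """Reduce step: merge the partial results from all map tasks."""
    total = Counter()
    for p in partials:
        total.update(p)
    return total

# Each chunk is an independent map task; a scheduler such as LLGrid
# MapReduce would farm these out across cluster nodes rather than
# iterating locally as done here.
chunks = ["big data big compute", "big compute clusters", "data clusters"]
partials = [map_chunk(c) for c in chunks]
totals = reduce_counts(partials)
print(totals["big"])  # 3
```

Because the map tasks share no state, the same pattern works in any language and on any cluster scheduler, which is the portability the abstract claims.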

Cluster-based 3D reconstruction of aerial video

Published in:
HPEC 2012: IEEE Conf. on High Performance Extreme Computing, 10-12 September 2012.

Summary

Large-scale 3D scene reconstruction using Structure from Motion (SfM) continues to be very computationally challenging despite much active research in the area. We propose an efficient, scalable processing chain designed for cluster computing and suitable for use on aerial video. The sparse bundle adjustment step, which is iterative and difficult to parallelize, is accomplished by partitioning the input image set, generating independent point clouds in parallel, and then fusing the clouds and combining duplicate points. We compare this processing chain to a leading parallel SfM implementation, which exploits fine-grained parallelism in various matrix operations and is not designed to scale beyond a multi-core workstation with a GPU. We show that our cluster-based approach offers a significant improvement in scalability and runtime while producing comparable point cloud density and more accurate point-location estimates.
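The fusion step can be sketched at toy scale: concatenate the per-partition clouds, then combine points that partitions reconstructed in duplicate. The voxel-rounding rule used here to detect duplicates is an assumption for illustration, not necessarily the paper's method:

```python
import numpy as np

def fuse_clouds(clouds, voxel=0.05):
    """Fuse independently reconstructed point clouds: concatenate them,
    then combine duplicate points landing in the same voxel by averaging
    their positions (hypothetical stand-in for the paper's
    duplicate-combination step)."""
    pts = np.vstack(clouds)
    keys = np.floor(pts / voxel).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, pts)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

# Two overlapping partitions reconstruct nearly the same corner point.
cloud_a = np.array([[0.00, 0.00, 0.00], [1.00, 0.00, 0.00]])
cloud_b = np.array([[0.01, 0.01, 0.00], [2.00, 0.00, 0.00]])
fused = fuse_clouds([cloud_a, cloud_b])
print(len(fused))  # 3: the two near-duplicate points collapsed into one
```

Since each partition's cloud is built independently, the expensive bundle-adjustment work parallelizes across the cluster and only this comparatively cheap merge is serial.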
