Publications

HARDEN: A high assurance design environment

Summary

Systems resilient to cyber attacks for mission assurance are difficult to develop, and effective means of evaluating them are even harder to come by. We have developed a new architectural design and engineering environment, referred to as HARDEN (High AssuRance Design ENvironment), which supports an agile design methodology for creating secure and resilient systems. This new toolkit facilitates quantitative analysis of a system's security posture by establishing a systematic approach to securing and analyzing embedded systems. HARDEN promotes the early co-design of functionality and security, which enables the development of mission-assured systems.

Understanding Mission-Driven Resiliency Workshop

Summary

MIT Lincoln Laboratory hosted an invitation-only, one-day interdisciplinary workshop entitled “Understanding Mission-Driven Resiliency” on behalf of the US Air Force, on March 18, 2019 at MIT Lincoln Laboratory Beaver Works in Cambridge, MA. Participants began to bridge the gap between government and industry to improve the resiliency of government systems to cyber attacks. The workshop focused on understanding and defining resiliency from different perspectives and included five panels devoted to discussing how different industries view and manage resiliency within their organizations, the sources of resiliency within organizations and software-intensive systems, measuring resiliency, and building resiliency within an organization or technology stack.

Discovering the smallest observed near-Earth objects with the Space Surveillance Telescope

Summary

The Space Surveillance Telescope (SST) is an advanced optical sensor designed and tested by MIT Lincoln Laboratory for the Defense Advanced Research Projects Agency (DARPA), and is currently being integrated into the Space Surveillance Network. By operating the telescope in a manner normally intended for the discovery of small, artificial space objects, SST is serendipitously sensitive to very small asteroids as they pass close to the Earth, moving rapidly through SST's search volume. This mode of operation stands in contrast to the standard approach to the search and discovery of asteroids and near-Earth objects (NEOs), in which longer revisit times restrict survey sensitivities to objects moving no faster than about 20 degrees/day. From data collected during SST's observation runs in New Mexico, we detail the discovery of 92 new candidate objects in heliocentric orbit whose absolute magnitudes range from H = 26.4 to 35.9 (approximately 18 m to 25 cm in size). Some of these discoveries represent the smallest natural objects ever observed in orbit. We compare the candidate objects with bolide observations.
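
The quoted sizes follow from the standard conversion between absolute magnitude H and diameter D, D(km) = 1329 · 10^(-H/5) / sqrt(p_V). A minimal sketch of that conversion, assuming a representative geometric albedo p_V = 0.14 (the summary does not state which albedo the authors adopted):

```rust
// Convert absolute magnitude H to an approximate diameter via the standard
// relation D [km] = 1329 / sqrt(p_V) * 10^(-H/5). The geometric albedo
// p_V = 0.14 used here is an assumed representative value; the summary
// above does not state which albedo the authors adopted.
fn diameter_km(h: f64, p_v: f64) -> f64 {
    1329.0 / p_v.sqrt() * 10_f64.powf(-h / 5.0)
}

fn main() {
    for h in [26.4_f64, 35.9] {
        let d_m = diameter_km(h, 0.14) * 1000.0; // km -> m
        println!("H = {h:4.1}  ->  D ~ {d_m:.2} m");
    }
}
```

With these inputs, H = 26.4 and H = 35.9 come out near 19 m and 23 cm, consistent with the quoted size range.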

Weather radar network benefit model for tornadoes

Published in:
J. Appl. Meteor. Climatol., 22 April 2019, doi:10.1175/JAMC-D-18-0205.1.

Summary

A monetized tornado benefit model is developed for arbitrary weather radar network configurations. Geospatial regression analyses indicate that improvements in two key radar parameters, fraction of vertical space observed and cross-range horizontal resolution, lead to better tornado warning performance as characterized by tornado detection probability and false alarm ratio. Previous experimental results showing that faster volume scan rates yield greater warning performance are also incorporated into the model. Enhanced tornado warning performance, in turn, reduces casualty rates. In addition, a lower false alarm ratio saves cost by cutting down on work and personal time lost while taking shelter. The model is run on the existing contiguous United States weather radar network as well as on hypothetical future configurations. Results show that the current radars provide a tornado-based benefit of ~$490M per year. The remaining benefit pool is about $260M per year, roughly split evenly between coverage-related and rapid-scanning-related gaps.
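
In outline, the model's accounting is: better detection buys a casualty-reduction benefit, and a lower false alarm ratio buys back sheltering time. A minimal sketch of that accounting with placeholder coefficients (the paper's fitted regression coefficients and casualty valuations are not given in this summary):

```rust
// Placeholder illustration of the benefit accounting described above.
// All coefficient values are hypothetical, not the paper's fitted numbers.
struct WarningPerformance {
    detection_probability: f64, // fraction of tornadoes successfully warned
    false_alarm_ratio: f64,     // fraction of warnings with no tornado
}

// Annual tornado-related benefit of a candidate network relative to a
// baseline network, in $M per year.
fn annual_benefit_musd(candidate: &WarningPerformance, baseline: &WarningPerformance) -> f64 {
    let value_per_detection_unit = 600.0;  // $M/yr per unit detection probability (placeholder)
    let cost_per_false_alarm_unit = 300.0; // $M/yr per unit false alarm ratio (placeholder)
    (candidate.detection_probability - baseline.detection_probability) * value_per_detection_unit
        + (baseline.false_alarm_ratio - candidate.false_alarm_ratio) * cost_per_false_alarm_unit
}

fn main() {
    let current = WarningPerformance { detection_probability: 0.7, false_alarm_ratio: 0.7 };
    let upgraded = WarningPerformance { detection_probability: 0.8, false_alarm_ratio: 0.6 };
    println!("benefit: ~${:.0}M/yr", annual_benefit_musd(&upgraded, &current));
}
```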

FastDAWG: improving data migration in the BigDAWG polystore system

Published in:
Poly 2018/DMAH 2018, LNCS 11470, 2019, pp. 3–15.

Summary

The problem of data integration has been around for decades, yet a satisfactory solution has not yet emerged. A new type of system called a polystore has surfaced to partially address the integration problem. Based on experience with our own polystore, called BigDAWG, we identify three major roadblocks to an acceptable commercial solution. We offer a new architecture, inspired by these three problems, that trades some generality for usability. This architecture also exploits modern hardware (i.e., high-speed networks and RDMA) to gain performance. The paper concludes with some promising experimental results.

Scaling big data platform for big data pipeline

Published in:
Submitted to Northeast Database Day, NEBD 2020, https://arxiv.org/abs/1902.03948

Summary

Monitoring and managing High Performance Computing (HPC) systems and environments generates an ever-growing amount of data. Making sense of this data, and presenting it so that system administrators and management can proactively identify system failures or understand the state of the system, requires a visualization platform that is as efficient and scalable as the underlying database tools used to store and analyze the data. In this paper we show how we leverage Accumulo, D4M, and Unity to generate a 3D visualization platform for monitoring and managing the Lincoln Laboratory Supercomputer systems, and how we have had to retool our approach to scale with our systems.

Guidelines for secure small satellite design and implementation: FY18 Cyber Security Line-Supported Program

Summary

We are on the cusp of a computational renaissance in space, and we should not bring past terrestrial missteps along. Commercial off-the-shelf (COTS) processors -- much more powerful than traditional rad-hard devices -- are increasingly used in a variety of low-altitude, short-duration CubeSat class missions. With this new-found headroom, the incessant drumbeat of "faster, cheaper, faster, cheaper" leads a familiar march towards Linux and a menagerie of existing software packages, each more bloated and challenging to secure than the last. Lincoln Laboratory has started a pilot effort to design and prototype an exemplar secure satellite processing platform, initially geared toward CubeSats but with a clear path to larger missions and future high performance rad-hard processors. The goal is to provide engineers a secure "grab-and-go" architecture that doesn't unduly hamstring aggressive build timelines yet still provides a foundation of security that can serve adopting systems well, as well as future systems derived from them. This document lays out the problem space for cybersecurity in this domain, derives design guidelines for future secure space systems, proposes an exemplar architecture that implements the guidelines, and provides a solid starting point for near-term and future satellite processing.

A billion updates per second using 30,000 hierarchical in-memory D4M databases

Summary

Analyzing large-scale networks requires high performance streaming updates of graph representations of these data. Associative arrays are mathematical objects combining properties of spreadsheets, databases, matrices, and graphs, and are well suited for representing and analyzing streaming network data. The Dynamic Distributed Dimensional Data Model (D4M) library implements associative arrays in a variety of languages (Python, Julia, and Matlab/Octave) and provides a lightweight in-memory database. Associative arrays are designed for block updates, so streaming updates to a large associative array require a hierarchical implementation to optimize the performance of the memory hierarchy. Running 34,000 instances of hierarchical D4M associative arrays on 1,100 server nodes on the MIT SuperCloud achieved a sustained update rate of 1,900,000,000 updates per second. This capability allows the MIT SuperCloud to analyze extremely large streaming network data sets.
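
A minimal sketch of the hierarchical update idea, assuming nothing about the actual D4M API: updates land in a small, cache-friendly associative array first, and any layer that outgrows its cutoff is merged into the next, larger layer, so most updates touch only small structures.

```rust
use std::collections::HashMap;

// Sketch of a hierarchical in-memory associative array: each layer maps
// (row, column) keys to values and has a size cutoff; when a layer
// overflows, its entries are merged (values summed) into the next,
// larger layer. Illustrative only; not the actual D4M implementation.
type Assoc = HashMap<(String, String), f64>;

struct HierarchicalAssoc {
    layers: Vec<Assoc>,
    cutoffs: Vec<usize>, // max entries per layer; the last layer is unbounded
}

impl HierarchicalAssoc {
    fn new(cutoffs: Vec<usize>) -> Self {
        let layers: Vec<Assoc> = (0..=cutoffs.len()).map(|_| Assoc::new()).collect();
        HierarchicalAssoc { layers, cutoffs }
    }

    // Every update hits the smallest layer; overflow cascades downward.
    fn update(&mut self, row: &str, col: &str, val: f64) {
        *self.layers[0]
            .entry((row.to_string(), col.to_string()))
            .or_insert(0.0) += val;
        for i in 0..self.cutoffs.len() {
            if self.layers[i].len() > self.cutoffs[i] {
                let spill: Vec<_> = self.layers[i].drain().collect();
                for (key, v) in spill {
                    *self.layers[i + 1].entry(key).or_insert(0.0) += v;
                }
            }
        }
    }
}

fn main() {
    // Three layers: two small, fast layers in front of one unbounded store.
    let mut a = HierarchicalAssoc::new(vec![4, 16]);
    for i in 0..100 {
        a.update(&format!("src{}", i % 10), &format!("dst{}", i % 7), 1.0);
    }
    let total: usize = a.layers.iter().map(|l| l.len()).sum();
    println!("{} stored entries across {} layers", total, a.layers.len());
}
```

The reported aggregate rate then comes from running tens of thousands of such instances in parallel across the MIT SuperCloud nodes.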

Shining light on thermophysical Near-Earth Asteroid modeling efforts

Published in:
1st NEO and Debris Detection Conf., 22-24 January 2019.

Summary

Comprehensive thermophysical analyses of Near-Earth Asteroids (NEAs) provide important information about their physical properties, including visible albedo, diameter, composition, and thermal inertia. These details are integral to defining asteroid taxonomy and understanding how these objects interact with the solar system. Since infrared (IR) asteroid observations are not widely available, thermophysical modeling techniques have become valuable for simulating properties of different asteroid types. Several basic models that assume a spherical asteroid shape have been used extensively within the research community. As part of a program focused on developing a simulation of space-based IR sensors for asteroid search, the Near-Earth Asteroid Thermal Model (NEATM), developed by A. Harris in 1998, was selected. This review provides a full derivation of the formulae behind NEATM, including the spectral flux density equation, consideration of the solar phase angle, and the geometry of the asteroid, Earth, and Sun system. It describes how to implement the model in software and explores the use of an ellipsoidal asteroid shape. It also applies the model to several asteroids observed by NASA's Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) and compares the performance of the model to the observations.
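
For reference, the heart of NEATM is a subsolar temperature set by balancing absorbed sunlight against beamed thermal emission, a temperature map that falls off as the one-quarter power of the cosine of the angular distance from the subsolar point, and a flux integral of the Planck function over the sunlit, visible surface. A sketch of the key relations as they are commonly written (flux shown for zero solar phase angle; the paper's full derivation also handles general viewing geometry):

```latex
% Subsolar temperature from energy balance (A = Bond albedo, S = solar
% constant at 1 AU, r = heliocentric distance in AU, \eta = beaming
% parameter, \epsilon = emissivity, \sigma = Stefan-Boltzmann constant):
T_{ss} = \left[\frac{(1-A)\,S}{\eta\,\epsilon\,\sigma\,r^{2}}\right]^{1/4},
\qquad
T(\theta) = T_{ss}\cos^{1/4}\theta \ \ (0 \le \theta \le \tfrac{\pi}{2}),
\quad T = 0 \text{ on the night side.}

% Disk-integrated spectral flux density at zero solar phase angle
% (D = diameter, \Delta = observer range, B_\lambda = Planck function):
F_{\lambda} = \frac{\pi\,\epsilon\,D^{2}}{2\,\Delta^{2}}
  \int_{0}^{\pi/2} B_{\lambda}\!\bigl(T_{ss}\cos^{1/4}\theta\bigr)
  \sin\theta\,\cos\theta\,d\theta .
```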

Secure input validation in Rust with parsing-expression grammars

Published in:
Thesis (M.E.)--Massachusetts Institute of Technology, 2019.

Summary

Accepting input from the outside world is one of the most dangerous things a system can do. Since type information is lost across system boundaries, systems must perform type-specific input handling routines to recover this information. Adversaries can carefully craft input data to exploit any bugs or vulnerabilities in these routines, thereby causing dangerous memory errors. Including input validation routines in kernels is especially risky. Sensitive memory contents and powerful privileges make kernels a preferred target of attackers. Furthermore, because kernels must process user input, network data, and input from a wide array of peripheral devices, including such input validation schemes is unavoidable. In this thesis we present Automatic Validation of Input Data (AVID), which helps solve the issue of input validation within kernels by automatically generating parser implementations for developer-defined structs. AVID leverages not only the unambiguity guarantees of parsing expression grammars but also the type safety guarantees of Rust. We show how AVID can be used to resolve a manufactured vulnerability in Tock, an operating system written in Rust for embedded systems. Using Rust's procedural macro system, AVID generates parser implementations at compile time based on existing Rust struct definitions. AVID exposes a simple and convenient parser API that validates input and then instantiates structs from the validated input. AVID's simple interface makes it easy for developers to use and to integrate with existing codebases.
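
To make the pattern concrete, below is a hand-written sketch of the kind of code a derive-style macro like AVID's could generate for a struct: input is checked field by field, and the struct is only instantiated from values that passed validation. The Packet type and its rules are hypothetical, not taken from the thesis or from Tock.

```rust
// Hypothetical sketch of the validate-then-instantiate pattern described
// above. In AVID this impl would be emitted at compile time by a
// procedural macro from the struct definition alone; here it is written
// by hand for illustration.
struct Packet {
    version: u8,      // must be 1
    length: u16,      // must match the payload length
    payload: Vec<u8>,
}

#[derive(Debug)]
enum ParseError {
    TooShort,
    BadVersion,
    LengthMismatch,
}

impl Packet {
    fn parse(input: &[u8]) -> Result<Packet, ParseError> {
        if input.len() < 3 {
            return Err(ParseError::TooShort);
        }
        let version = input[0];
        if version != 1 {
            return Err(ParseError::BadVersion);
        }
        let length = u16::from_be_bytes([input[1], input[2]]);
        let payload = &input[3..];
        if payload.len() != length as usize {
            return Err(ParseError::LengthMismatch);
        }
        // Only now do we instantiate: every field has been validated.
        Ok(Packet { version, length, payload: payload.to_vec() })
    }
}

fn main() {
    match Packet::parse(&[1, 0, 2, 0xAB, 0xCD]) {
        Ok(p) => println!("accepted: v{}, {}-byte payload ({} bytes stored)",
                          p.version, p.length, p.payload.len()),
        Err(e) => println!("rejected: {:?}", e),
    }
}
```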