Publications

Towards the next generation operational meteorological radar

Summary

This article summarizes research and risk reduction that will inform acquisition decisions regarding NOAA's future national operational weather radar network. A key alternative being evaluated is polarimetric phased-array radar (PAR). Research indicates PAR can plausibly achieve fast, adaptive volumetric scanning, with associated benefits for severe-weather warning performance. We assess these benefits using storm observations and analyses, observing system simulation experiments, and real radar-data assimilation studies. Changes in the number and/or locations of radars in the future network could improve coverage at low altitude. Analysis of benefits that might be so realized indicates the possibility of additional improvement in severe weather and flash-flood warning performance, with associated reduction in casualties. Simulations are used to evaluate techniques for rapid volumetric scanning and assess data quality characteristics of PAR. Finally, we describe progress in developing methods to compensate for polarimetric variable estimate biases introduced by electronic beam-steering. A research-to-operations (R2O) strategy for the PAR alternative for the WSR-88D replacement network is presented.

Development of a field artificial intelligence triage tool: Confidence in the prediction of shock, transfusion, and definitive surgical therapy in patients with truncal gunshot wounds

Summary

BACKGROUND: In-field triage tools for trauma patients are limited by availability of information, linear risk classification, and a lack of confidence reporting. We therefore set out to develop and test a machine learning algorithm that can overcome these limitations by accurately and confidently making predictions to support in-field triage in the first hours after traumatic injury. METHODS: Using an American College of Surgeons Trauma Quality Improvement Program-derived database of truncal and junctional gunshot wound (GSW) patients, we trained an information-aware Dirichlet deep neural network (field artificial intelligence triage). Using supervised training, field artificial intelligence triage was trained to predict shock and the need for major hemorrhage control procedures or early massive transfusion (MT) using GSW anatomical locations, vital signs, and patient information available in the field. In parallel, a confidence model was developed to predict the true-class probability (scale of 0-1), indicating the likelihood that the prediction made was correct, based on the values and interconnectivity of input variables.
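
As a hedged illustration of the modeling approach described above, and not the authors' implementation, the sketch below shows a Dirichlet output head of the kind used in evidential deep learning: the network emits non-negative per-class evidence, and an uncertainty score falls out of the Dirichlet concentration parameters. The layer sizes, feature count, and class count are assumptions.

    # Sketch only: a Dirichlet (evidential) classification head. The actual
    # field artificial intelligence triage model and its confidence model
    # are not reproduced here; sizes and names below are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DirichletTriageHead(nn.Module):
        def __init__(self, n_features: int, n_classes: int):
            super().__init__()
            self.n_classes = n_classes
            self.net = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(),
                nn.Linear(64, n_classes),
            )

        def forward(self, x):
            evidence = F.softplus(self.net(x))      # non-negative evidence
            alpha = evidence + 1.0                  # Dirichlet concentrations
            strength = alpha.sum(dim=-1, keepdim=True)
            prob = alpha / strength                 # expected class probabilities
            # Vacuity-style uncertainty: approaches 1 when evidence is scarce.
            uncertainty = self.n_classes / strength.squeeze(-1)
            return prob, uncertainty

    # Toy usage: 10 field-observable inputs (e.g., vitals, GSW location flags)
    # and 2 outcomes (e.g., shock vs. no shock).
    head = DirichletTriageHead(n_features=10, n_classes=2)
    prob, unc = head(torch.randn(4, 10))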

Practical principle of least privilege for secure embedded systems

Published in:
2021 IEEE 27th Real-Time and Embedded Technology and Applications Symposium (RTAS), 18-21 May 2021.

Summary

Many embedded systems have evolved from simple bare-metal control systems to highly complex network-connected systems. These systems increasingly demand rich and feature-full operating-system (OS) functionalities. Furthermore, the network connectedness offers attack vectors that require stronger security designs. To that end, this paper defines a prototypical RTOS API called Patina that provides services common in feature-rich OSes (e.g., Linux) but absent in more trustworthy μ-kernel-based systems. Examples of such services include communication channels, timers, event management, and synchronization. Two Patina implementations are presented, one on Composite and the other on seL4, each of which is designed based on the Principle of Least Privilege (PoLP) to increase system security. This paper describes how each of these μ-kernels affects the PoLP-based design, and discusses the security and performance tradeoffs of the two implementations. Results of comprehensive evaluations demonstrate that the PoLP-based implementation of Patina offers comparable or superior performance to Linux while providing heightened isolation.

Geographic source estimation using airborne plant environmental DNA in dust

Summary

Information obtained from the analysis of dust, particularly biological particles such as pollen, plant parts, and fungal spores, has great utility in forensic geolocation. As an alternative to manual microscopic analysis, we developed a pipeline that utilizes the environmental DNA (eDNA) from plants in dust samples to estimate previous sample location(s). The species of plant-derived eDNA within dust samples were identified using metabarcoding, and their geographic distributions were then derived from occurrence records in the USGS Biodiversity in Service of Our Nation (BISON) database. The distributions for all plant species identified in a sample were used to generate a probabilistic estimate of the sample source. With settled dust collected at four U.S. sites over a 15-month period, we demonstrated positive regional geolocation (within 600 km of the collection point) for 47.6% (20 of 42) of the samples analyzed. Attribution accuracy and resolution were dependent on the number of plant species identified in a dust sample, which was greatly affected by the season of collection. In dust samples that yielded a minimum of 20 identified plant species, positive regional attribution improved to 66.7% (16 of 24 samples). Using dust samples collected from 31 different U.S. sites, trace plant eDNA provided relevant regional information on provenance for 32.2% of samples. This demonstrates that analysis of plant eDNA in dust can provide an accurate estimate of regional provenance within the U.S., and relevant forensic information, for a substantial fraction of samples analyzed.
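
As a rough sketch of the core attribution idea, and not the paper's exact pipeline, the code below overlays the occurrence map of every plant species identified in a sample and normalizes the overlap into a probability surface over grid cells; the species occurrence grids are assumed inputs standing in for metabarcoding hits matched to BISON occurrence records.

    # Sketch only: turn per-species occurrence grids into a source-probability
    # surface by counting species overlap per grid cell and normalizing.
    import numpy as np

    def source_probability(species_maps: list) -> np.ndarray:
        """species_maps: same-shape 2-D grids of occurrence density."""
        stack = np.stack(species_maps)                   # (n_species, rows, cols)
        support = (stack > 0).sum(axis=0).astype(float)  # species overlap count
        return support / support.sum()                   # probability per cell

    # Toy example: three species with partially overlapping ranges on a 4x4 grid.
    rng = np.random.default_rng(0)
    maps = [(rng.random((4, 4)) > 0.5).astype(float) for _ in range(3)]
    surface = source_probability(maps)
    best_cell = np.unravel_index(surface.argmax(), surface.shape)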

A cybersecurity moonshot

Published in:
IEEE Security & Privacy, Vol. 19, No. 3, May-June 2021, pp. 8-16.

Summary

Cybersecurity needs radical rethinking to change its current landscape. This article charts a vision for a cybersecurity moonshot based on radical but feasible technologies that can prevent the largest classes of vulnerabilities in modern systems.

PATHATTACK: attacking shortest paths in complex networks

Summary

Shortest paths in complex networks play key roles in many applications. Examples include routing packets in a computer network, routing traffic on a transportation network, and inferring semantic distances between concepts on the World Wide Web. An adversary with the capability to perturb the graph might make the shortest path between two nodes route traffic through advantageous portions of the graph (e.g., a toll road he owns). In this paper, we introduce the Force Path Cut problem, in which there is a specific route the adversary wants to promote by removing a minimum number of edges in the graph. We show that Force Path Cut is NP-complete, but also that it can be recast as an instance of the Weighted Set Cover problem, enabling the use of approximation algorithms. The size of the universe for the set cover problem is potentially factorial in the number of nodes. To overcome this hurdle, we propose the PATHATTACK algorithm, which via constraint generation considers only a small subset of paths: at most 5% of the number of edges in 99% of our experiments. Across a diverse set of synthetic and real networks, the linear programming formulation of Weighted Set Cover yields the optimal solution in over 98% of cases. We also demonstrate a time/cost tradeoff using two approximation algorithms and greedy baseline methods. This work provides a foundation for addressing similar problems and expands the area of adversarial graph mining beyond recent work on node classification and embedding.
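
A minimal greedy baseline in the spirit of Force Path Cut is sketched below; the paper's PATHATTACK instead solves a Weighted Set Cover linear program via constraint generation. The sketch repeatedly finds the current shortest path and, if it is not the path to be promoted, removes that path's cheapest edge not used by the target path (removal cost is assumed to equal edge weight).

    # Sketch only: greedy edge removal to force a chosen s-t path to become
    # the shortest path. Never removes edges on the target path itself, so
    # s and t stay connected.
    import networkx as nx

    def force_path_greedy(G: nx.Graph, target_path: list, s, t) -> set:
        target_edges = set(map(frozenset, zip(target_path, target_path[1:])))
        removed = set()
        while True:
            sp = nx.shortest_path(G, s, t, weight="weight")
            if sp == target_path:
                return removed          # target path is now the shortest
            # Edges on the competing path that the target path does not use.
            candidates = [e for e in map(frozenset, zip(sp, sp[1:]))
                          if e not in target_edges]
            edge = min(candidates,
                       key=lambda e: G.edges[tuple(e)].get("weight", 1))
            G.remove_edge(*tuple(edge))
            removed.add(tuple(edge))

    # Toy usage: promote the heavier of two s-t routes in a 4-node graph.
    G = nx.Graph()
    G.add_weighted_edges_from([(0, 1, 1), (1, 3, 1), (0, 2, 2), (2, 3, 2)])
    cut = force_path_greedy(G, [0, 2, 3], 0, 3)   # removes one edge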

Health-informed policy gradients for multi-agent reinforcement learning

Summary

This paper proposes a definition of system health in the context of multiple agents optimizing a joint reward function. We use this definition as a credit assignment term in a policy gradient algorithm to distinguish the contributions of individual agents to the global reward. The health-informed credit assignment is then extended to a multi-agent variant of the proximal policy optimization algorithm and demonstrated on simple particle environments that have elements of system health, risk-taking, semi-expendable agents, and partial observability. We show significant improvement in learning performance compared to policy gradient methods that do not perform multi-agent credit assignment.
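
As a hedged sketch of the credit-assignment step, and not the paper's derivation, the snippet below scales each agent's policy-gradient contribution by a health-based credit weight rather than by the shared global reward alone; the particular credit normalization and the per-agent health signal are illustrative assumptions.

    # Sketch only: REINFORCE-style update where each agent's gradient is
    # weighted by its share of the system's health change.
    import numpy as np

    def health_informed_pg(logp_grads, global_reward, healths, baseline=0.0):
        """logp_grads: per-agent grad log pi_i(a_i | o_i) vectors.
        healths: per-agent health deltas over the episode (assumed signal)."""
        healths = np.asarray(healths, dtype=float)
        credit = healths / (np.abs(healths).sum() + 1e-8)  # per-agent credit
        advantage = global_reward - baseline
        return [advantage * c * g for c, g in zip(credit, logp_grads)]

    # Toy usage: three agents with scalar gradient stand-ins.
    grads = [np.array([0.1]), np.array([-0.3]), np.array([0.2])]
    updates = health_informed_pg(grads, global_reward=1.5,
                                 healths=[0.9, 0.1, -0.5])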

Principles for evaluation of AI/ML model performance and robustness, revision 1

Summary

The Department of Defense (DoD) has significantly increased its investment in the design, evaluation, and deployment of Artificial Intelligence and Machine Learning (AI/ML) capabilities to address national security needs. While there are numerous AI/ML successes in the academic and commercial sectors, many of these systems have also been shown to be brittle and nonrobust. In a complex and ever-changing national security environment, it is vital that the DoD establish a sound and methodical process to evaluate the performance and robustness of AI/ML models before these new capabilities are deployed to the field. Without an effective evaluation process, the DoD may deploy AI/ML models that are assumed to be effective given limited evaluation metrics but actually have poor performance and robustness on operational data. Poor evaluation practices lead to loss of trust in AI/ML systems by model operators and more frequent, often costly, design updates needed to address the evolving security environment. In contrast, an effective evaluation process can drive the design of more resilient capabilities, flag potential limitations of models before they are deployed, and build operator trust in AI/ML systems. This paper reviews the AI/ML development process, highlights common best practices for AI/ML model evaluation, and makes the following recommendations to DoD evaluators to ensure the deployment of robust AI/ML capabilities for national security needs:

- Develop testing datasets with sufficient variation and number of samples to effectively measure the expected performance of the AI/ML model on future (unseen) data once deployed.
- Maintain separation between data used for design and evaluation (i.e., the test data is not used to design the AI/ML model or train its parameters) in order to ensure an honest and unbiased assessment of the model's capability.
- Evaluate performance given small perturbations and corruptions to data inputs to assess the smoothness of the AI/ML model and identify potential vulnerabilities.
- Evaluate performance on samples from data distributions that are shifted from the assumed distribution used to design the AI/ML model, to assess how the model may perform on operational data that differs from the training data.

By following the recommendations for evaluation presented in this paper, the DoD can fully take advantage of the AI/ML revolution, delivering robust capabilities that maintain operational feasibility over longer periods of time and increasing warfighter confidence in AI/ML systems.
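
As a minimal, assumption-laden illustration of the second and third recommendations, the snippet below keeps a held-out test set that is never used during training and then re-scores the model on noise-perturbed copies of the test inputs; the dataset, model, and noise scales are arbitrary stand-ins.

    # Sketch only: strict train/test separation plus a simple perturbation
    # sweep to probe model smoothness.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    # Test data is never used to design or train the model.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    clean_acc = model.score(X_te, y_te)

    # Evaluate under small input perturbations of increasing magnitude.
    rng = np.random.default_rng(0)
    for sigma in (0.01, 0.1, 0.5):
        noisy_acc = model.score(X_te + rng.normal(0, sigma, X_te.shape), y_te)
        print(f"sigma={sigma}: clean={clean_acc:.3f} perturbed={noisy_acc:.3f}")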

Mobile capabilities for micro-meteorological predictions: FY20 Homeland Protection and Air Traffic Control Technical Investment Program

Published in:
MIT Lincoln Laboratory Report TIP-146

Summary

Existing operational numerical weather forecast systems are geographically too coarse and insufficiently accurate to support future needs in applications such as Advanced Air Mobility, Unmanned Aerial Systems, and wildfire forecasting. This is especially true with respect to wind forecasts. Principal factors contributing to this are the lack of observation data within the atmospheric boundary layer and numerical forecast models that operate on low-resolution grids. This project endeavored to address both of these issues: first, by developing and demonstrating specially equipped fixed-wing drones to collect atmospheric data within the boundary layer, and second, by creating a high-resolution Weather Research and Forecasting (WRF) model running on the Lincoln Laboratory Supercomputing Center. Some success was achieved in the development and flight testing of the specialized drones. Significant success was achieved in developing the high-resolution forecasting system and demonstrating the feasibility of ingesting atmospheric observations from small airborne platforms.

Advanced Air Mobility assessment framework: FY20 Homeland Protection and Air Traffic Control Technical Investment Program

Published in:
MIT Lincoln Laboratory Report TIP-145

Summary

Advanced Air Mobility (AAM) encompasses emerging aviation technologies that transport people and cargo between local, regional, or urban locations that are currently underserved by aviation and other transportation modalities. The disruptive nature of these technologies has pushed industry, academia, and governments to devote significant investments to understanding their impact on airspace risk, operational procedures, and passengers. A flexible framework was designed to assess the operational viability of these technologies and their sensitivity to a variety of assumptions. This framework was used to simulate an initial AAM implementation scenario in New York City, created by replacing a portion of NYC taxi requests with electric vertical takeoff and landing (eVTOL) vehicles. The framework was then used to assess the sensitivity of this scenario to a variety of system assumptions.