Publications

TCAS II and ACAS Xa traffic and resolution advisories during interval management paired approach operations

Published in:
2020 AIAA/IEEE 39th Digital Avionics Systems Conf., DASC, 11-15 October 2020.

Summary

Interval Management (IM) is an FAA NextGen Automatic Dependent Surveillance-Broadcast (ADS-B) In application designed to decrease the variability in spacing between aircraft, thereby increasing the efficiency of the National Airspace System (NAS). One application within IM is Paired Approach (PA). In a PA operation, the lead aircraft and trail aircraft are both established on final approach to dependent parallel runways with runway centerline spacing less than 2500 feet. The trail aircraft follows speed guidance from the IM avionics to achieve and maintain a desired spacing behind the lead aircraft. PA operations are expected to require a new separation standard that allows the aircraft to be spaced more closely than current dependent parallel separation standards permit. Because the aircraft are so closely spaced, the behavior of an airborne collision avoidance system, such as TCAS II or ACAS Xa, must be considered for a new operation such as PA. This analysis quantified traffic advisories (TAs) and resolution advisories (RAs) using TCAS II Change 7.1 and ACAS Xa software with simulated IM PA operations. The results show no RAs using either TCAS II Change 7.1 or ACAS Xa, negligible TAs using TCAS II Change 7.1, and acceptable numbers of TAs using ACAS Xa software during simulated PA operations.

Toward distributed control for reconfigurable robust microgrids

Published in:
2020 IEEE Energy Conversion Congress and Exposition, ECCE, 11-15 October 2020.

Summary

Microgrids have been seen as a good solution for providing power to forward-deployed military forces. However, the compatibility, robustness, and stability of current solutions are often questionable. To overcome some of these problems, we first propose a theoretically sound modeling method that defines common microgrid component interfaces using power and rate of change of power. Using this modeling approach, we propose a multi-layered distributed control: the higher control layer participates in dynamic power management that ensures acceptable voltage, while the lower layer stabilizes frequency by regulating the dynamics to the power determined by the higher layer. Numerical and hardware tests are conducted to evaluate the effectiveness of the proposed control.

Image processing pipeline for liver fibrosis classification using ultrasound shear wave elastography

Published in:
Ultrasound in Med. & Biol., Vol. 46, No. 10, October 2020, pp. 2667-2676.

Summary

The purpose of this study was to develop an automated method for classifying liver fibrosis stage ≥F2 based on ultrasound shear wave elastography (SWE) and to assess the system's performance in comparison with a reference manual approach. The reference approach consists of manually selecting a region of interest from each of eight or more SWE images, computing the mean tissue stiffness within each of the regions of interest and computing a resulting stiffness value as the median of the means. The 527-subject database consisted of 5526 SWE images and pathologist-scored biopsies, with data collected from a single system at a single site. The automated method integrates three modules that assess SWE image quality, select a region of interest from each SWE measurement and perform machine learning-based, multi-image SWE classification for fibrosis stage ≥F2. Several classification methods were developed and tested using fivefold cross-validation with training, validation and test sets partitioned by subject. Performance metrics were area under the receiver operating characteristic curve (AUROC), specificity at 95% sensitivity and number of SWE images required. The final automated method yielded an AUROC of 0.93 (95% confidence interval: 0.90-0.94) versus 0.69 (95% confidence interval: 0.65-0.72) for the reference method, 71% specificity at 95% sensitivity versus 5% and four images per decision versus eight or more. In conclusion, the automated method reported in this study significantly improved the accuracy of ≥F2 classification of SWE measurements and reduced the number of measurements needed, which has the potential to streamline the clinical workflow.
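The reference manual approach described above (mean stiffness within each region of interest, then the median of those means) can be sketched in a few lines. This is an illustrative reconstruction, not the study's code; the function name and the example stiffness values are made up:

```python
from statistics import mean, median

def reference_stiffness(roi_values_per_image):
    """Reference stiffness estimate from SWE measurements.

    roi_values_per_image: one list of per-pixel stiffness values (kPa)
    for the manually selected region of interest in each SWE image
    (the study used eight or more images per subject).
    """
    per_image_means = [mean(vals) for vals in roi_values_per_image]
    return median(per_image_means)

# Illustrative values only (kPa), not data from the study:
rois = [[6.0, 6.4], [7.1, 6.9], [8.0, 8.2], [5.5, 5.9],
        [6.6, 6.8], [7.3, 7.5], [6.1, 6.3], [9.0, 8.8]]
print(reference_stiffness(rois))  # median of the eight per-image means
```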

Weather radar network benefit model for nontornadic thunderstorm wind casualty cost reduction

Published in:
Wea. Climate Soc., Vol. 12, No. 4, October 2020, pp. 789-804.

Summary

An econometric geospatial benefit model for nontornadic thunderstorm wind casualty reduction is developed for meteorological radar network planning. Regression analyses on 22 years (1998–2019) of storm event and warning data show, likely for the first time, a clear dependence of nontornadic severe thunderstorm warning performance on radar coverage. Furthermore, nontornadic thunderstorm wind casualty rates are observed to be negatively correlated with better warning performance. In combination, these statistical relationships form the basis of a cost model that can be differenced between radar network configurations to generate geospatial benefit density maps. This model, applied to the current contiguous U.S. weather radar network, yields a benefit estimate of $207 million (M) yr^-1 relative to no radar coverage at all. The remaining benefit pool with respect to enhanced radar coverage and scan update rate is about $36M yr^-1. Aggregating these nontornadic thunderstorm wind results with estimates from earlier tornado and flash flood cost reduction models yields a total benefit of $1.12 billion yr^-1 for the present-day radars and a remaining radar-based benefit pool of $778M yr^-1.

A multi-task LSTM framework for improved early sepsis prediction

Published in:
Proc. Artificial Intelligence in Medicine, AIME, 2020, pp. 49-58.

Summary

Early detection of sepsis, a high-mortality clinical condition, is important for improving patient outcomes. The performance of conventional deep learning methods degrades quickly as predictions are made several hours prior to the clinical definition. We adopt recurrent neural networks (RNNs) to improve early prediction of the onset of sepsis using time series of physiological measurements. Furthermore, physiological data are often missing and imputation is necessary. The absence of data can arise from decisions made by clinical professionals, and those decisions carry information. Incorporating the missing data patterns into the learning process can further guide how much trust to place on imputed values. A new multi-task LSTM model is proposed that takes informative missingness into account during training and effectively attributes trust to temporal measurements. Experimental results demonstrate that our method outperforms conventional CNN and LSTM models on the PhysioNet-2019 CinC early sepsis prediction challenge in terms of area under the receiver operating characteristic curve and the precision-recall curve, and further improves the calibration of prediction scores.
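One common way to expose missingness patterns to a sequence model is to feed, alongside each imputed value, an observed/missing mask and the time since the last observation (in the spirit of GRU-D-style features). The sketch below is illustrative preprocessing under that assumption, not the paper's exact featurization:

```python
def missingness_features(series):
    """Turn a time series with gaps (None) into model inputs that
    carry the missingness pattern: (last-observation-carried-forward
    value, observed mask, steps since last observation).

    A leading gap falls back to a 0.0 placeholder value.
    """
    out, last, delta = [], 0.0, 0
    for v in series:
        if v is None:
            delta += 1
            out.append((last, 0, delta))
        else:
            last, delta = v, 0
            out.append((v, 1, 0))
    return out

hr = [80.0, None, None, 92.0, None]   # hourly heart rate, gaps = None
print(missingness_features(hr))
# → [(80.0, 1, 0), (80.0, 0, 1), (80.0, 0, 2), (92.0, 1, 0), (92.0, 0, 1)]
```

The mask and delta columns let the model learn how much to trust a carried-forward value, rather than treating imputed and observed measurements identically.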

Enhanced parallel simulation for ACAS X development

Published in:
2020 IEEE High Performance Extreme Computing Conf., HPEC, 22-24 September 2020.

Summary

ACAS X is the next-generation airborne collision avoidance system intended to meet the demands of the rapidly evolving U.S. National Airspace System (NAS). The collision avoidance safety and operational suitability of the system are optimized and continuously evaluated by simulating billions of characteristic aircraft encounters in a fast-time Monte Carlo environment. There is therefore an inherent computational cost associated with each ACAS X design iteration, and parallelization of the simulations is necessary to keep up with rapid design cycles. This work describes an effort to profile and enhance the parallel computing infrastructure deployed on the computing resources offered by the Lincoln Laboratory Supercomputing Center. The approach to large-scale parallelization of our fast-time airspace encounter simulation tool is presented along with corresponding parallel profile data collected on different kinds of compute nodes. A simple stochastic model for distributed simulation is also presented to inform optimal work batching for improved simulation efficiency. The paper concludes with a discussion on how this high-performance parallel simulation method enables the rapid safety-critical design of ACAS X in a fast-paced iterative design process.
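The work-batching trade-off behind such a model (per-batch dispatch overhead versus end-of-run load imbalance) can be illustrated with a toy simulation. This is a generic sketch, not the paper's stochastic model, and all parameter values are made up:

```python
import random

def simulated_makespan(n_tasks, n_workers, batch_size, overhead, rng):
    """Simulate dispatching n_tasks in fixed-size batches to workers.

    Each batch costs a fixed dispatch overhead plus the sum of its
    (random, uniform 0-1) task times; the makespan is the total time
    of the busiest worker.
    """
    batches = [batch_size] * (n_tasks // batch_size)
    if n_tasks % batch_size:
        batches.append(n_tasks % batch_size)
    workers = [0.0] * n_workers
    for b in batches:
        # Greedy dispatch: the next batch goes to the least-loaded worker.
        i = workers.index(min(workers))
        workers[i] += overhead + sum(rng.random() for _ in range(b))
    return max(workers)

rng = random.Random(0)
for b in (1, 10, 100, 1000):
    print(b, round(simulated_makespan(10_000, 16, b, 0.5, rng), 1))
```

Very small batches pay the dispatch overhead thousands of times; very large batches leave workers idle at the end of the run, so an intermediate batch size minimizes the makespan.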

Processing of crowdsourced observations of aircraft in a high performance computing environment

Published in:
2020 IEEE High Performance Extreme Computing Conf., HPEC, 22-24 September 2020.

Summary

As unmanned aircraft systems (UASs) continue to integrate into the U.S. National Airspace System (NAS), there is a need to quantify the risk of airborne collisions between unmanned and manned aircraft to support regulation and standards development. Both regulators and standards-developing organizations have made extensive use of Monte Carlo collision risk analysis simulations using probabilistic models of aircraft flight. We previously determined that the observations of manned aircraft by the OpenSky Network, a community network of ground-based sensors, are appropriate for developing models of the low-altitude environment. This work overviews the high performance computing workflow designed and deployed on the Lincoln Laboratory Supercomputing Center to process 3.9 billion observations of aircraft. We then trained the aircraft models using more than 250,000 flight hours at 5,000 feet above ground level or below. A key feature of the workflow is that all the aircraft observations and supporting datasets are available as open-source technologies or have been released to the public domain.

GraphChallenge.org triangle counting performance [e-print]

Summary

The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of preparsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. The triangle counting component of GraphChallenge.org tests the performance of graph processing systems to count all the triangles in a graph and exercises key graph operations found in many graph algorithms. In 2017, 2018, and 2019 many triangle counting submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers of these submissions. These submissions show that their state-of-the-art triangle counting execution time, Ttri, is a strong function of the number of edges in the graph, Ne, which improved significantly from 2017 (Ttri ≈ (Ne/10^8)^(4/3)) to 2018 (Ttri ≈ Ne/10^9) and remained comparable from 2018 to 2019. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs.
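As an illustration of the triangle counting task itself (a minimal serial baseline, not any submission's optimized implementation), each triangle can be found by intersecting the neighbor sets of an edge's endpoints:

```python
from itertools import combinations

def count_triangles(edges):
    """Count triangles in a simple undirected graph.

    edges: iterable of unique (u, v) pairs with no self-loops.
    Every triangle {a, b, c} is discovered once per edge, so the sum
    of common-neighbor counts over all edges is three times the answer.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = sum(len(adj[u] & adj[v]) for u, v in edges)
    return total // 3

# K4 (the complete graph on 4 vertices) contains 4 triangles:
k4 = list(combinations(range(4), 2))
print(count_triangles(k4))  # → 4
```

High-performing submissions implement the same mathematically defined computation, but with tuned sparse linear algebra or set-intersection kernels rather than Python sets.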

GraphChallenge.org sparse deep neural network performance [e-print]

Summary

The MIT/IEEE/Amazon GraphChallenge.org encourages community approaches to developing new solutions for analyzing graphs and sparse data. Sparse AI analytics present unique scalability difficulties. The Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems. The sparse DNN challenge is based on a mathematically well-defined DNN inference computation and can be implemented in any programming environment. In 2019 several sparse DNN challenge submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers of these submissions. These submissions show that their state-of-the-art sparse DNN execution time, TDNN, is a strong function of the number of DNN operations performed, Nop. The sparse DNN challenge provides a clear picture of current sparse DNN systems and underscores the need for new innovations to achieve high performance on very large sparse DNNs.

Fast training of deep neural networks robust to adversarial perturbations

Published in:
2020 IEEE High Performance Extreme Computing Conf., HPEC, 22-24 September 2020.

Summary

Deep neural networks are capable of training fast and generalizing well within many domains. Despite their promising performance, deep networks have shown sensitivities to perturbations of their inputs (e.g., adversarial examples) and their learned feature representations are often difficult to interpret, raising concerns about their true capability and trustworthiness. Recent work in adversarial training, a form of robust optimization in which the model is optimized against adversarial examples, demonstrates the ability to reduce sensitivity to perturbations and yield feature representations that are more interpretable. Adversarial training, however, comes with an increased computational cost over that of standard (i.e., nonrobust) training, rendering it impractical for use in large-scale problems. Recent work suggests that a fast approximation to adversarial training shows promise for reducing training time and maintaining robustness in the presence of perturbations bounded by the infinity norm. In this work, we demonstrate that this approach extends to the Euclidean norm and preserves the human-aligned feature representations that are common for robust models. Additionally, we show that using a distributed training scheme can further reduce the time to train robust deep networks. Fast adversarial training is a promising approach that will provide increased security and explainability in machine learning applications for which robust optimization was previously thought to be impractical.
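Fast approximations to adversarial training typically generate the adversarial example with a single gradient step (FGSM-style): perturb the input by eps times the sign of the loss gradient with respect to the input, which is the worst case within an infinity-norm ball. A minimal sketch on a logistic-regression model (not the paper's code; the model and values are hypothetical):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step adversarial example for logistic regression.

    The gradient of the cross-entropy loss with respect to the input
    is (p - y) * w, so eps * sign(gradient) is the loss-maximizing
    perturbation within an infinity-norm ball of radius eps.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0          # correctly classified positive example
x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
print(x_adv)  # → [0.7, 0.8], pushed toward the decision boundary
```

Adversarial training then minimizes the loss on these perturbed inputs instead of (or alongside) the clean ones; for the Euclidean-norm case the sign step would be replaced by the gradient normalized to length eps.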