Publications

Geospatial QPE accuracy dependence on weather radar network configurations

Published in:
J. Appl. Meteor. Climatol., Vol. 59, No. 1, 2020, pp. 1773-1792.

Summary

The relatively low density of weather radar networks can lead to low-altitude coverage gaps. As existing networks are evaluated for gap-fillers and new networks are designed, the benefits of low-altitude coverage must be assessed quantitatively. This study takes a regression approach to modeling quantitative precipitation estimation (QPE) differences based on network density, antenna aperture, and polarimetric bias. Thousands of cases from the warm-season months of May–August 2015–2017 are processed using both the specific attenuation [R(A)] and reflectivity-differential reflectivity [R(Z,ZDR)] QPE methods and are compared against Automated Surface Observing System (ASOS) rain gauge data. QPE errors are quantified based on beam height, cross-radial resolution, added polarimetric bias, and observed rainfall rate. The collected data are used to construct a support vector machine regression model that is applied to the current WSR-88D network for holistic error quantification. An analysis of the effects of polarimetric bias on flash-flood rainfall rates is presented. Rainfall rates based on 2-year/1-hour return periods are used for a CONUS-wide analysis of QPE errors in extreme rainfall situations. These errors are then requantified using previously proposed network design scenarios in which additional radars provide enhanced estimation capabilities. Finally, a gap-filling scenario utilizing the QPE error model, flash-flood rainfall rates, population density, and potential additional WSR-88D sites is presented, exposing the highest-benefit coverage holes for augmenting the WSR-88D network (or a future network) with respect to QPE performance.
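The regression step described above can be sketched with support vector regression, as in scikit-learn. The feature set mirrors the error predictors named in the abstract (beam height, cross-radial resolution, polarimetric bias, rainfall rate), but the synthetic data, target relation, and hyperparameters below are illustrative placeholders, not the study's dataset or tuned model.

```python
# Illustrative sketch of an SVM-regression QPE error model.
# All data here are synthetic placeholders, not the study's dataset.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: beam height (km), cross-radial resolution (km),
# added polarimetric bias (dB), observed rainfall rate (mm/h).
X = np.column_stack([
    rng.uniform(0.2, 4.0, n),
    rng.uniform(0.25, 4.0, n),
    rng.uniform(0.0, 0.5, n),
    rng.uniform(1.0, 80.0, n),
])
# Toy target: QPE error grows with beam height and polarimetric bias.
y = 0.1 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0.0, 0.02, n)

# Standardize features, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
pred = model.predict(X[:5])  # predicted QPE error for the first five cases
```

A model of this shape can then be evaluated at every grid point of a radar network (each point supplying its own beam height and resolution) to produce the holistic, geospatial error maps the abstract describes.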

Automated posterior interval evaluation for inference in probabilistic programming

Published in:
Intl. Conf. on Probabilistic Programming, PROBPROG, 22 October 2020.

Summary

In probabilistic inference, credible intervals constructed from posterior samples provide ranges of likely values for continuous parameters of interest. Intuitively, an inference procedure is optimal if it produces the most precise posterior intervals that cover the true parameter value with the expected frequency in repeated experiments. We present theory and methods for automating posterior interval evaluation of inference performance in probabilistic programming using two metrics: (1) truth coverage and (2) the ratio of empirical to ideal interval widths. Demonstrating with inference on popular regression and state-space models, we show how the metrics provide effective comparisons between different inference procedures and capture the effects of collinearity and model misspecification. Overall, we argue that such automated interval evaluation can accelerate the robust design and comparison of probabilistic inference programs by directly diagnosing how accurately and precisely they estimate parameters of interest.
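The two metrics can be sketched directly from posterior samples over repeated simulated experiments with known truth. This is a minimal illustration of the idea, not the paper's implementation; the conjugate-normal setup and variable names are assumptions made for the example.

```python
# Sketch of the two interval-evaluation metrics: truth coverage and the
# ratio of empirical to ideal credible-interval widths.
import numpy as np

def interval_metrics(posterior_samples, true_values, ideal_width, level=0.9):
    """Return (coverage, mean width ratio) for central credible intervals.

    posterior_samples: (n_experiments, n_draws) posterior draws
    true_values:       (n_experiments,) known parameter values
    ideal_width:       interval width an ideal procedure would produce
    """
    lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
    lo = np.quantile(posterior_samples, lo_q, axis=1)
    hi = np.quantile(posterior_samples, hi_q, axis=1)
    coverage = np.mean((true_values >= lo) & (true_values <= hi))
    width_ratio = np.mean((hi - lo) / ideal_width)
    return coverage, width_ratio

# Toy setup: theta = 0; observe y ~ N(theta, 1); posterior is N(y, 1)
# (flat prior), so 90% intervals should cover theta 90% of the time.
rng = np.random.default_rng(1)
truth = np.zeros(500)
y = rng.normal(truth, 1.0)
samples = rng.normal(y[:, None], 1.0, size=(500, 4000))
# Ideal 90% central interval width for a unit-variance posterior: 2 * 1.6449.
cov, ratio = interval_metrics(samples, truth, ideal_width=3.2897)
```

A well-calibrated, efficient procedure lands near coverage 0.9 and width ratio 1.0; under-coverage or inflated width ratios flag the miscalibration and imprecision the paper's diagnostics are designed to surface.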

Failure prediction by confidence estimation of uncertainty-aware Dirichlet networks

Published in:
https://arxiv.org/abs/2010.09865

Summary

Reliably assessing model confidence in deep learning and predicting errors likely to be made are key elements in providing safety for model deployment, in particular for applications with dire consequences. In this paper, it is first shown that uncertainty-aware deep Dirichlet neural networks provide an improved separation between the confidence of correct and incorrect predictions in the true class probability (TCP) metric. Second, as the true class is unknown at test time, a new criterion is proposed for learning the true class probability by matching prediction confidence scores while taking imbalance and TCP constraints into account for correct predictions and failures. Experimental results show our method improves upon the maximum class probability (MCP) baseline and predicted TCP for standard networks on several image classification tasks with various network architectures.
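A small numeric illustration may help fix the two confidence scores compared above: the maximum class probability (MCP) is the largest softmax output, while the true class probability (TCP) is the softmax output assigned to the ground-truth label. The logits and labels below are toy values, not from the paper's networks.

```python
# MCP vs. TCP on toy softmax outputs.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 0.1, -1.0],   # confident prediction
                   [0.2, 0.1,  0.0]])  # uncertain prediction
labels = np.array([0, 2])

probs = softmax(logits)
mcp = probs.max(axis=1)                        # available at test time
tcp = probs[np.arange(len(labels)), labels]    # needs the true label
correct = probs.argmax(axis=1) == labels       # failure when False
```

Here the second example is a failure (argmax 0, label 2), and its TCP falls below its MCP, which is why TCP separates failures from correct predictions better than MCP; since the true label is unavailable at test time, the paper's criterion learns to predict TCP from the input.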

TCAS II and ACAS Xa traffic and resolution advisories during interval management paired approach operations

Published in:
2020 AIAA/IEEE 39th Digital Avionics Systems Conf., DASC, 11-15 October 2020.

Summary

Interval Management (IM) is an FAA NextGen Automatic Dependent Surveillance-Broadcast (ADS-B) In application designed to decrease the variability in spacing between aircraft, thereby increasing the efficiency of the National Airspace System (NAS). One application within IM is Paired Approach (PA). In a PA operation, the lead aircraft and trail aircraft are both established on final approach to dependent parallel runways with runway centerline spacing of less than 2500 feet. The trail aircraft follows speed guidance from the IM avionics to achieve and maintain a desired spacing behind the lead aircraft. PA operations are expected to require a new separation standard that allows the aircraft to be spaced more closely than current dependent parallel separation standards permit. Because the aircraft are so closely spaced, the behavior of an airborne collision avoidance system, such as TCAS II or ACAS Xa, must be considered during a new operation such as PA. This analysis quantified traffic advisories (TAs) and resolution advisories (RAs) using TCAS II Change 7.1 and ACAS Xa software with simulated IM PA operations. The results show no RAs using either TCAS II Change 7.1 or ACAS Xa, negligible TAs using TCAS II Change 7.1, and acceptable numbers of TAs using ACAS Xa during simulated PA operations.

Toward distributed control for reconfigurable robust microgrids

Published in:
2020 IEEE Energy Conversion Congress and Exposition, ECCE, 11-15 October 2020.

Summary

Microgrids are seen as a promising solution for providing power to forward-deployed military forces, but the compatibility, robustness, and stability of current solutions are often questionable. To address these problems, we first propose a theoretically sound modeling method that defines common microgrid component interfaces in terms of power and rate of change of power. Building on this modeling approach, we propose a multi-layered distributed control: the higher control layer performs dynamic power management to ensure acceptable voltage, while the lower layer stabilizes frequency by regulating the dynamics to the power determined by the higher layer. Numerical and hardware tests are conducted to evaluate the effectiveness of the proposed control.

Image processing pipeline for liver fibrosis classification using ultrasound shear wave elastography

Published in:
Ultrasound in Med. & Biol., Vol. 46, No. 10, October 2020, pp. 2667-2676.

Summary

The purpose of this study was to develop an automated method for classifying liver fibrosis stage ≥F2 based on ultrasound shear wave elastography (SWE) and to assess the system's performance in comparison with a reference manual approach. The reference approach consists of manually selecting a region of interest from each of eight or more SWE images, computing the mean tissue stiffness within each of the regions of interest, and computing a resulting stiffness value as the median of the means. The 527-subject database consisted of 5526 SWE images and pathologist-scored biopsies, with data collected from a single system at a single site. The automated method integrates three modules that assess SWE image quality, select a region of interest from each SWE measurement, and perform machine learning-based, multi-image SWE classification for fibrosis stage ≥F2. Several classification methods were developed and tested using fivefold cross-validation with training, validation, and test sets partitioned by subject. Performance metrics were area under the receiver operating characteristic curve (AUROC), specificity at 95% sensitivity, and number of SWE images required. The final automated method yielded an AUROC of 0.93 (95% confidence interval: 0.90-0.94) versus 0.69 (95% confidence interval: 0.65-0.72) for the reference method, 71% specificity at 95% sensitivity versus 5%, and four images per decision versus eight or more. In conclusion, the automated method reported in this study significantly improved the accuracy of ≥F2 classification from SWE measurements and reduced the number of measurements needed, which has the potential to streamline clinical workflow.
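The reference manual estimate described above reduces to a median-of-means computation over per-image regions of interest. The sketch below illustrates just that baseline step; the ROI arrays and stiffness values are synthetic placeholders, not clinical data.

```python
# Reference manual stiffness estimate: mean stiffness within each ROI,
# then the median of those means. ROI values are synthetic placeholders.
import numpy as np

def reference_stiffness(roi_stiffness_maps):
    """roi_stiffness_maps: list of per-image ROI arrays, in kPa.

    Returns the median of the per-ROI mean stiffness values.
    """
    means = [float(np.mean(roi)) for roi in roi_stiffness_maps]
    return float(np.median(means))

# Eight images (the minimum the reference approach uses), uniform ROIs here.
rois = [np.full((4, 4), v) for v in [6.0, 7.5, 8.0, 9.0, 7.0, 6.5, 8.5, 7.2]]
stiffness = reference_stiffness(rois)  # median of the eight ROI means
```

The study's automated pipeline replaces the manual pieces of this procedure: a quality module filters images, an ROI module selects the region, and a learned classifier maps multiple SWE measurements to a ≥F2 decision.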

Weather radar network benefit model for nontornadic thunderstorm wind casualty cost reduction

Published in:
Wea. Climate Soc., Vol. 12, No. 4, October 2020, pp. 789-804.

Summary

An econometric geospatial benefit model for nontornadic thunderstorm wind casualty reduction is developed for meteorological radar network planning. Regression analyses on 22 years (1998–2019) of storm event and warning data show, likely for the first time, a clear dependence of nontornadic severe thunderstorm warning performance on radar coverage. Furthermore, nontornadic thunderstorm wind casualty rates are observed to be negatively correlated with better warning performance. In combination, these statistical relationships form the basis of a cost model that can be differenced between radar network configurations to generate geospatial benefit density maps. This model, applied to the current contiguous U.S. weather radar network, yields a benefit estimate of $207 million (M) yr^-1 relative to no radar coverage at all. The remaining benefit pool with respect to enhanced radar coverage and scan update rate is about $36M yr^-1. Aggregating these nontornadic thunderstorm wind results with estimates from earlier tornado and flash flood cost reduction models yields a total benefit of $1.12 billion yr^-1 for the present-day radars and a remaining radar-based benefit pool of $778M yr^-1.

A multi-task LSTM framework for improved early sepsis prediction

Summary

Early detection of sepsis, a high-mortality clinical condition, is important for improving patient outcomes. The performance of conventional deep learning methods degrades quickly as predictions are made several hours prior to the clinical definition of onset. We adopt recurrent neural networks (RNNs) to improve early prediction of sepsis onset from time series of physiological measurements. Physiological data are often missing, so imputation is necessary; moreover, absences often arise from decisions made by clinicians and therefore themselves carry information. Incorporating these missing-data patterns into the learning process can further guide how much trust to place on imputed values. We propose a new multi-task LSTM model that takes informative missingness into account during training and effectively attributes trust to temporal measurements. Experimental results demonstrate that our method outperforms conventional CNN and LSTM models on the PhysioNet-2019 CinC early sepsis prediction challenge in terms of area under the receiver operating characteristic and precision-recall curves, and further improves the calibration of prediction scores.
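One common way to expose informative missingness to a recurrent model is to pair each imputed series with a binary observation mask. The sketch below shows that preprocessing idea under forward-fill imputation; the variable names and the choice of forward-fill are illustrative assumptions, not necessarily the paper's exact scheme.

```python
# Informative missingness as a model input: concatenate each physiological
# time series with a binary mask marking observed values, after imputing
# gaps by forward-fill. (Illustrative preprocessing, not the paper's code.)
import numpy as np

def mask_and_impute(series):
    """series: (timesteps, features) with NaN for missing values.

    Returns (imputed, mask), where mask is 1.0 for observed entries.
    """
    mask = (~np.isnan(series)).astype(float)
    imputed = series.copy()
    for j in range(series.shape[1]):
        last = 0.0  # fall back to 0 before the first observation
        for t in range(series.shape[0]):
            if np.isnan(imputed[t, j]):
                imputed[t, j] = last
            else:
                last = imputed[t, j]
    return imputed, mask

# Toy vitals: heart rate and temperature over three hourly timesteps.
x = np.array([[80.0, np.nan],
              [np.nan, 36.6],
              [82.0, np.nan]])
imputed, mask = mask_and_impute(x)
model_input = np.concatenate([imputed, mask], axis=1)  # fed to the LSTM
```

Feeding the mask alongside the values lets the network learn when an imputed measurement deserves less trust, which is the intuition behind the multi-task model's handling of informative missingness.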

GraphChallenge.org triangle counting performance [e-print]

Summary

The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of preparsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. The triangle counting component of GraphChallenge.org tests the performance of graph processing systems to count all the triangles in a graph and exercises key graph operations found in many graph algorithms. In 2017, 2018, and 2019 many triangle counting submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers of these submissions. These submissions show that their state-of-the-art triangle counting execution time, T_tri, is a strong function of the number of edges in the graph, N_e, which improved significantly from 2017 (T_tri ≈ (N_e/10^8)^(4/3)) to 2018 (T_tri ≈ N_e/10^9) and remained comparable from 2018 to 2019. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs.
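The reported fits (2017: T_tri ≈ (N_e/10^8)^(4/3); 2018/2019: T_tri ≈ N_e/10^9, with times in seconds and N_e the number of edges) can be plugged in directly to see the scale of the year-over-year improvement. The 10-billion-edge example graph below is an arbitrary illustration.

```python
# Evaluate the Graph Challenge triangle-counting scaling trends.
def t_tri_2017(num_edges):
    """2017 best-performer fit: T_tri ~ (N_e / 1e8)^(4/3) seconds."""
    return (num_edges / 1e8) ** (4.0 / 3.0)

def t_tri_2018(num_edges):
    """2018/2019 best-performer fit: T_tri ~ N_e / 1e9 seconds."""
    return num_edges / 1e9

edges = 1e10  # a 10-billion-edge graph
speedup = t_tri_2017(edges) / t_tri_2018(edges)
```

At this size the 2018 fit predicts roughly 10 seconds versus several hundred for the 2017 fit, and because the 2017 exponent exceeds 1, the gap widens further as graphs grow.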

GraphChallenge.org sparse deep neural network performance [e-print]

Summary

The MIT/IEEE/Amazon GraphChallenge.org encourages community approaches to developing new solutions for analyzing graphs and sparse data. Sparse AI analytics present unique scalability difficulties. The Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems. The sparse DNN challenge is based on a mathematically well-defined DNN inference computation and can be implemented in any programming environment. In 2019 several sparse DNN challenge submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers of these submissions. These submissions show that their state-of-the-art sparse DNN execution time, T_DNN, is a strong function of the number of DNN operations performed, N_op. The sparse DNN challenge provides a clear picture of current sparse DNN systems and underscores the need for new innovations to achieve high performance on very large sparse DNNs.