Publications

Seasonal Inhomogeneous Nonconsecutive Arrival Process Search and Evaluation

Published in:
International Conference on Artificial Intelligence and Statistics, 26-28 August 2020 [submitted]

Summary

Seasonal data may display different distributions throughout the period of seasonality. We fit this type of model by determining the appropriate change points of the distribution and fitting parameters to each interval. This offers the added benefit of searching for disjoint regimes, which may denote the same distribution occurring nonconsecutively. Our algorithm outperforms SARIMA for prediction.
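The abstract gives no implementation details; below is a minimal sketch of one way such a change-point search could look, assuming Poisson-distributed arrival counts and a known seasonal period. The function names, penalty choice, and Poisson assumption are illustrative, not taken from the paper.

```python
# Minimal sketch (not the paper's algorithm): dynamic-programming change-point
# segmentation of a periodic arrival profile, assuming Poisson counts and a
# known seasonal period. Names and the penalty choice are illustrative.
import numpy as np

def fold_counts(counts, period):
    """Stack the series into rows of one seasonal period each."""
    n = (len(counts) // period) * period
    return np.asarray(counts[:n], dtype=float).reshape(-1, period)

def segment_cost(folded, start, end):
    """Negative Poisson log-likelihood (up to a constant) of segment [start, end)."""
    seg = folded[:, start:end]
    lam = max(seg.mean(), 1e-9)
    return -(seg.sum() * np.log(lam) - lam * seg.size)

def fit_changepoints(counts, period, penalty=None):
    """Search for within-period change points by dynamic programming."""
    folded = fold_counts(counts, period)
    if penalty is None:
        penalty = 0.5 * np.log(folded.size)      # BIC-style penalty per segment
    best = np.full(period + 1, np.inf)
    prev = np.zeros(period + 1, dtype=int)
    best[0] = 0.0
    for end in range(1, period + 1):
        for start in range(end):
            cost = best[start] + segment_cost(folded, start, end) + penalty
            if cost < best[end]:
                best[end], prev[end] = cost, start
    bounds, i = [], period
    while i > 0:                                 # backtrack segment boundaries
        bounds.append(i)
        i = prev[i]
    return sorted(bounds)                        # last entry (== period) marks the wrap-around
```

Segments whose fitted rates are statistically indistinguishable could then be merged as a post-processing step, which is one way the nonconsecutive (disjoint) regimes mentioned above might be recovered.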

Complex Network Effects on the Robustness of Graph Convolutional Networks

Summary

Vertex classification—the problem of identifying the class labels of nodes in a graph—has applicability in a wide variety of domains. Examples include classifying subject areas of papers in citation networks or roles of machines in a computer network. Recent work has demonstrated that vertex classification using graph convolutional networks is susceptible to targeted poisoning attacks, in which both graph structure and node attributes can be changed in an attempt to misclassify a target node. This vulnerability decreases users’ confidence in the learning method and can prevent adoption in high-stakes contexts. This paper presents the first work aimed at leveraging network characteristics to improve robustness of these methods. Our focus is on using network features to choose the training set, rather than selecting the training set at random. Our alternative methods of selecting training data are (1) to select the highest-degree nodes in each class and (2) to iteratively select the node with the most neighbors minimally connected to the training set. In the datasets on which the original attack was demonstrated, we show that changing the training set can make the network much harder to attack. To maintain a given probability of attack success, the adversary must use far more perturbations, often a factor of 2–4 over the random training baseline. This increase in robustness is often as substantial as tripling the amount of randomly selected training data. Even in cases where success is relatively easy for the attacker, we show that classification performance degrades much more gradually using the proposed methods, with weaker incorrect predictions for the attacked nodes. Finally, we investigate the potential tradeoff between robustness and performance in various datasets.
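As a concrete illustration of the first selection strategy only (the paper provides no code), a sketch in Python/NetworkX might look like the following; the function name and per-class budget are assumptions, and both the GCN itself and the second, iterative strategy are omitted.

```python
# Illustrative sketch of the first training-set selection strategy:
# take the highest-degree nodes of each class as training nodes.
import networkx as nx
from collections import defaultdict

def highest_degree_training_set(graph: nx.Graph, labels: dict, per_class: int = 20):
    """Return training node IDs: the per_class highest-degree nodes of each class."""
    by_class = defaultdict(list)
    for node, label in labels.items():
        by_class[label].append(node)
    train = []
    for nodes in by_class.values():
        nodes.sort(key=graph.degree, reverse=True)   # most-connected nodes first
        train.extend(nodes[:per_class])
    return train
```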

Data trust methodology: a blockchain-based approach to instrumenting complex systems

Summary

Increased data sharing and interoperability has created challenges in maintaining a level of trust and confidence in Department of Defense (DoD) systems. As tightly-coupled, unique, static, and rigorously validated mission processing solutions have been supplemented with newer, more dynamic, and complex counterparts, mission effectiveness has been impacted. On the one hand, newer deeper processing with more diverse data inputs can offer resilience against overconfident decisions under rapidly changing conditions. On the other hand, the multitude of diverse methods for reaching a decision may be in apparent conflict and decrease decision confidence. This has sometimes manifested itself in the presentation of simultaneous, divergent information to high-level decision makers. In some important specific instances, this has caused the operators to be less efficient in determining the best course of action. In this paper, we will describe an approach to more efficiently and effectively leverage new data sources and processing solutions, without requiring redesign of each algorithm or the system itself. We achieve this by instrumenting the processing chains with an enterprise blockchain framework. Once instrumented, we can collect, verify, and validate data processing chains by tracking data provenance using smart contracts to add dynamically calculated metadata to an immutable and distributed ledger. This noninvasive approach to verification and validation in data sharing environments has the power to improve decision confidence at larger scale than manual approaches, such as consulting individual developer subject matter experts to understand system behavior. In this paper, we will present our study of the following:
1. Types of information (i.e., knowledge) that are supplied and leveraged by decision makers and operational contextualized data processes (Figure 1)
2. Benefits to verifying data provenance, integrity, and validity within an operational processing chain
3. Our blockchain technology framework coupled with analytical techniques which leverage a verification and validation capability that could be deployed into existing DoD data-processing systems with insignificant performance and operational interference.
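To make the provenance-tracking idea concrete, here is a toy hash-chained ledger in Python; this is not the enterprise blockchain framework or smart-contract implementation described in the paper, and every class, method, and field name below is hypothetical.

```python
# Toy hash-chained provenance ledger: each processing step is recorded with
# digests of its inputs/outputs plus metadata, chained to the previous entry.
import hashlib
import json
import time

class ProvenanceLedger:
    def __init__(self):
        self.entries = []

    def record(self, stage, input_digest, output_digest, metadata):
        """Append one processing step, chained to the previous entry's hash."""
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {
            "stage": stage,
            "input_digest": input_digest,
            "output_digest": output_digest,
            "metadata": metadata,       # e.g., dynamically calculated validity checks
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Recompute the hash chain to detect tampering with recorded steps."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or entry["entry_hash"] != expected:
                return False
            prev = entry["entry_hash"]
        return True
```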

Antennas and RF components designed with graded index composite materials

Summary

Antennas, and RF components in general, can greatly benefit from the recent development of low-loss 3D-printed graded index materials. The additional degrees of freedom provided by graded index materials can result in antennas and other RF components with performance superior to that of currently available designs based on conventional constant permittivity materials. Here we discuss our work designing flat lenses for antennas and RF matching networks as well as filters based on graded index composite materials.
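As one illustration of how a flat graded index lens can be specified, the standard path-length-equalization design gives a permittivity profile as sketched below; this is a generic textbook construction, not necessarily the authors' method, and all parameter values are placeholders.

```python
# Generic sketch: grade a flat lens by equalizing the optical path from the focal
# point through the lens to a planar wavefront. Parameter values are placeholders.
import numpy as np

def flat_lens_permittivity(r, focal_length, thickness, n_center=3.5):
    """Relative permittivity profile eps_r(r) across a flat graded index lens."""
    # Path-length equalization: n(r)*t + sqrt(r^2 + f^2) = n(0)*t + f
    n = n_center - (np.sqrt(r**2 + focal_length**2) - focal_length) / thickness
    return n**2          # eps_r = n^2 for a nonmagnetic dielectric

# Example: 10 cm focal length, 1.5 cm thick lens, sampled across a 6 cm radius.
radii = np.linspace(0.0, 0.06, 61)
eps_profile = flat_lens_permittivity(radii, focal_length=0.10, thickness=0.015)
```

The usable aperture is limited to radii where the required index stays above the lowest value the printed composite can realize.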

Estimating sedentary breathing rate from chest-worn accelerometry from free-living data

Published in:
42nd Annual Intl. Conf. IEEE Engineering in Medicine and Biology Society, EMBC, 20-24 July 2020.

Summary

Breathing rate was estimated from chest-worn accelerometry collected from 1,522 servicemembers during training by a wearable physiological monitor. A total of 29,189 hours of training and sleep data were analyzed. The primary purpose of the monitor was to assess thermal-work strain and avoid heat injuries. The monitor design was thus not optimized to estimate breathing rate. Since breathing rate cannot be accurately estimated during periods of high activity, a qualifier was applied to identify sedentary time periods, totaling 8,867 hours. Breathing rate was estimated for a total of 4,179 hours, or 14% of the total collection and 47% of the sedentary total, primarily during periods of sleep. The breathing rate estimation method was compared to an FDA 510(k)-cleared criterion breathing rate sensor (Zephyr, Annapolis, MD, USA) in a controlled laboratory experiment, which showed good agreement between the two techniques. Contributions of this paper are to: 1) provide the first analysis of accelerometry-derived breathing rate on free-living data including periods of high activity as well as sleep, along with a qualifier that effectively identifies sedentary periods appropriate for estimating breathing rate; 2) test breathing rate estimation on a data set with a total duration that is more than 60 times longer than that of the largest previously reported study; and 3) test breathing rate estimation on data from a physiological monitor that has not been expressly designed for that purpose.
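The paper's exact pipeline is not reproduced here, but a common approach of this kind (qualify a window as sedentary when accelerometer variability is low, then take the dominant spectral peak in a respiration band) could be sketched as follows; the variability threshold and the 0.1–0.7 Hz band are assumptions, not values from the paper.

```python
# Hedged sketch, not the paper's pipeline: sedentary qualifier followed by a
# spectral-peak breathing rate estimate from one accelerometer axis.
import numpy as np
from scipy.signal import welch

def breathing_rate_bpm(accel_axis, fs, sedentary_std_g=0.02):
    """Estimated breaths per minute for one window, or None if not sedentary."""
    accel_axis = np.asarray(accel_axis, dtype=float)
    if accel_axis.std() > sedentary_std_g:          # sedentary qualifier
        return None
    detrended = accel_axis - accel_axis.mean()
    nperseg = min(len(detrended), 30 * int(fs))     # ~30 s spectral segments
    freqs, psd = welch(detrended, fs=fs, nperseg=nperseg)
    band = (freqs >= 0.1) & (freqs <= 0.7)          # roughly 6-42 breaths/min
    if not band.any():
        return None
    return 60.0 * freqs[band][np.argmax(psd[band])]
```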

A framework to improve evaluation of novel decision support tools

Published in:
11th Intl. Conf. on Applied Human Factors and Ergonomics, AHFE, 16-20 July 2020.

Summary

Organizations that introduce new technology into an operational environment seek to improve some aspect of task conduct through technology use. Many organizations rely on user acceptance measures to gauge technology viability, though misinterpretation of user feedback can lead organizations to accept non-beneficial technology or reject potentially beneficial technology. Additionally, teams that misinterpret user feedback can spend time and effort on tasks that do not improve either user acceptance or operational task conduct. This paper presents a framework developed through efforts to transition technology to the U.S. Transportation Command (USTRANSCOM). The framework formalizes aspects of user experience with technology to guide organization and development team research and assessments. This transition effort is examined as a case study through the lens of the framework to illustrate how user-focused methodologies can be employed by development teams to systematically improve development of new technology, user acceptance of new technology, and assessments of technology viability.

Detect-and-avoid closed-loop evaluation of noncooperative well clear definitions

Published in:
J. Air Transp., Vol. 28, No. 4, 12 July 2020, pp. 195-206.

Summary

Four candidate detect-and-avoid well clear definitions for unmanned aircraft systems encountering noncooperative aircraft are evaluated using safety and operational suitability metrics. These candidates were proposed in previous research based on unmitigated collision risk, maneuver initiation ranges, and other considerations. Noncooperative aircraft refer to aircraft without a functioning transponder. One million encounters representative of the assumed operational environment for the detect-and-avoid system are simulated using a benchmark alerting and guidance algorithm as well as a pilot response model. Results demonstrate sensitivity of the safety metrics to the unmanned aircraft's speed and the detect-and-avoid system's surveillance volume. The only candidate without a horizontal time threshold, named modified tau, outperforms the other three candidates in avoiding losses of detect-and-avoid well clear. Furthermore, this candidate's alerting timeline lowers the required surveillance range. This can help reduce the barrier to enabling unmanned aircraft system operations with low size, weight, and power sensors.
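For reference, modified tau as commonly defined in the collision-avoidance literature can be computed as sketched below; the specific DMOD and time thresholds used in the paper's candidate well clear definitions are not reproduced, and the function is only illustrative.

```python
# Illustrative computation of modified tau (collision-avoidance literature
# convention); threshold values from the paper are intentionally not included.
def modified_tau(r, r_dot, dmod):
    """Modified tau in seconds.

    r     : horizontal range (same length unit as dmod)
    r_dot : range rate (negative when the aircraft are closing)
    dmod  : distance modification threshold
    """
    if r <= dmod:
        return 0.0              # already inside the distance threshold
    if r_dot >= 0.0:
        return float("inf")     # not closing, so no time-based alert
    return (dmod**2 - r**2) / (r * r_dot)
```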

Graded index dielectric superstrate for phased array scan compensation

Published in:
2020 IEEE Intl. Symp. on Antennas and Propagation and North American Radio Science Meeting, 5-10 July 2020.

Summary

This paper presents the use of additively manufactured graded index materials to improve the scan impedance variation of phased arrays. Solvent-based extrusion permits the programmatically controlled printing of chosen material properties with relative permittivity ranging from 2 to 24.5. This low-loss material shows promise for improvement of scan impedance variation for high scan angles in a smaller footprint than discrete layers.

Predicting cognitive load and operational performance in a simulated marksmanship task

Summary

Modern operational environments can place significant demands on a service member's cognitive resources, increasing the risk of errors or mishaps due to overburden. The ability to monitor cognitive burden and associated performance within operational environments is critical to improving mission readiness. As a key step toward a field-ready system, we developed a simulated marksmanship scenario with an embedded working memory task in an immersive virtual reality environment. As participants performed the marksmanship task, they were instructed to remember numbered targets and recall the sequence of those targets at the end of the trial. Low and high cognitive load conditions were defined as the recall of three- and six-digit strings, respectively. Physiological and behavioral signals recorded included speech, heart rate, breathing rate, and body movement. These features were input into a random forest classifier that significantly discriminated between the low- and high-cognitive load conditions (AUC=0.94). Behavioral features of gait were the most informative, followed by features of speech. We also showed the capability to predict performance on the digit recall (AUC = 0.71) and marksmanship (AUC = 0.58) tasks. The experimental framework can be leveraged in future studies to quantify the interaction of other types of stressors and their impact on operational cognitive and physical performance.
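A minimal sketch of the classification step described above (a random forest discriminating low versus high cognitive load, scored by AUC) is shown below; feature extraction from speech, heart rate, breathing rate, and body movement is omitted, and the hyperparameters are illustrative rather than those used in the study.

```python
# Minimal sketch of the classification/evaluation step only; features are assumed
# to be a numeric matrix (rows = trials) with binary low/high load labels.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate_load_classifier(features, labels, seed=0):
    """Cross-validated AUC for discriminating low vs. high cognitive load trials."""
    clf = RandomForestClassifier(n_estimators=300, random_state=seed)
    aucs = cross_val_score(clf, features, labels, cv=5, scoring="roc_auc")
    return aucs.mean(), aucs.std()
```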

Adjoint analysis of guidance systems for time-series inputs using Fourier analysis

Published in:
J. Guid., Control, Dyn., Vol. 43, No. 7, July 2020.

Summary

The adjoint technique is a proven technique for analysis of linear time-varying systems and is widely used in the missile design community. It is a very efficient technique that can solve for both deterministic and stochastic disturbances and can develop a miss distance budget in a single computer solution of the differential equations without use of time-consuming Monte Carlo simulations. The adjoint technique is very valuable in both preliminary and more advanced missile design stages and is based upon the mathematical adjoint of the system dynamics matrix of the homing loop. Zarchan [1,2] describes extensive use of the technique for a variety of disturbances for homing missiles, and this author has developed its use for command guided missiles [3]. For adjoint analysis, the usual method of modeling maneuver disturbances to a missile guidance system starts by modeling the maneuver in the forward-time system as a delta function input into a transfer function with the same second-order statistics as the maneuver, and its output is input into the guidance system; then the system is converted into its adjoint system [1]. Bucco and Weiss [4] show that a set of nonstandard time-varying inputs cannot be treated in the normal fashion [2,5,6], and they present a new technique that enables these nonstandard inputs to be analyzed using adjoint analysis. This paper was inspired by and extends the results of the paper by Bucco and Weiss [4]. This paper shows that the use of the complex digital Fourier amplitude spectra of both the maneuver and the adjoint impulse response at the maneuver point allows adjoint analysis to address another type of nonstandard input, namely, arbitrary time-series inputs such as specific target maneuvers that are not representable by an impulse input into a transfer function; heretofore, these time-series inputs have not been treatable with adjoint analysis. Additionally, if there are several sets of arbitrary time series of target maneuvers, each with an associated probability of occurrence, the root-mean-square (rms) value of the set of probabilistic maneuvers can be calculated, another significant new capability introduced in this paper.
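The mathematical kernel involved, evaluating the miss contribution of an arbitrary maneuver time series as an inner product with the adjoint impulse response at the maneuver point, or equivalently from the two complex DFT spectra via Parseval's relation, can be sketched as follows. This assumes both signals share the same sampling grid and is not the paper's full procedure; the rms combination of probabilistic maneuver sets is omitted.

```python
# Sketch of the inner-product identity only: time-domain and DFT-spectrum
# evaluations of the miss contribution should agree (Parseval's relation).
import numpy as np

def miss_time_domain(adjoint_impulse_response, maneuver, dt):
    """Miss contribution as the time-domain inner product integral."""
    return np.sum(adjoint_impulse_response * maneuver) * dt

def miss_frequency_domain(adjoint_impulse_response, maneuver, dt):
    """Same quantity computed from the complex DFT spectra."""
    W = np.fft.fft(adjoint_impulse_response)
    M = np.fft.fft(maneuver)
    n = len(maneuver)
    return np.real(np.sum(W * np.conj(M))) * dt / n
```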