Publications

Fast training of deep neural networks robust to adversarial perturbations

Published in:
2020 IEEE High Performance Extreme Computing Conf., HPEC, 22-24 September 2020.

Summary

Deep neural networks are capable of training quickly and generalizing well within many domains. Despite their promising performance, deep networks have shown sensitivities to perturbations of their inputs (e.g., adversarial examples), and their learned feature representations are often difficult to interpret, raising concerns about their true capability and trustworthiness. Recent work in adversarial training, a form of robust optimization in which the model is optimized against adversarial examples, demonstrates the ability to reduce sensitivity to perturbations and to yield feature representations that are more interpretable. Adversarial training, however, comes with an increased computational cost over that of standard (i.e., nonrobust) training, rendering it impractical for use in large-scale problems. Recent work suggests that a fast approximation to adversarial training shows promise for reducing training time while maintaining robustness in the presence of perturbations bounded by the infinity norm. In this work, we demonstrate that this approach extends to the Euclidean norm and preserves the human-aligned feature representations that are common to robust models. Additionally, we show that using a distributed training scheme can further reduce the time to train robust deep networks. Fast adversarial training is a promising approach that will provide increased security and explainability in machine learning applications for which robust optimization was previously thought to be impractical.
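
To make the approach concrete, here is a minimal sketch of single-step (FGSM-style) adversarial training with both an infinity-norm and a Euclidean-norm perturbation budget. This assumes PyTorch; the step sizes, budget, and 4D image-batch shape are illustrative placeholders, not the paper's actual configuration.

```python
import torch
import torch.nn.functional as F

def fast_adv_train_step(model, x, y, optimizer, eps=8/255, alpha=10/255, norm="linf"):
    """One step of single-step (FGSM-style) adversarial training.

    Illustrative parameters: eps is the perturbation budget, alpha the attack
    step size; norm selects the l-inf or l2 (Euclidean) ball. Assumes image
    batches of shape (N, C, H, W).
    """
    # Random start inside the perturbation ball.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)

    # Single gradient step on the inner maximization (this is what makes it "fast").
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    grad = delta.grad.detach()

    if norm == "linf":
        delta = (delta.detach() + alpha * grad.sign()).clamp_(-eps, eps)
    else:  # l2: step along the normalized gradient, then project to the eps-ball
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta.detach() + alpha * grad / g_norm
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta * (eps / d_norm).clamp(max=1.0)

    # Outer minimization: update the model on the adversarial example.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x + delta), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

The random start and the single inner gradient step are what distinguish this fast approximation from multi-step adversarial training, which recomputes the gradient several times per batch.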

Human balance models optimized using a large-scale, parallel architecture with applications to mild traumatic brain injury

Published in:
2020 IEEE High Performance Extreme Computing Conf., HPEC, 22-24 September 2020.

Summary

Static and dynamic balance are frequently disrupted by brain injuries. The impairment can be complex, and for mild traumatic brain injury (mTBI) it can be undetectable by standard clinical tests. Therefore, neurologically relevant modeling approaches are needed for detecting and inferring mechanisms of injury. The current work presents models of static and dynamic balance that have a high degree of correspondence; emphasizing structural similarity between the domains facilitates development of both. Furthermore, particular attention is paid to components of sensory feedback and sensory integration to ground the mechanisms in neurobiology. The models are adapted to fit experimentally collected data from 10 healthy control volunteers and 11 mild traumatic brain injury volunteers. Through an analysis-by-synthesis approach, whose implementation was made possible by a state-of-the-art high performance computing system, we derived an interpretable, model-based feature set that could classify mTBI and controls in a static balance task with an ROC AUC of 0.72.
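
For illustration, a classifier over such model-based features might be evaluated as follows. This is a sketch assuming scikit-learn, with random placeholder features standing in for the fitted balance-model parameters; the paper's actual feature set and classifier are not specified here. Leave-one-out cross-validation mirrors the small cohort (11 mTBI, 10 control).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

# X: one row of fitted balance-model parameters per subject (placeholder values);
# y: 1 for mTBI, 0 for healthy control.
X = np.random.randn(21, 4)          # stand-in for the model-based feature set
y = np.array([1] * 11 + [0] * 10)   # 11 mTBI and 10 control volunteers

# Leave-one-out predicted probabilities, then ROC AUC over the cohort.
scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                           cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("ROC AUC:", roc_auc_score(y, scores))
```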

Attacking Embeddings to Counter Community Detection

Published in:
Network Science Society Conference 2020 [submitted]

Summary

Community detection can be an extremely useful data triage tool, enabling a data analyst to split a large network into smaller portions for deeper analysis. If, however, a particular node wanted to avoid scrutiny, it could strategically create new connections that make it seem uninteresting. In this work, we investigate the use of a state-of-the-art attack against node embedding as a means of countering community detection while being blind to the attributes of other nodes. The attack proposed in [1] attempts to maximize the loss function being minimized by a random-walk-based embedding method (in which two nodes are drawn closer together the more often a random walk starting at one node ends at the other). We propose using this method to attack the community structure of the graph, specifically the community assignment of an adversarial vertex. Since nodes in the same community tend to appear near each other in a random walk, their continuous-space embeddings also tend to be close. Thus, we aim to use the general embedding attack in an attempt to shift the community membership of the adversarial vertex.

To test this strategy, we adopt the experimental framework of [2], in which each node is given a "temperature" indicating how interesting it is: "hot," "cold," or "unknown." A node can perturb itself by adding new edges to any other node in the graph. The node's goal is to be placed in a community that is cold, i.e., one in which the average node temperature is less than 0. Of the five attacks proposed in [2], we use two in our experiments. The simpler attack, Cold and Lonely, first connects to cold nodes, then unknown, then hot, connecting within each temperature in order of increasing degree. The more sophisticated attack, Stable Structure, proceeds as follows: (1) identify stable structures (sets of nodes assigned to the same community in each of several trials), (2) connect to nodes in order of increasing average temperature of their stable structures (randomly within a structure), and (3) connect to nodes with no stable structure in order of increasing temperature. As in [2], we use the Louvain modularity maximization technique for community detection. We slightly modify the embedding attack of [1] by only allowing the addition of new edges and requiring that they include the adversary vertex. Since the embedding attack is blind to the temperatures of the nodes, experimenting with these attacks gives insight into how much this attribute information helps the adversary.

Experimental results are shown in Figure 1. Graphs considered in these experiments are (1) a 500-node Erdos-Renyi graph with edge probability p = 0.02; (2) a stochastic block model with 5 communities of 100 nodes each and edge probabilities of p_in = 0.06 and p_out = 0.01; (3) the network of Abu Sayyaf Group (ASG), a violent non-state Islamist group operating in the Philippines, where two nodes are linked if they both participate in at least one kidnapping event, with labels derived from stable structures (nodes together in at least 95% of 1000 Louvain trials); and (4) the Cora machine learning citation graph, with 7 classes based on subject area. Temperature is assigned to the Erdos-Renyi nodes randomly with probability 0.25, 0.5, and 0.25 for hot, unknown, and cold, respectively. For the other graphs, nodes with the same label as the target are hot, unknown, and cold with probability 0.35, 0.55, and 0.1, respectively, and the hot and cold probabilities are swapped for the other labels.
The results demonstrate that, even without the temperature information, the embedding attack is about as effective as Cold and Lonely when there is community structure to exploit, though it is not as effective as Stable Structure, which leverages both community structure and temperature information.
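
As a rough sketch of this setup, the following assumes NetworkX, with its Louvain implementation standing in for the authors' exact pipeline. The Cold and Lonely ordering and the temperature scoring follow the description above; the budget of 10 added edges is an arbitrary placeholder.

```python
import random
import networkx as nx

def cold_and_lonely_order(G, temps, adversary):
    """Order candidate neighbors per the Cold and Lonely heuristic of [2]:
    cold nodes first, then unknown, then hot; within each temperature,
    by increasing degree. 'temps' maps node -> 'cold'/'unknown'/'hot'."""
    rank = {"cold": 0, "unknown": 1, "hot": 2}
    candidates = [v for v in G if v != adversary and not G.has_edge(adversary, v)]
    return sorted(candidates, key=lambda v: (rank[temps[v]], G.degree(v)))

def community_temperature(G, temps, adversary):
    """Average temperature (+1 hot, -1 cold, 0 unknown) of the adversary's
    Louvain community; the attacker wants to drive this below 0."""
    value = {"hot": 1.0, "cold": -1.0, "unknown": 0.0}
    comms = nx.community.louvain_communities(G, seed=0)
    comm = next(c for c in comms if adversary in c)
    others = [v for v in comm if v != adversary]
    return sum(value[temps[v]] for v in others) / max(len(others), 1)

# Toy run on a 500-node Erdos-Renyi graph with edge probability 0.02,
# temperatures drawn with probabilities 0.25 / 0.5 / 0.25 as in the text.
random.seed(0)
G = nx.erdos_renyi_graph(500, 0.02, seed=0)
temps = {v: random.choices(["hot", "unknown", "cold"], [0.25, 0.5, 0.25])[0] for v in G}
adv = 0
for v in cold_and_lonely_order(G, temps, adv)[:10]:  # add 10 perturbing edges
    G.add_edge(adv, v)
print("post-attack community temperature:", community_temperature(G, temps, adv))
```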

Sensorimotor conflict tests in an immersive virtual environment reveal subclinical impairments in mild traumatic brain injury

Summary

Current clinical tests lack the sensitivity needed for detecting subtle balance impairments associated with mild traumatic brain injury (mTBI). Patient-reported symptoms can be significant and have a substantial impact on daily life, yet impairments may remain undetected or poorly quantified using clinical measures. Our central hypothesis was that provocative sensorimotor perturbations, delivered in a highly instrumented, immersive virtual environment, would challenge sensory subsystems recruited for balance through conflicting multi-sensory evidence, and would therefore reveal that not all subsystems are performing optimally. The results show that, compared to standard clinical tests, the provocative perturbations illuminate balance impairments in subjects who have had mild traumatic brain injuries. Perturbations delivered while subjects were walking provided greater discriminability between mTBI subjects and healthy controls (average accuracy ≈ 0.90) than those delivered during standing (average accuracy ≈ 0.65). Of the categories of features extracted to characterize balance, the lower-limb accelerometry-based metrics proved most informative. Further, in response to perturbations, subjects with an mTBI relied on hip strategies more than ankle strategies to prevent loss of balance and also showed less variability in gait patterns. We have shown that sensorimotor conflicts illuminate otherwise-hidden balance impairments, which can be used to increase the sensitivity of current clinical procedures. This augmentation is vital for robustly detecting the presence of balance impairments after mTBI and for potentially defining a phenotype of balance dysfunction that increases the risk of injury.
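
As a small illustration of lower-limb accelerometry metrics of the kind referenced above (the study's actual feature set is not reproduced here; the sampling rate and the specific metrics are assumptions):

```python
import numpy as np

def balance_features(acc, fs=100.0):
    """Simple accelerometry-derived balance metrics (illustrative only).

    acc: (n_samples, 3) accelerometer trace from a lower-limb sensor;
    fs: sampling rate in Hz (assumed).
    """
    mag = np.linalg.norm(acc, axis=1)
    mag = mag - mag.mean()                      # remove gravity/DC offset
    rms = np.sqrt(np.mean(mag ** 2))            # overall sway/impact energy
    jerk = np.diff(mag) * fs                    # rate of change: smoothness of response
    return {"rms": rms, "jerk_rms": np.sqrt(np.mean(jerk ** 2))}
```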

Seasonal Inhomogeneous Nonconsecutive Arrival Process Search and Evaluation

Published in:
International Conference on Artificial Intelligence and Statistics, 26-28 August 2020 [submitted]

Summary

Seasonal data may display different distributions throughout the period of seasonality. We fit this type of model by determining the appropriate change points of the distribution and fitting parameters to each interval. This offers the added benefit of searching for disjoint regimes, which may denote the same distribution occurring nonconsecutively. Our algorithm outperforms SARIMA for prediction.
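
A minimal sketch of the piecewise-fitting idea, assuming Poisson-distributed counts folded over one seasonal period (e.g., the 168 hours of a week). The greedy two-change-point search and the choice of distribution are illustrative assumptions, not the paper's actual search procedure.

```python
import numpy as np

def segment_loglik(counts, change_points):
    """Log-likelihood of Poisson rates fit piecewise over one seasonal period.

    counts: event counts per position in the period (e.g., per hour of week);
    change_points: sorted positions splitting the period into intervals.
    """
    bounds = [0] + list(change_points) + [len(counts)]
    ll = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        seg = counts[lo:hi]
        lam = max(seg.mean(), 1e-9)            # MLE Poisson rate for the interval
        ll += np.sum(seg * np.log(lam) - lam)  # log-likelihood (constant factorial term dropped)
    return ll

# Toy driver: exhaustive search over two change points on synthetic weekly counts.
counts = np.random.poisson([2] * 24 + [10] * 24 + [2] * 24 + [10] * 96).astype(float)
best = max(((c1, c2) for c1 in range(8, 160, 8) for c2 in range(c1 + 8, 168, 8)),
           key=lambda cp: segment_loglik(counts, cp))
print("best change points:", best)
```

Note that nothing forces adjacent intervals to have different rates, which is what allows the same regime to recur nonconsecutively across the period.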

Complex Network Effects on the Robustness of Graph Convolutional Networks

Summary

Vertex classification—the problem of identifying the class labels of nodes in a graph—has applicability in a wide variety of domains. Examples include classifying subject areas of papers in citation networks or roles of machines in a computer network. Recent work has demonstrated that vertex classification using graph convolutional networks is susceptible to targeted poisoning attacks, in which both graph structure and node attributes can be changed in an attempt to misclassify a target node. This vulnerability decreases users' confidence in the learning method and can prevent adoption in high-stakes contexts. This paper presents the first work aimed at leveraging network characteristics to improve the robustness of these methods. Our focus is on using network features to choose the training set, rather than selecting the training set at random. Our alternative methods of selecting training data are (1) to select the highest-degree nodes in each class and (2) to iteratively select the node with the most neighbors minimally connected to the training set. In the datasets on which the original attack was demonstrated, we show that changing the training set can make the network much harder to attack. To maintain a given probability of attack success, the adversary must use far more perturbations, often a factor of 2–4 over the random-training baseline. This increase in robustness is often as substantial as tripling the amount of randomly selected training data. Even in cases where success is relatively easy for the attacker, we show that classification performance degrades much more gradually using the proposed methods, with weaker incorrect predictions for the attacked nodes. Finally, we investigate the potential tradeoff between robustness and performance in various datasets.
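
A sketch of the first selection method, assuming NetworkX; the inputs 'labels' (node-to-class map) and 'per_class' (training budget per class) are hypothetical names for illustration.

```python
from collections import defaultdict
import networkx as nx

def degree_based_training_set(G, labels, per_class):
    """Selection method (1) above: pick the highest-degree nodes in each
    class for the training set, rather than sampling at random."""
    by_class = defaultdict(list)
    for v, c in labels.items():
        by_class[c].append(v)
    train = []
    for c, nodes in by_class.items():
        nodes.sort(key=G.degree, reverse=True)  # most-connected nodes first
        train.extend(nodes[:per_class])
    return train
```

The second, iterative method would instead grow the training set one node at a time, at each step adding the node with the most neighbors minimally connected to the current set.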

Data trust methodology: a blockchain-based approach to instrumenting complex systems

Summary

Increased data sharing and interoperability have created challenges in maintaining a level of trust and confidence in Department of Defense (DoD) systems. As tightly coupled, unique, static, and rigorously validated mission processing solutions have been supplemented with newer, more dynamic, and complex counterparts, mission effectiveness has been impacted. On the one hand, newer, deeper processing with more diverse data inputs can offer resilience against overconfident decisions under rapidly changing conditions. On the other hand, the multitude of diverse methods for reaching a decision may be in apparent conflict and decrease decision confidence. This has sometimes manifested itself in the presentation of simultaneous, divergent information to high-level decision makers, and in some important instances has made operators less efficient in determining the best course of action. In this paper, we describe an approach to more efficiently and effectively leverage new data sources and processing solutions without requiring redesign of each algorithm or of the system itself. We achieve this by instrumenting the processing chains with an enterprise blockchain framework. Once instrumented, we can collect, verify, and validate data processing chains by tracking data provenance, using smart contracts to add dynamically calculated metadata to an immutable and distributed ledger. This noninvasive approach to verification and validation in data-sharing environments has the power to improve decision confidence at larger scale than manual approaches, such as consulting individual developer subject matter experts to understand system behavior. In this paper, we present our study of the following:
1. the types of information (i.e., knowledge) that are supplied to and leveraged by decision makers and operational, contextualized data processes (Figure 1);
2. the benefits of verifying data provenance, integrity, and validity within an operational processing chain; and
3. our blockchain technology framework, coupled with analytical techniques, which provides a verification and validation capability that could be deployed into existing DoD data-processing systems with insignificant performance and operational interference.
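
The provenance-tracking idea can be illustrated with a toy hash-chained ledger. This is a simplified Python sketch only; the paper uses an enterprise blockchain framework with smart contracts, not this structure.

```python
import hashlib
import json
import time

class ProvenanceLedger:
    """Toy append-only ledger: each entry hashes the data it describes and
    the previous entry, so tampering anywhere breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, stage, data_bytes, metadata):
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "stage": stage,                                      # processing step that produced the data
            "data_hash": hashlib.sha256(data_bytes).hexdigest(), # fingerprint of the data itself
            "metadata": metadata,                                # dynamically calculated metadata
            "timestamp": time.time(),
            "prev_hash": prev,                                   # chaining to the prior entry
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute every hash and confirm the chain is intact."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["entry_hash"] != expected:
                return False
            prev = e["entry_hash"]
        return True
```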

Antennas and RF components designed with graded index composite materials

Summary

Antennas, and RF components in general, can benefit greatly from the recent development of low-loss, 3D-printed graded-index materials. The additional degrees of freedom provided by graded-index materials enable the design of antennas and other RF components with performance superior to currently available designs based on conventional constant-permittivity materials. Here we discuss our work designing flat lenses for antennas, as well as RF matching networks and filters, based on graded-index composite materials.
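
As one classical point of reference for this design space (not one of the specific designs discussed above), the Luneburg lens realizes focusing purely through a radially graded index:

```latex
% Luneburg lens: a graded-index sphere of radius R that focuses an incident
% plane wave to a point on the opposite rim of the sphere.
n(r) = \sqrt{2 - \left(\frac{r}{R}\right)^{2}}, \qquad
\varepsilon_r(r) = n(r)^{2} = 2 - \left(\frac{r}{R}\right)^{2}, \quad 0 \le r \le R .
```

Profiles like this are exactly what graded-index 3D printing makes directly manufacturable, whereas constant-permittivity materials can only approximate them, e.g., with discrete shells.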

Estimating sedentary breathing rate from chest-worn accelerometry from free-living data

Published in:
42nd Annual Intl. Conf. IEEE Engineering in Medicine and Biology Society, EMBC, 20-24 July 2020.

Summary

Breathing rate was estimated from chest-worn accelerometry collected by a wearable physiological monitor from 1,522 servicemembers during training. A total of 29,189 hours of training and sleep data were analyzed. The primary purpose of the monitor was to assess thermal-work strain and avoid heat injuries; its design was thus not optimized for estimating breathing rate. Since breathing rate cannot be accurately estimated during periods of high activity, a qualifier was applied to identify sedentary time periods, totaling 8,867 hours. Breathing rate was estimated for a total of 4,179 hours, or 14% of the total collection and 47% of the sedentary total, primarily during periods of sleep. The breathing rate estimation method was compared to an FDA 510(k)-cleared criterion breathing rate sensor (Zephyr, Annapolis, MD, USA) in a controlled laboratory experiment, which showed good agreement between the two techniques. The contributions of this paper are to: 1) provide the first analysis of accelerometry-derived breathing rate on free-living data that include periods of high activity as well as sleep, along with a qualifier that effectively identifies sedentary periods appropriate for estimating breathing rate; 2) test breathing rate estimation on a data set whose total duration is more than 60 times longer than that of the largest previously reported study; and 3) test breathing rate estimation on data from a physiological monitor that was not expressly designed for that purpose.
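
A minimal sketch of one common accelerometry-based estimator (band-pass filtering plus a dominant spectral peak), assuming SciPy. The respiration band and sampling rate here are assumptions; the paper's actual estimator is not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

def breathing_rate_bpm(acc, fs=25.0, band=(0.1, 0.7)):
    """Estimate breathing rate from a sedentary chest-accelerometry window.

    acc: (n_samples, 3) accelerometer window; fs: sampling rate in Hz (assumed);
    band: plausible respiration band, here 0.1-0.7 Hz (6-42 breaths/min).
    """
    mag = np.linalg.norm(acc, axis=1)              # combine the three axes
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, mag - mag.mean())    # isolate respiration-band motion
    freqs, power = periodogram(filtered, fs=fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[in_band][np.argmax(power[in_band])]  # dominant peak, in breaths/min
```

In practice such an estimator would run only on windows passing a sedentary qualifier like the one described above, since motion artifacts during activity swamp the respiration signal.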

A framework to improve evaluation of novel decision support tools

Published in:
11th Intl. Conf. on Applied Human Factors and Ergonomics, AHFE, 16-20 July 2020.

Summary

Organizations that introduce new technology into an operational environment seek to improve some aspect of task conduct through technology use. Many organizations rely on user acceptance measures to gauge technology viability, though misinterpretation of user feedback can lead organizations to accept non-beneficial technology or reject potentially beneficial technology. Additionally, teams that misinterpret user feedback can spend time and effort on tasks that do not improve either user acceptance or operational task conduct. This paper presents a framework developed through efforts to transition technology to the U.S. Transportation Command (USTRANSCOM). The framework formalizes aspects of user experience with technology to guide organization and development team research and assessments. The case study is examined through the lens of the framework to illustrate how user-focused methodologies can be employed by development teams to systematically improve development of new technology, user acceptance of new technology, and assessments of technology viability.