Publications

Backdoor poisoning of encrypted traffic classifiers

Summary

Significant recent research has focused on applying deep neural network models to the problem of network traffic classification. At the same time, much has been written about the vulnerability of deep neural networks to adversarial inputs, both during training and inference. In this work, we launch backdoor poisoning attacks against an encrypted network traffic classifier. Our attacks are based on padding network packets, which has the benefit of preserving the functionality of the network traffic. In particular, we consider a handcrafted attack as well as an optimized attack leveraging universal adversarial perturbations. We find that poisoning attacks can be extremely successful if the adversary can modify both the labels and the data (dirty-label attacks), and somewhat successful, depending on the attack strength and the target class, if the adversary perturbs only the data (clean-label attacks).
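
To make the padding mechanism concrete, below is a minimal sketch of how such a poisoning attack could be staged. All names, the trigger shape, and the poisoning rate are illustrative assumptions, not the attack from the paper; the key property the sketch demonstrates is that padding can only increase packet sizes (up to the MTU), so the poisoned traffic remains functional.

```python
import numpy as np

def implant_padding_trigger(packet_sizes, trigger, mtu=1500):
    """Additively pad the first packets of a flow with a fixed trigger
    pattern. Padding only grows packet sizes, so functionality is kept."""
    sizes = np.asarray(packet_sizes, dtype=float).copy()
    k = min(len(sizes), len(trigger))
    sizes[:k] = np.minimum(sizes[:k] + np.asarray(trigger[:k]), mtu)
    return sizes

def poison_dataset(X, y, trigger, target_class, rate=0.05, seed=0):
    """Dirty-label poisoning: pad a small fraction of training flows and
    relabel them to the target class."""
    rng = np.random.default_rng(seed)
    Xp, yp = [row.copy() for row in X], np.array(y).copy()
    for i in rng.choice(len(X), size=int(rate * len(X)), replace=False):
        Xp[i] = implant_padding_trigger(Xp[i], trigger)
        yp[i] = target_class  # omit this relabeling for a clean-label attack
    return Xp, yp
```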

System analysis for responsible design of modern AI/ML systems

Summary

The irresponsible use of ML algorithms in practical settings has received a great deal of deserved attention in recent years. We posit that a traditional system analysis perspective is needed when designing and implementing ML algorithms and systems. Such a perspective can provide a formal way of evaluating and enabling responsible ML practices. In this paper, we review components of the System Analysis methodology and highlight how they connect to and enable responsible practices of ML design.

Probabilistic coordination of heterogeneous teams from capability temporal logic specifications

Summary

This letter explores coordination of heterogeneous teams of agents from high-level specifications. We employ Capability Temporal Logic (CaTL) to express rich, temporal-spatial tasks that require cooperation between many agents with unique capabilities. CaTL specifies combinations of tasks, each with a desired location, duration, and set of capabilities, freeing the user from considering specific agent trajectories and their impact on multi-agent cooperation. CaTL also provides a quantitative robustness metric of satisfaction based on availability of required capabilities for each task. The novelty of this letter lies in the satisfaction of CaTL formulas under probabilistic conditions. Specifically, we consider uncertainties in robot motion (e.g., agents may fail to transition between regions with some probability) and local probabilistic workspace properties (e.g., if there are not enough agents of a required capability to complete a collaborative task). The proposed approach automatically formulates a mixed-integer linear program given agents, their dynamics and capabilities, an abstraction of the workspace, and a CaTL formula. In addition to satisfying the given CaTL formula, the optimization considers the following secondary goals (in decreasing order of priority): 1) minimize the risk of transition failure due to uncertainties; 2) maximize probabilities of regional collaborative satisfaction (if there is an excess of agents); 3) maximize the availability robustness of CaTL for potential agent attrition; 4) minimize the total agent travel time. We evaluate the performance of the proposed framework and demonstrate its scalability via numerical simulations.
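
Schematically, the prioritized goals can be folded into a single weighted MILP objective. The notation below is our illustration rather than the letter's exact formulation, with x the integer decision variables and weights w1 >> w2 >> w3 >> w4 enforcing the stated priority order:

```latex
\min_{x}\; w_1\, J_{\mathrm{risk}}(x) \;-\; w_2\, J_{\mathrm{prob}}(x)
        \;-\; w_3\, \rho_{\mathrm{avail}}(x) \;+\; w_4\, J_{\mathrm{time}}(x)
\qquad \text{s.t.}\quad x \models \varphi_{\mathrm{CaTL}}
```

where J_risk is the transition-failure risk, J_prob the regional collaborative-satisfaction probability, rho_avail the availability robustness, and J_time the total agent travel time.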

Fast decomposition of temporal logic specifications for heterogeneous teams

Published in:
IEEE Robot. Autom. Lett., Vol. 7, No. 2, April 2022, pp. 2297-2304.

Summary

We focus on decomposing large multi-agent path planning problems with global temporal logic goals (common to all agents) into smaller sub-problems that can be solved and executed independently. Crucially, the sub-problems' solutions must jointly satisfy the common global mission specification. The agents' missions are given as Capability Temporal Logic (CaTL) formulas, a fragment of Signal Temporal Logic (STL) that can express properties over tasks involving multiple agent capabilities (i.e., different combinations of sensors, effectors, and dynamics) under strict timing constraints. We jointly decompose both the temporal logic specification and the team of agents, using a satisfiability modulo theories (SMT) approach and heuristics for handling temporal operators. The output of the SMT solver is then distributed to subteams, leading to a significant speed-up in planning time compared to planning for the entire team and specification. We include computational results to evaluate the efficiency of our solution, as well as the trade-offs introduced by the conservative nature of the SMT encoding and heuristics.
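
As a toy illustration of the SMT step (our own construction, with hypothetical agents and capability counts, not the paper's encoding or temporal-operator heuristics), the Z3 snippet below partitions agents into subteams so that each sub-specification's capability requirements are covered:

```python
from z3 import Bool, If, Solver, Sum, is_true, sat

# Hypothetical instance: 4 agents with capabilities, 2 sub-specifications,
# each requiring a minimum count of agents per capability.
agents = {"a0": {"uv"}, "a1": {"uv", "ir"}, "a2": {"ir"}, "a3": {"uv"}}
needs = [{"uv": 2}, {"ir": 1, "uv": 1}]

s = Solver()
assign = {(a, j): Bool(f"{a}_in_{j}") for a in agents for j in range(len(needs))}

# Each agent joins exactly one subteam (the decomposition is a partition).
for a in agents:
    s.add(Sum([If(assign[a, j], 1, 0) for j in range(len(needs))]) == 1)

# Each subteam must cover its sub-specification's capability requirements.
for j, req in enumerate(needs):
    for cap, count in req.items():
        s.add(Sum([If(assign[a, j], 1, 0)
                   for a in agents if cap in agents[a]]) >= count)

if s.check() == sat:
    m = s.model()
    for j in range(len(needs)):
        print(j, [a for a in agents if is_true(m.evaluate(assign[a, j]))])
```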

Tools and practices for responsible AI engineering

Summary

Responsible Artificial Intelligence (AI)—the practice of developing, evaluating, and maintaining accurate AI systems that also exhibit essential properties such as robustness and explainability—represents a multifaceted challenge that often stretches standard machine learning tooling, frameworks, and testing methods beyond their limits. In this paper, we present two new software libraries—hydra-zen and the rAI-toolbox—that address critical needs for responsible AI engineering. hydra-zen dramatically simplifies the process of making complex AI applications configurable, and their behaviors reproducible. The rAI-toolbox is designed to enable methods for evaluating and enhancing the robustness of AI models in a way that is scalable and that composes naturally with other popular ML frameworks. We describe the design principles and methodologies that make these tools effective, including the use of property-based testing to bolster the reliability of the tools themselves. Finally, we demonstrate the composability and flexibility of the tools by showing how various use cases from adversarial robustness and explainable AI can be concisely implemented with familiar APIs.
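
For a flavor of the hydra-zen workflow, the sketch below configures a hypothetical Classifier class; builds, instantiate, and to_yaml are real hydra-zen functions, while the class itself is a stand-in we invented for illustration.

```python
from hydra_zen import builds, instantiate, to_yaml

# Hypothetical component standing in for a real training module.
class Classifier:
    def __init__(self, hidden_dim: int = 128, lr: float = 1e-3):
        self.hidden_dim, self.lr = hidden_dim, lr

# builds() auto-generates a structured (dataclass) config from the class
# signature, making the component configurable and reproducible.
Conf = builds(Classifier, populate_full_signature=True)

print(to_yaml(Conf))                       # YAML view: hidden_dim, lr, _target_
model = instantiate(Conf, hidden_dim=256)  # override at instantiation time
assert model.hidden_dim == 256
```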

Scalable and Robust Algorithms for Task-Based Coordination From High-Level Specifications (ScRATCHeS)

Summary

Many existing approaches for coordinating heterogeneous teams of robots either consider small numbers of agents, are application-specific, or do not adequately address common real-world requirements, e.g., strict deadlines or intertask dependencies. We introduce scalable and robust algorithms for task-based coordination from high-level specifications (ScRATCHeS) to coordinate such teams. We define a specification language, capability temporal logic, to describe rich, temporal properties involving tasks requiring the participation of multiple agents with multiple capabilities, e.g., sensors or end effectors. Arbitrary missions and team dynamics are jointly encoded as constraints in a mixed integer linear program, and solved efficiently using commercial off-the-shelf solvers. ScRATCHeS optionally allows optimization for maximal robustness to agent attrition at the penalty of increased computation time. We include an online replanning algorithm that adjusts the plan after an agent has dropped out. The flexible specification language, fast solution time, and optional robustness of ScRATCHeS provide a first step toward a multipurpose on-the-fly planning tool for tasking large teams of agents with multiple capabilities enacting missions with multiple tasks. We present randomized computational experiments to characterize scalability and hardware demonstrations to illustrate the applicability of our methods.
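
The full MILP encoding is involved, but a toy version of its core idea, assigning capability-bearing agents to tasks at minimum travel cost, can be written with an off-the-shelf solver. Everything below, from the agent roster to the costs, is a made-up miniature, not the actual ScRATCHeS encoding:

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

# Hypothetical agents with capabilities and per-task travel costs, and
# tasks that each require a number of capability instances.
agents = {"a0": {"caps": {"cam"}, "cost": {"t0": 2, "t1": 5}},
          "a1": {"caps": {"cam", "arm"}, "cost": {"t0": 4, "t1": 1}},
          "a2": {"caps": {"arm"}, "cost": {"t0": 3, "t1": 2}}}
needs = {"t0": {"cam": 1}, "t1": {"arm": 1, "cam": 1}}

prob = LpProblem("task_assignment", LpMinimize)
x = {(a, t): LpVariable(f"x_{a}_{t}", cat=LpBinary)
     for a in agents for t in needs}

# Objective: total travel cost of all assignments.
prob += lpSum(agents[a]["cost"][t] * x[a, t] for a in agents for t in needs)

for a in agents:  # each agent serves at most one task at a time
    prob += lpSum(x[a, t] for t in needs) <= 1
for t, req in needs.items():  # capability requirements per task
    for cap, k in req.items():
        prob += lpSum(x[a, t] for a in agents
                      if cap in agents[a]["caps"]) >= k

prob.solve()
print({(a, t): x[a, t].value() for a in agents for t in needs})
```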

Principles for evaluation of AI/ML model performance and robustness, revision 1

Summary

The Department of Defense (DoD) has significantly increased its investment in the design, evaluation, and deployment of Artificial Intelligence and Machine Learning (AI/ML) capabilities to address national security needs. While there are numerous AI/ML successes in the academic and commercial sectors, many of these systems have also been shown to be brittle and nonrobust. In a complex and ever-changing national security environment, it is vital that the DoD establish a sound and methodical process to evaluate the performance and robustness of AI/ML models before these new capabilities are deployed to the field. Without an effective evaluation process, the DoD may deploy AI/ML models that are assumed to be effective given limited evaluation metrics but actually have poor performance and robustness on operational data. Poor evaluation practices lead to loss of trust in AI/ML systems by model operators and more frequent, often costly, design updates needed to address the evolving security environment. In contrast, an effective evaluation process can drive the design of more resilient capabilities, flag potential limitations of models before they are deployed, and build operator trust in AI/ML systems. This paper reviews the AI/ML development process, highlights common best practices for AI/ML model evaluation, and makes the following recommendations to DoD evaluators to ensure the deployment of robust AI/ML capabilities for national security needs:

- Develop testing datasets with sufficient variation and number of samples to effectively measure the expected performance of the AI/ML model on future (unseen) data once deployed.
- Maintain separation between data used for design and evaluation (i.e., the test data is not used to design the AI/ML model or train its parameters) in order to ensure an honest and unbiased assessment of the model's capability.
- Evaluate performance given small perturbations and corruptions to data inputs to assess the smoothness of the AI/ML model and identify potential vulnerabilities.
- Evaluate performance on samples from data distributions that are shifted from the assumed distribution that was used to design the AI/ML model to assess how the model may perform on operational data that differs from the training data.

By following the recommendations for evaluation presented in this paper, the DoD can take full advantage of the AI/ML revolution, delivering robust capabilities that maintain operational feasibility over longer periods of time and increase warfighter confidence in AI/ML systems.
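
The last two recommendations translate directly into code. The sketch below is our illustration, not the paper's procedure; the noise scales, the offset used as a crude distribution shift, and the model interface are all assumptions. It probes a trained classifier's smoothness under small input corruptions and its behavior on shifted data, using a held-out test split that never touched training:

```python
import numpy as np

def accuracy(model, X, y):
    return float(np.mean(model.predict(X) == y))

def evaluate_robustness(model, X_test, y_test, seed=0):
    """Report clean accuracy, accuracy under small random corruptions,
    and accuracy under a crude distribution shift."""
    rng = np.random.default_rng(seed)
    report = {"clean": accuracy(model, X_test, y_test)}

    # Small random corruptions probe the smoothness of the model.
    for eps in (0.01, 0.05, 0.1):
        X_noisy = X_test + eps * rng.standard_normal(X_test.shape)
        report[f"noise_{eps}"] = accuracy(model, X_noisy, y_test)

    # A global offset stands in for operational data drifting away
    # from the training distribution.
    report["shifted"] = accuracy(model, X_test + 0.2, y_test)
    return report
```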

Fast training of deep neural networks robust to adversarial perturbations

Published in:
2020 IEEE High Performance Extreme Computing Conf., HPEC, 22-24 September 2020.

Summary

Deep neural networks are capable of training fast and generalizing well within many domains. Despite their promising performance, deep networks have shown sensitivities to perturbations of their inputs (e.g., adversarial examples) and their learned feature representations are often difficult to interpret, raising concerns about their true capability and trustworthiness. Recent work in adversarial training, a form of robust optimization in which the model is optimized against adversarial examples, demonstrates the ability to reduce sensitivity to perturbations and yield feature representations that are more interpretable. Adversarial training, however, comes with an increased computational cost over that of standard (i.e., nonrobust) training, rendering it impractical for use in large-scale problems. Recent work suggests that a fast approximation to adversarial training shows promise for reducing training time and maintaining robustness in the presence of perturbations bounded by the infinity norm. In this work, we demonstrate that this approach extends to the Euclidean norm and preserves the human-aligned feature representations that are common for robust models. Additionally, we show that using a distributed training scheme can further reduce the time to train robust deep networks. Fast adversarial training is a promising approach that will provide increased security and explainability in machine learning applications for which robust optimization was previously thought to be impractical.
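
As a rough sketch of the single-step idea extended to the Euclidean norm (our PyTorch illustration of the general technique, not the paper's code), each batch needs only one extra gradient computation instead of the many iterations of PGD-based adversarial training:

```python
import torch

def fast_l2_adv_batch(model, loss_fn, x, y, eps=1.0):
    """One-step, L2-bounded adversarial example generation: an FGSM-style
    fast approximation with the gradient normalized in the Euclidean norm."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    g = delta.grad.detach()
    # Normalize the gradient per example and step to the L2 ball's surface.
    g_norm = (g.flatten(1).norm(dim=1).clamp_min(1e-12)
               .view(-1, *([1] * (x.dim() - 1))))
    return (x + eps * g / g_norm).detach()

# Inside the training loop, one would replace the clean batch:
#   x_adv = fast_l2_adv_batch(model, loss_fn, x, y, eps=1.0)
#   loss = loss_fn(model(x_adv), y); loss.backward(); optimizer.step()
```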

Safe predictors for enforcing input-output specifications [e-print]

Summary

We present an approach for designing correct-by-construction neural networks (and other machine learning models) that are guaranteed to be consistent with a collection of input-output specifications before, during, and after algorithm training. Our method involves designing a constrained predictor for each set of compatible constraints, and combining them safely via a convex combination of their predictions. We demonstrate our approach on synthetic datasets and an aircraft collision avoidance problem.
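
A minimal one-dimensional sketch of the structure follows; the toy predictors, regions, and weights are our invention, not the paper's models. Each constrained predictor satisfies its own output constraint everywhere, and the blending weights form a convex combination that puts zero weight on the non-compliant predictor wherever a constraint applies, so the combined output inherits the constraints by construction:

```python
import numpy as np

# Toy input-output specification: output <= 0 for x <= -1, and
# output >= 0 for x >= 1.
def f_low(x):   # constrained predictor, guaranteed nonpositive everywhere
    return -np.abs(np.sin(x))

def f_high(x):  # constrained predictor, guaranteed nonnegative everywhere
    return np.abs(np.cos(x))

def sigma(x):
    """Convex-combination weights: exactly (1, 0) where the 'low' constraint
    applies (x <= -1), exactly (0, 1) where the 'high' one does (x >= 1),
    and a smooth blend in between."""
    t = np.clip((x + 1) / 2, 0.0, 1.0)
    return np.stack([1 - t, t])

def safe_predictor(x):
    w = sigma(x)
    # Zero weight on the non-compliant predictor inside each constrained
    # region means the output provably satisfies the specification there.
    return w[0] * f_low(x) + w[1] * f_high(x)

print(safe_predictor(np.array([-2.0, 0.0, 2.0])))
```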

AI enabling technologies: a survey

Summary

Artificial Intelligence (AI) has the opportunity to revolutionize the way the United States Department of Defense (DoD) and Intelligence Community (IC) address the challenges of evolving threats, data deluge, and rapid courses of action. Developing an end-to-end artificial intelligence system involves parallel development of different pieces that must work together in order to provide capabilities that can be used by decision makers, warfighters, and analysts. These pieces include data collection, data conditioning, algorithms, computing, robust artificial intelligence, and human-machine teaming. While much of the popular press today surrounds advances in algorithms and computing, most modern AI systems leverage advances across numerous different fields. Further, while certain components may not be as visible to end-users as others, our experience has shown that each of these interrelated components plays a major role in the success or failure of an AI system. This article is meant to highlight many of the technologies involved in an end-to-end AI system. The goal of this article is to provide readers with an overview of terminology, technical details, and recent highlights from academia, industry, and government. Where possible, we indicate relevant resources that can be used for further reading and understanding.
