Publications

Health-informed policy gradients for multi-agent reinforcement learning

Summary

This paper proposes a definition of system health in the context of multiple agents optimizing a joint reward function. We use this definition as a credit assignment term in a policy gradient algorithm to distinguish the contributions of individual agents to the global reward. The health-informed credit assignment is then extended to a multi-agent variant of the proximal policy optimization algorithm and demonstrated on simple particle environments that have elements of system health, risk-taking, semi-expendable agents, and partial observability. We show significant improvement in learning performance compared to policy gradient methods that do not perform multi-agent credit assignment.
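
The credit-assignment idea lends itself to a compact sketch. The snippet below is a minimal, illustrative rendering of a health-based credit term in a REINFORCE-style update; the paper's actual definition of system health is not reproduced here, so `health_credit_weights` and its normalization are assumptions made for illustration.

```python
import numpy as np

def health_credit_weights(healths, prev_healths):
    """Hypothetical credit term: each agent's share of the most recent
    change in system health. The normalization is purely illustrative."""
    deltas = np.abs(np.asarray(healths, float) - np.asarray(prev_healths, float))
    total = deltas.sum()
    # Fall back to equal credit when health did not change.
    return deltas / total if total > 0 else np.full(len(deltas), 1.0 / len(deltas))

def credited_policy_loss(log_probs, joint_return, credits):
    """REINFORCE-style surrogate loss: the shared return is scaled by each
    agent's credit, so gradients flow mostly to agents that drove the change."""
    return [-lp * joint_return * c for lp, c in zip(log_probs, credits)]
```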

Beyond expertise and roles: a framework to characterize the stakeholders of interpretable machine learning and their needs

Published in:
Proc. Conf. on Human Factors in Computing Systems, 8-13 May 2021, article no. 74.

Summary

To ensure accountability and mitigate harm, it is critical that diverse stakeholders can interrogate black-box automated systems and find information that is understandable, relevant, and useful to them. In this paper, we eschew prior expertise- and role-based categorizations of interpretability stakeholders in favor of a more granular framework that decouples stakeholders' knowledge from their interpretability needs. We characterize stakeholders by their formal, instrumental, and personal knowledge and how it manifests in the contexts of machine learning, the data domain, and the general milieu. We additionally distill a hierarchical typology of stakeholder needs that distinguishes higher-level domain goals from lower-level interpretability tasks. In assessing the descriptive, evaluative, and generative powers of our framework, we find our more nuanced treatment of stakeholders reveals gaps and opportunities in the interpretability literature, adds precision to the design and comparison of user studies, and facilitates a more reflexive approach to conducting this research.

Failure prediction by confidence estimation of uncertainty-aware Dirichlet networks

Published in:
https://arxiv.org/abs/2010.09865

Summary

Reliably assessing model confidence and predicting the errors a deep learning model is likely to make are key to deploying models safely, particularly in applications where failures carry dire consequences. In this paper, it is first shown that uncertainty-aware deep Dirichlet neural networks provide an improved separation between the confidence of correct and incorrect predictions under the true class probability (TCP) metric. Second, because the true class is unknown at test time, a new criterion is proposed for learning the true class probability by matching prediction confidence scores while taking imbalance and TCP constraints into account for correct predictions and failures. Experimental results show our method improves upon the maximum class probability (MCP) baseline and predicted TCP for standard networks on several image classification tasks with various network architectures.
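
The two confidence scores at the center of this work are easy to state concretely. The sketch below computes both from a batch of logits, assuming a standard softmax classifier; `mcp_and_tcp` is an illustrative helper, not the paper's code. Note that TCP requires ground-truth labels, which is exactly why the paper proposes learning to predict it.

```python
import torch
import torch.nn.functional as F

def mcp_and_tcp(logits, true_labels):
    """Maximum class probability (MCP) vs. true class probability (TCP).

    MCP is the top softmax score, available at test time; TCP is the softmax
    score of the ground-truth class, available only when labels are known."""
    probs = F.softmax(logits, dim=-1)                      # (N, C)
    mcp = probs.max(dim=-1).values                         # (N,)
    tcp = probs.gather(-1, true_labels.unsqueeze(-1)).squeeze(-1)
    return mcp, tcp
```

For a misclassified sample the true class is, by definition, not the argmax, so its TCP sits strictly below its MCP; this gap is what makes TCP a better separator of correct and incorrect predictions.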

Towards a distributed framework for multi-agent reinforcement learning research

Summary

Some of the most important publications in deep reinforcement learning over the last few years have been fueled by access to massive amounts of computation through large-scale distributed systems. The success of these approaches in achieving human-expert-level performance on several complex video-game environments has motivated further exploration into the limits of these approaches as computation increases. In this paper, we present a distributed RL training framework designed for supercomputing infrastructures such as the MIT SuperCloud. We review a collection of challenging learning environments, such as Google Research Football, StarCraft II, and Multi-Agent Mujoco, which are at the frontier of reinforcement learning research. We provide results on these environments that illustrate the current state of the field on these problems. Finally, we quantify and discuss the computational requirements of performing RL research by enumerating all experiments performed on these environments.
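
At its core, this style of distributed training follows a familiar actor-learner pattern: parallel workers generate rollouts, a central learner updates parameters. The skeleton below sketches that pattern with Python multiprocessing on a single node; the fake environment and placeholder update are stand-ins for intuition only, not the framework described in the paper.

```python
import multiprocessing as mp
import numpy as np

def rollout_worker(seed, params):
    """Stand-in rollout worker: in a real framework each worker would step
    an actual environment (e.g., StarCraft II) under the current policy."""
    rng = np.random.default_rng(seed)
    obs = rng.normal(size=(100, params.shape[0]))   # fake observations
    return float(np.tanh(obs @ params).sum())       # fake episode return

if __name__ == "__main__":
    params = np.zeros(4)
    for iteration in range(3):                      # learner loop
        with mp.Pool(processes=8) as pool:          # fan out rollouts
            returns = pool.starmap(rollout_worker,
                                   [(s, params) for s in range(8)])
        # Placeholder parameter update from aggregated returns.
        params = params + 0.01 * np.sign(np.mean(returns))
```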

Fast training of deep neural networks robust to adversarial perturbations

Summary

Deep neural networks train quickly and generalize well within many domains. Despite their promising performance, deep networks have shown sensitivities to perturbations of their inputs (e.g., adversarial examples), and their learned feature representations are often difficult to interpret, raising concerns about their true capability and trustworthiness. Recent work in adversarial training, a form of robust optimization in which the model is optimized against adversarial examples, demonstrates the ability to reduce sensitivity to perturbations and yield feature representations that are more interpretable. Adversarial training, however, comes with an increased computational cost over that of standard (i.e., nonrobust) training, rendering it impractical for use in large-scale problems. Recent work suggests that a fast approximation to adversarial training shows promise for reducing training time and maintaining robustness in the presence of perturbations bounded by the infinity norm. In this work, we demonstrate that this approach extends to the Euclidean norm and preserves the human-aligned feature representations that are common for robust models. Additionally, we show that using a distributed training scheme can further reduce the time to train robust deep networks. Fast adversarial training is a promising approach that will provide increased security and explainability in machine learning applications for which robust optimization was previously thought to be impractical.
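
A minimal sketch of the fast adversarial training step, carried over from the infinity norm to the Euclidean norm as the paper describes: perturb the input one step along the L2-normalized gradient, then train on the perturbed batch. The function names and the eps value are illustrative assumptions, not the paper's implementation.

```python
import torch

def fast_l2_perturb(model, loss_fn, x, y, eps):
    """Single-step L2 attack for 'fast' adversarial training: one step of
    size eps along the input gradient, normalized in the Euclidean norm."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    (grad,) = torch.autograd.grad(loss, x_adv)
    # Normalize per example so the perturbation has L2 norm exactly eps.
    norms = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
    grad = grad / norms.view(-1, *([1] * (grad.dim() - 1)))
    return (x.detach() + eps * grad).detach()

def robust_training_step(model, loss_fn, optimizer, x, y, eps=0.5):
    """One optimizer step on the adversarially perturbed batch."""
    x_adv = fast_l2_perturb(model, loss_fn, x, y, eps)
    optimizer.zero_grad()
    loss_fn(model(x_adv), y).backward()
    optimizer.step()
```

The speedup relative to full adversarial training comes from using a single gradient step per batch instead of a multi-step inner attack such as PGD.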

Deep implicit coordination graphs for multi-agent reinforcement learning [e-print]

Summary

Multi-agent reinforcement learning (MARL) requires coordination to efficiently solve certain tasks. Fully centralized control is often infeasible in such domains due to the size of joint action spaces. Coordination-graph-based formalizations allow reasoning about the joint action based on the structure of interactions; however, they often require domain expertise in their design. This paper introduces the deep implicit coordination graph (DICG) architecture for such scenarios. DICG consists of a module for inferring the dynamic coordination graph structure, which is then used by a graph-neural-network-based module to learn to implicitly reason about joint actions or values. DICG allows learning the tradeoff between full centralization and decentralization via standard actor-critic methods, significantly improving coordination for domains with a large number of agents. We apply DICG to both the centralized-training-centralized-execution and centralized-training-decentralized-execution regimes. We demonstrate that DICG solves the relative overgeneralization pathology in predator-prey tasks and outperforms various MARL baselines on the challenging StarCraft II Multi-Agent Challenge (SMAC) and traffic junction environments.
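
The architecture's two modules can be sketched compactly: an attention module that infers a soft, dynamic adjacency over agents, and a graph module that propagates agent embeddings over it. The layer sizes and the single message-passing round below are illustrative choices for intuition, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ImplicitCoordinationGraph(nn.Module):
    """Sketch of the DICG idea: infer a soft coordination graph with
    attention, then propagate agent embeddings over it."""
    def __init__(self, obs_dim, hid=64):
        super().__init__()
        self.embed = nn.Linear(obs_dim, hid)
        self.attn = nn.Linear(hid, hid, bias=False)
        self.gnn = nn.Linear(hid, hid)

    def forward(self, obs):                      # obs: (n_agents, obs_dim)
        h = torch.relu(self.embed(obs))
        scores = h @ self.attn(h).T              # pairwise agent affinities
        adj = torch.softmax(scores, dim=-1)      # soft, dynamic adjacency
        return torch.relu(self.gnn(adj @ h))     # one message-passing round
```

Because the adjacency is inferred from observations rather than fixed by hand, no domain expertise is needed to specify the coordination structure, which is the point of the "implicit" graph.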

Topological effects on attacks against vertex classification

Summary

Vertex classification is vulnerable to perturbations of both graph topology and vertex attributes, as shown in recent research. As in other machine learning domains, concerns about robustness to adversarial manipulation can prevent potential users from adopting proposed methods when the consequences of acting on a misclassification are high. This paper considers two topological characteristics of graphs and explores how these features affect the amount the adversary must perturb the graph in order to be successful. We show that, if certain vertices are included in the training set, it is possible to substantially increase the adversary's required perturbation budget. On four citation datasets, we demonstrate that if the training set includes high-degree vertices, or vertices chosen so that every unlabeled node has a neighbor in the training set, the adversary's budget against the Nettack poisoning attack often increases by a substantial factor (often a factor of 2 or more) over a randomly selected training set. Even for especially easy targets (those that are misclassified after just one or two perturbations), performance degrades much more slowly, with the model assigning much lower probabilities to the incorrect classes. In addition, we demonstrate that this robustness either persists when recently proposed defenses are applied or is competitive with the resulting performance improvement for the defender.
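
Both training-set strategies from the experiments are straightforward to express in graph terms. The sketch below selects either the highest-degree vertices or a dominating set (so that every unlabeled node has a labeled neighbor) using `networkx`; it is a hypothetical helper reflecting the described heuristics, not the paper's code.

```python
import networkx as nx

def robust_training_sets(graph, budget):
    """Two label-placement heuristics that raise an attacker's budget:
    (1) the `budget` highest-degree vertices;
    (2) a greedy dominating set, so every unlabeled node has a labeled
        neighbor (its size is set by the graph, not by `budget`)."""
    by_degree = sorted(graph.degree, key=lambda kv: kv[1], reverse=True)
    high_degree = [v for v, _ in by_degree[:budget]]
    dominating = list(nx.dominating_set(graph))
    return high_degree, dominating
```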

Augmented Annotation Phase 3

Published in:
MIT Lincoln Laboratory Report TR-1248

Summary

Automated visual object detection is an important capability in reducing the burden on human operators in many DoD applications. To train modern deep learning algorithms to recognize desired objects, the algorithms must be "fed" more than 1,000 labeled images of each particular object (for 55%–85% accuracy, according to Project Maven, Oct 2017 O6 Working Group, slide 27). The task of labeling training data for use in machine learning algorithms is human intensive, requires special software, and takes a great deal of time. Estimates from ImageNet, a widely used and publicly available visual object detection dataset, indicate that humans generated four annotations per minute in the overall production of ImageNet annotations. The DoD's need is to reduce direct object-by-object human labeling, particularly in the video domain, where data quantity can be significant. The Augmented Annotations System addresses this need in two ways: it leverages a small amount of human annotation effort by propagating human-initiated annotations through video to build an initial labeled dataset for training an object detector, and it uses that detector in an iterative loop to assist humans in pre-annotating new datasets.
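
The propagation step can be illustrated with an off-the-shelf tracker: a human draws one box on the first frame, and the tracker carries it through the clip. The sketch below uses OpenCV's CSRT tracker purely as a stand-in; the Augmented Annotations System's actual propagation method is not described in this summary and may differ.

```python
import cv2  # requires opencv-contrib-python; on some builds the
            # tracker lives under cv2.legacy.TrackerCSRT_create

def propagate_box(video_path, first_box):
    """Carry one human-drawn box (x, y, w, h) through a video clip,
    returning a per-frame list of tracked boxes (None on tracker loss)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    tracker = cv2.TrackerCSRT_create()
    tracker.init(frame, first_box)
    boxes = [first_box]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        tracked, box = tracker.update(frame)
        boxes.append(tuple(map(int, box)) if tracked else None)
    cap.release()
    return boxes
```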

Safe predictors for enforcing input-output specifications [e-print]

Summary

We present an approach for designing correct-by-construction neural networks (and other machine learning models) that are guaranteed to be consistent with a collection of input-output specifications before, during, and after algorithm training. Our method involves designing a constrained predictor for each set of compatible constraints, and combining them safely via a convex combination of their predictions. We demonstrate our approach on synthetic datasets and an aircraft collision avoidance problem.
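
The safety argument is a convexity argument: if every constrained predictor's output lies in a convex safe set for a given input, then any convex combination of those outputs lies in that set as well. A minimal sketch of the combination step, with `predictors` and `weight_fn` as illustrative stand-ins:

```python
import numpy as np

def safe_predict(x, predictors, weight_fn):
    """Convex combination of constraint-satisfying predictors.

    `predictors` is a list of callables, each guaranteed (by construction)
    to satisfy its constraints; `weight_fn` returns nonnegative weights
    summing to 1 (e.g., a softmax over learned gating scores)."""
    w = weight_fn(x)                              # shape (k,)
    outputs = np.stack([p(x) for p in predictors])  # shape (k, out_dim)
    return np.tensordot(w, outputs, axes=1)       # convex combination
```

Because the guarantee comes from the structure of the combination rather than from training, it holds before, during, and after optimization, as the summary states.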

Survey and benchmarking of machine learning accelerators

Published in:
IEEE High Performance Extreme Computing Conf., HPEC, 24-26 September 2019.

Summary

Advances in multicore processors and accelerators have opened the floodgates to greater exploration and application of machine learning techniques to a variety of problems. These advances, along with breakdowns of several trends including Moore's Law, have prompted an explosion of processors and accelerators that promise even greater computational and machine learning capabilities. These processors and accelerators come in many forms, from CPUs and GPUs to ASICs, FPGAs, and dataflow accelerators. This paper surveys the current state of those processors and accelerators that have been publicly announced with performance and power consumption numbers. The performance and power values are plotted on a scatter graph, and a number of dimensions and observations from the trends on this plot are discussed and analyzed; for instance, there are interesting trends regarding power consumption, numerical precision, and inference versus training. We then select and benchmark two commercially available low size, weight, and power (SWaP) accelerators, as these processors are the most interesting for the embedded and mobile machine learning inference applications most applicable to the DoD and other SWaP-constrained users. We determine how they actually perform with real-world images and neural network models, compare those results to the reported performance and power consumption values, and evaluate them against an Intel CPU used in some embedded applications.
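
The survey's central visualization, peak performance versus peak power on log-log axes, is simple to reproduce for one's own candidate parts. The snippet below uses hypothetical accelerator entries; none of the survey's actual data points are reproduced here.

```python
import matplotlib.pyplot as plt

# Hypothetical (ops/sec, watts) entries; substitute vendor-reported numbers.
accelerators = {
    "AcceleratorA": (1e12, 5.0),
    "AcceleratorB": (2e13, 75.0),
}

fig, ax = plt.subplots()
for name, (ops, watts) in accelerators.items():
    ax.scatter(watts, ops)
    ax.annotate(name, (watts, ops))
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("Peak power (W)")
ax.set_ylabel("Peak performance (ops/s)")
plt.show()
```

Lines of constant efficiency (ops per watt) appear as parallel diagonals on such a plot, which is what makes it useful for comparing SWaP-constrained parts at a glance.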