Publications

GraphChallenge.org triangle counting performance [e-print]

Summary

The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of preparsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. The triangle counting component of GraphChallenge.org tests the ability of graph processing systems to count all the triangles in a graph and exercises key graph operations found in many graph algorithms. In 2017, 2018, and 2019, many triangle counting submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers among these submissions. These submissions show that state-of-the-art triangle counting execution time, $T_{\mathrm{tri}}$, is a strong function of the number of edges in the graph, $N_e$: performance improved significantly from 2017 ($T_{\mathrm{tri}} \approx (N_e/10^8)^{4/3}$) to 2018 ($T_{\mathrm{tri}} \approx N_e/10^9$) and remained comparable from 2018 to 2019. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs.
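
GraphChallenge.org distributes example serial implementations; the sketch below is in that spirit (illustrative only, not one of the benchmarked submissions). It counts each triangle exactly once by intersecting the neighbor sets at the two endpoints of every edge. As a worked instance of the 2018 performance fit, a graph with $N_e = 10^9$ edges corresponds to $T_{\mathrm{tri}} \approx 10^9/10^9 = 1$ second.

    # Minimal serial triangle-counting sketch (illustrative; not a
    # GraphChallenge submission). Each triangle {u, v, w} is seen once
    # from each of its three edges, hence the final division by 3.
    from collections import defaultdict

    def count_triangles(edges):
        adj = defaultdict(set)
        for u, v in edges:
            if u != v:
                adj[u].add(v)
                adj[v].add(u)
        total = sum(len(adj[u] & adj[v])
                    for u, v in {(min(u, v), max(u, v))
                                 for u, v in edges if u != v})
        return total // 3

    assert count_triangles([(0, 1), (1, 2), (0, 2), (2, 3)]) == 1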

A framework to improve evaluation of novel decision support tools

Published in:
11th Intl. Conf. on Applied Human Factors and Ergonomics, AHFE, 16-20 July 2020.

Summary

Organizations that introduce new technology into an operational environment seek to improve some aspect of task conduct through technology use. Many organizations rely on user acceptance measures to gauge technology viability, though misinterpretation of user feedback can lead organizations to accept non-beneficial technology or reject potentially beneficial technology. Additionally, teams that misinterpret user feedback can spend time and effort on tasks that do not improve either user acceptance or operational task conduct. This paper presents a framework developed through efforts to transition technology to the U.S. Transportation Command (USTRANSCOM). The framework formalizes aspects of user experience with technology to guide organization and development team research and assessments. The USTRANSCOM transition effort is then examined as a case study through the lens of the framework to illustrate how user-focused methodologies can be employed by development teams to systematically improve development of new technology, user acceptance of new technology, and assessments of technology viability.

This looks like that: deep learning for interpretable image recognition

Published in:
Conf. on Neural Information Processing Systems, NeurIPS, 8-14 December 2019.

Summary

When we are faced with challenging image classification tasks, we often explain our reasoning by dissecting the image and pointing out prototypical aspects of one class or another. The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture that reasons in a similar way: the network dissects the image by finding prototypical parts and combines evidence from the prototypes to make a final classification. The algorithm thus reasons in a way that is qualitatively similar to the way ornithologists, physicians, geologists, architects, and others would explain to people how to solve challenging image classification tasks. The network uses only image-level labels for training, meaning that there are no labels for parts of images. We demonstrate the method on the CIFAR-10 dataset and 10 classes from the CUB-200-2011 dataset.
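
The architecture scores an image by how strongly its convolutional feature patches resemble learned prototypical parts, then combines that evidence linearly into class scores. The sketch below illustrates this evidence-combination step; the function name, tensor shapes, and similarity formula are illustrative assumptions, not the authors' released implementation.

    # Prototype-evidence scoring sketch (hypothetical names and shapes;
    # not the authors' code). Each prototype activates on its single
    # best-matching image patch ("this looks like that"); a linear layer
    # then combines the prototype evidence into per-class scores.
    import torch

    def prototype_logits(features, prototypes, class_weights):
        """features: (B, D, H, W) conv feature map; prototypes: (P, D);
           class_weights: (C, P) prototype-to-class evidence weights."""
        B, D, H, W = features.shape
        patches = features.permute(0, 2, 3, 1).reshape(B, H * W, D)
        dists = torch.cdist(patches, prototypes.unsqueeze(0).expand(B, -1, -1))
        d2 = dists.min(dim=1).values ** 2          # (B, P) best-match dist^2
        sims = torch.log((d2 + 1) / (d2 + 1e-4))   # high when a patch is close
        return sims @ class_weights.t()            # (B, C) class evidence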

On-demand forensic video analytics for large-scale surveillance systems

Published in:
2019 IEEE Intl. Symp. on Technologies for Homeland Security, 5-6 November 2019.

Summary

This work presents FOVEA, an add-on suite of analytic tools for the forensic review of video in large-scale surveillance systems. While significant investment has been made toward improving camera coverage and quality, the burden on video operators for reviewing and extracting useful information from the video has only increased. Daily investigation tasks (such as searching through video, investigating abandoned objects, or piecing together information from multiple cameras) still require a significant amount of manual review by video operators. In contrast to other tools that require exporting video data or otherwise curating the video collection before analysis, FOVEA is designed to integrate with existing surveillance systems: its tools can be applied to any video stream in an on-demand fashion without additional hardware. This paper details the technical approach, underlying algorithms, and effects on video operator performance.
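
The summary does not detail FOVEA's underlying algorithms, but the on-demand pattern it describes (analytics applied directly to an existing stream, with no export or curation step) can be illustrated generically. The sketch below flags frames with foreground motion in a feed using OpenCV background subtraction; the stream URL and activity threshold are placeholders, and this is not FOVEA code.

    # Generic on-demand stream analysis sketch (illustrative; not FOVEA).
    import cv2

    cap = cv2.VideoCapture("rtsp://camera.example/stream1")  # placeholder feed
    bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
    frame_idx, flagged = 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)
        # Flag frames with substantial foreground motion for operator review.
        if cv2.countNonZero(mask) > 0.01 * mask.size:
            flagged.append(frame_idx)
        frame_idx += 1
    cap.release()
    print(f"{len(flagged)} frames flagged for review")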

Feature forwarding for efficient single image dehazing

Published in:
IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops, CVPRW, 16-17 June 2019.

Summary

Haze degrades content and obscures information in images, which can negatively impact vision-based decision-making in real-time systems. In this paper, we propose an efficient fully convolutional neural network (CNN) image dehazing method designed to run on edge graphics processing units (GPUs). We utilize three variants of our architecture to explore the dependency of dehazed image quality on parameter count and model design. The first two variants, a small and a big version, make use of a single efficient encoder-decoder convolutional feature extractor. The final variant utilizes a pair of encoder-decoders for atmospheric light and transmission map estimation. Each variant ends with an image refinement pyramid pooling network to form the final dehazed image. For the big variant of the single-encoder network, we demonstrate state-of-the-art performance on the NYU Depth dataset. For the small variant, we maintain competitive performance on the super-resolution O/I-HAZE datasets without the need for image cropping. Finally, we examine some challenges presented by the Dense-Haze dataset when leveraging CNN architectures for dehazing of dense haze imagery, and we examine the impact of loss function selection on image quality. Benchmarks are included to show the feasibility of introducing this approach into real-time systems.
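
Estimating atmospheric light and a transmission map, as the dual-encoder variant does, corresponds to the standard atmospheric scattering model $I(x) = J(x)\,t(x) + A\,(1 - t(x))$, where $I$ is the hazy image, $J$ the scene radiance, $t$ the transmission, and $A$ the atmospheric light. Assuming that model (the paper's exact recovery step is not given in this summary), the dehazed image follows by inversion:

    # Scene-radiance recovery under the standard haze model (illustrative;
    # not the paper's network code). A network would supply t and A.
    import numpy as np

    def recover_scene(I, t, A, t_min=0.1):
        """I: (H, W, 3) hazy image in [0, 1]; t: (H, W) transmission map;
           A: (3,) atmospheric light. Inverts I = J*t + A*(1 - t)."""
        t = np.clip(t, t_min, 1.0)[..., None]  # clamp t to keep division stable
        J = (I - A) / t + A                    # dehazed scene radiance
        return np.clip(J, 0.0, 1.0)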

AI enabling technologies: a survey

Summary

Artificial Intelligence (AI) has the opportunity to revolutionize the way the United States Department of Defense (DoD) and Intelligence Community (IC) address the challenges of evolving threats, data deluge, and rapid courses of action. Developing an end-to-end artificial intelligence system involves parallel development of different pieces that must work together in order to provide capabilities that can be used by decision makers, warfighters, and analysts. These pieces include data collection, data conditioning, algorithms, computing, robust artificial intelligence, and human-machine teaming. While much of the popular press today focuses on advances in algorithms and computing, most modern AI systems leverage advances across numerous different fields. Further, while certain components may not be as visible to end users as others, our experience has shown that each of these interrelated components plays a major role in the success or failure of an AI system. This article is meant to highlight many of the technologies involved in an end-to-end AI system. The goal of this article is to provide readers with an overview of terminology, technical details, and recent highlights from academia, industry, and government. Where possible, we indicate relevant resources that can be used for further reading and understanding.

Artificial intelligence: short history, present developments, and future outlook, final report

Summary

The Director's Office at MIT Lincoln Laboratory (MIT LL) requested a comprehensive study on artificial intelligence (AI) focusing on present applications and future science and technology (S&T) opportunities in the Cyber Security and Information Sciences Division (Division 5). This report elaborates on the main results from the study. Since the AI field is evolving so rapidly, the study scope was to look at the recent past and ongoing developments to lead to a set of findings and recommendations. It was important to begin with a short AI history and a lay-of-the-land on representative developments across the Department of Defense (DoD), Intelligence Community (IC), and Homeland Security. These areas are addressed in more detail within the report. A main deliverable from the study was to formulate an end-to-end AI canonical architecture suitable for a range of applications; this architecture serves as the guiding framework for all the sections in this report. Even though the study primarily focused on cyber security and information sciences, the enabling technologies are broadly applicable to many other areas, so we dedicate a full section, Section 3, to enabling technologies. The discussion on enabling technologies helps the reader clarify the distinction among AI, machine learning algorithms, and the specific techniques needed to make an end-to-end AI system viable.
To understand the lay-of-the-land in AI, study participants reached out widely, both within MIT LL and external to the Laboratory (government, commercial companies, the defense industrial base, peers, academia, and AI centers). In addition to the study participants (shown in the next section under acknowledgements), we also assembled an internal review team (IRT). The IRT was extremely helpful in providing feedback and in helping with the formulation of the study briefings as we transitioned from data-gathering mode to study synthesis. The format followed throughout the study was to highlight relevant content that substantiates the study findings and to identify a set of recommendations.
An important finding is the significant AI investment by the so-called "big 6" commercial companies: Google, Amazon, Facebook, Microsoft, Apple, and IBM. These major commercial companies dominate AI ecosystem research and development (R&D) investments within the U.S. According to a recent McKinsey Global Institute report, their combined R&D investment in AI amounts to about $30 billion per year. This amount is substantially higher than the R&D investment within the DoD, IC, and Homeland Security. Therefore, the DoD will need to be very strategic about investing where needed, while at the same time leveraging the technologies already developed and available from a wide range of commercial applications.
As we discuss in Section 1 as part of the AI history, MIT LL has been instrumental in developing advanced AI capabilities. For example, MIT LL has a long history in the development of human language technologies (HLT), successfully applying machine learning algorithms to difficult problems in speech recognition, machine translation, and speech understanding. Section 4 elaborates on prior applications of these technologies, as well as newer applications in the context of multi-modalities (e.g., speech, text, images, and video). An end-to-end AI system is very well suited to enhancing the capabilities of human language analysis.
Section 5 discusses AI's nascent role in cyber security. There have been cases where AI has already provided important benefits; however, much more research is needed both in the application of AI to cyber security and in the associated vulnerability to so-called adversarial AI. Adversarial AI is an area critical to the DoD, IC, and Homeland Security, where malicious adversaries can disrupt AI systems and render them untrusted in operational environments. The report concludes with specific recommendations formulating the way forward for Division 5, and with a discussion of S&T challenges and opportunities. The S&T challenges and opportunities are centered on the key elements of the AI canonical architecture, with the aim of strengthening AI capabilities across the DoD, IC, and Homeland Security in support of national security.

Simulation approach to sensor placement using Unity3D

Summary

3D game simulation engines have demonstrated utility in the areas of training, scientific analysis, and knowledge solicitation. This paper makes the case for the use of 3D game simulation engines in the field of sensor placement optimization. Our study used a series of parallel simulations in the Unity3D simulation framework to answer two questions: how many sensors of various modalities are required, and where should they be placed, to meet a desired threat detection threshold? The result is a framework that not only answers this sensor placement question, but can also be easily extended to different optimization criteria, as well as to assessing how a particular configuration responds to differing crowd flows or informed/non-informed adversaries. Additionally, we demonstrate the scalability of this framework by running parallel instances on a supercomputing grid and illustrate the processing speedup gained.
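
The core question (how many sensors, and where, to meet a detection threshold) has the shape of a coverage optimization, for which the parallel simulations supply detection outcomes. A minimal greedy sketch under assumed inputs is shown below; the `detects` oracle stands in for results a Unity3D run would produce, and none of these names come from the paper.

    # Greedy sensor-placement sketch over simulated detections
    # (illustrative; the paper's framework and data are not shown here).
    def place_sensors(candidates, n_threats, detects, threshold):
        """detects(c, i) -> bool: does a sensor at c detect simulated
           threat i? Greedily add sensors until the detection rate over
           all simulated threats meets `threshold` (a fraction in [0, 1])."""
        chosen, covered = [], set()
        remaining = list(candidates)
        while remaining and len(covered) / n_threats < threshold:
            best = max(remaining, key=lambda c: sum(
                detects(c, i) for i in range(n_threats) if i not in covered))
            gain = {i for i in range(n_threats)
                    if i not in covered and detects(best, i)}
            if not gain:
                break            # no remaining candidate improves coverage
            chosen.append(best)
            covered |= gain
            remaining.remove(best)
        return chosen, len(covered) / n_threats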

Learning network architectures of deep CNNs under resource constraints

Published in:
Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops, CVPRW, 18-22 June 2018, pp. 1784-91.

Summary

Recent works in deep learning have been driven broadly by the desire to attain high accuracy on certain challenge problems. The network architecture and other hyperparameters of many published models are typically chosen by trial-and-error experiments, with little consideration paid to resource constraints at deployment time. We propose a fully automated model learning approach that (1) treats architecture selection as part of the learning process, (2) uses a blend of broad-based random sampling and adaptive iterative refinement to explore the solution space, (3) performs optimization subject to given memory and computational constraints imposed by target deployment scenarios, and (4) is scalable and can use only a practically small number of GPUs for training. We present results showing graceful model degradation under strict resource constraints for object classification problems, using CIFAR-10 in our experiments. We also discuss future work in further extending the approach.
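
A minimal sketch of the random-sampling half of item (2) combined with the constraint handling of item (3) is shown below: draw random architectures, keep only those within a resource budget, and pick the best by evaluation. The search space, cost model, and function names are toy placeholders, not the paper's.

    # Constrained random architecture sampling sketch (toy search space
    # and cost proxy; illustrative, not the paper's method or code).
    import random

    def sample_arch():
        return {"depth": random.randint(4, 20),
                "width": random.choice([16, 32, 64, 128])}

    def cost(arch):
        # Toy stand-in for the memory/compute cost of an architecture.
        return arch["depth"] * arch["width"] ** 2

    def search(budget, n_samples, evaluate):
        """Keep only architectures within `budget`; return the best scorer.
           In practice, evaluate(arch) would train and validate the model."""
        feasible = [a for a in (sample_arch() for _ in range(n_samples))
                    if cost(a) <= budget]
        return max(feasible, key=evaluate, default=None)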

Cloud computing in tactical environments

Summary

Ground personnel at the tactical edge often lack data and analytics that would increase their effectiveness. To address this problem, this work investigates methods to deploy cloud computing capabilities in tactical environments. Our approach is to identify representative applications and to design a system that spans the software/hardware stack to support such applications while optimizing the use of scarce resources. This paper presents our high-level design and the results of initial experiments that indicate the validity of our approach.