Publications

GraphChallenge.org triangle counting performance [e-print]

Summary

The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of preparsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. The triangle counting component of GraphChallenge.org tests the performance of graph processing systems to count all the triangles in a graph and exercises key graph operations found in many graph algorithms. In 2017, 2018, and 2019 many triangle counting submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers of these submissions. These submissions show that their state-of-the-art triangle counting execution time, Ttri, is a strong function of the number of edges in the graph, Ne, which improved significantly from 2017 (Ttri ≈ (Ne/10^8)^(4/3)) to 2018 (Ttri ≈ Ne/10^9) and remained comparable from 2018 to 2019. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs.
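
As a point of reference for what the benchmark measures, the following is a minimal, illustrative Python sketch of the triangle counting kernel itself (counting the common neighbors of each edge's endpoints); it is not any submission's implementation, and the small edge list in the example is made up.

```python
# Minimal triangle-counting sketch; Graph Challenge submissions use highly
# optimized sparse linear algebra or set-intersection kernels instead.
def count_triangles(edges):
    """Count triangles in an undirected graph given as (u, v) pairs."""
    adj = {}
    for u, v in edges:
        if u == v:
            continue  # ignore self-loops
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    undirected = {tuple(sorted((u, v))) for u in adj for v in adj[u]}
    triangles = 0
    for u, v in undirected:
        # Common neighbors of u and v each close a triangle with edge (u, v);
        # every triangle is counted once per edge, hence the division by 3.
        triangles += len(adj[u] & adj[v])
    return triangles // 3

# A 4-cycle with one chord contains exactly two triangles.
print(count_triangles([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # -> 2
```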

A hardware root-of-trust design for low-power SoC edge devices

Published in:
2020 IEEE High Performance Extreme Computing Conf., HPEC, 22-24 September 2020.

Summary

In this work, we introduce a hardware root-of-trust architecture for low-power edge devices. An accelerator-based SoC design that includes the hardware root-of-trust architecture is developed. An example application for the device is presented. We examine attacks based on physical access, given the significant threat they pose to unattended edge systems. The hardware root-of-trust provides security features to ensure the integrity of the SoC execution environment when deployed in uncontrolled, unattended locations. E-fused boot memory ensures the boot code and other security-critical software are not compromised after deployment. Digitally signed programmable instruction memory prevents execution of code from untrusted sources. A programmable finite state machine is used to enforce access policies to device resources even if the application software on the device is compromised. Access policies isolate the execution states of application and security-critical software. The hardware root-of-trust architecture saves energy with a lower hardware overhead than a separate secure enclave while eliminating software attack surfaces for access control policies.
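
As a rough illustration of the access-policy idea described above, here is a small software sketch of a state machine that only permits resource accesses allowed in the current execution state; the state names, resources, and policy table are invented for illustration and do not reflect the actual SoC design.

```python
# Illustrative access-policy state machine in the spirit of the paper's
# hardware root-of-trust; states, resources, and transitions are assumptions.

# Resources each execution state is allowed to touch.
POLICY = {
    "secure_boot": {"boot_rom", "efuse_keys"},
    "security_sw": {"efuse_keys", "crypto_engine"},
    "application": {"accelerator", "dram"},
}

# Legal transitions between execution states (application code can never
# transition directly back into the secure-boot state).
TRANSITIONS = {
    "secure_boot": {"security_sw"},
    "security_sw": {"application"},
    "application": {"security_sw"},
}

class AccessPolicyFSM:
    def __init__(self):
        self.state = "secure_boot"

    def request(self, resource):
        """Return True only if the current state may access the resource."""
        return resource in POLICY[self.state]

    def transition(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise PermissionError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

fsm = AccessPolicyFSM()
print(fsm.request("efuse_keys"))   # True: boot code may read the keys
fsm.transition("security_sw")
fsm.transition("application")
print(fsm.request("efuse_keys"))   # False: application code is isolated from keys
```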

Data trust methodology: a blockchain-based approach to instrumenting complex systems

Summary

Increased data sharing and interoperability have created challenges in maintaining a level of trust and confidence in Department of Defense (DoD) systems. As tightly-coupled, unique, static, and rigorously validated mission processing solutions have been supplemented with newer, more dynamic, and complex counterparts, mission effectiveness has been impacted. On the one hand, newer, deeper processing with more diverse data inputs can offer resilience against overconfident decisions under rapidly changing conditions. On the other hand, the multitude of diverse methods for reaching a decision may be in apparent conflict and decrease decision confidence. This has sometimes manifested itself in the presentation of simultaneous, divergent information to high-level decision makers. In some important specific instances, this has caused the operators to be less efficient in determining the best course of action. In this paper, we will describe an approach to more efficiently and effectively leverage new data sources and processing solutions, without requiring redesign of each algorithm or the system itself. We achieve this by instrumenting the processing chains with an enterprise blockchain framework. Once instrumented, we can collect, verify, and validate data processing chains by tracking data provenance, using smart contracts to add dynamically calculated metadata to an immutable and distributed ledger. This noninvasive approach to verification and validation in data sharing environments has the power to improve decision confidence at larger scale than manual approaches, such as consulting individual developer subject matter experts to understand system behavior. In this paper, we will present our study of the following:
1. Types of information (i.e., knowledge) that are supplied and leveraged by decision makers and operational contextualized data processes (Figure 1)
2. Benefits of verifying data provenance, integrity, and validity within an operational processing chain
3. Our blockchain technology framework, coupled with analytical techniques, which together provide a verification and validation capability that could be deployed into existing DoD data-processing systems with negligible performance and operational interference.
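
To make the provenance-tracking idea concrete, the sketch below shows a toy hash-chained ledger in Python that appends metadata for each processing stage and can detect after-the-fact tampering. The field names and the SHA-256 choice are illustrative assumptions; the actual approach uses an enterprise blockchain framework with smart contracts rather than this in-memory structure.

```python
# Minimal hash-chained provenance ledger (illustrative only).
import hashlib, json, time

class ProvenanceLedger:
    def __init__(self):
        self.entries = []

    def record(self, stage, input_digest, output_digest, metadata=None):
        """Append one provenance record for a processing stage."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "stage": stage,
            "input": input_digest,
            "output": output_digest,
            "metadata": metadata or {},
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Check that no recorded entry has been altered after the fact."""
        prev = "0" * 64
        for entry in self.entries:
            unhashed = dict(entry)
            stored_hash = unhashed.pop("hash")
            recomputed = hashlib.sha256(
                json.dumps(unhashed, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or recomputed != stored_hash:
                return False
            prev = stored_hash
        return True

ledger = ProvenanceLedger()
ledger.record("ingest", "sha256:raw", "sha256:parsed", {"source": "sensor_a"})
ledger.record("fusion", "sha256:parsed", "sha256:track", {"algorithm": "v2"})
print(ledger.verify())  # True
```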

A hands-on middle-school robotics software program at MIT

Summary

Robotics competitions at the high school level attract a large number of students across the world. However, there is little emphasis on leveraging robotics to get middle school students excited about pursuing STEM education. In this paper, we describe a new program that targets middle school students in a local, four-week setting at the Massachusetts Institute of Technology (MIT). It aims to excite students by teaching the very basics of computer vision and robotics. The students program mini car-like robots, equipped with state-of-the-art computers, to navigate autonomously in a mock race track. We describe the hardware and software infrastructure that enables the program, the details of our curriculum, and the results of a short assessment. In addition, we describe four short programs, as well as a session in which we teach high school teachers how to teach similar courses to their own students at their schools. The self-assessment indicates that the students feel more confident in programming and robotics after leaving the program, which we hope will enable them to pursue STEM education and robotics initiatives at school.

Toward an autonomous aerial survey and planning system for humanitarian aid and disaster response

Summary

In this paper, we propose an integrated system concept for autonomously surveying and planning emergency response for areas impacted by natural disasters. Referred to as AASAPS-HADR, this system is composed of a network of ground stations and autonomous aerial vehicles interconnected by an ad hoc emergency communication network. The system objectives are three-fold: to provide situational awareness of the evolving disaster event, to generate dispatch and routing plans for emergency vehicles, and to provide continuous communication networks which augment pre-existing communication infrastructure that may have been damaged or destroyed. Because it has received little attention in previous literature, we give particular emphasis to the situational awareness objective of disaster response, proposing an autonomous aerial survey that is tasked with assessing damage to existing road networks, detecting and locating human victims, and providing a cursory assessment of casualty types that can be used to inform medical response priorities. In this paper we provide a high-level system design concept, identify existing AI perception and planning algorithms that most closely suit our purposes as well as technology gaps within those algorithms, and provide initial experimental results for non-contact health monitoring using real-time pose recognition algorithms running on an NVIDIA Jetson TX2 mounted on board a quadrotor UAV. Finally, we provide technology development recommendations for future phases of the AASAPS-HADR system.

GraphChallenge.org: raising the bar on graph analytic performance

Summary

The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of preparsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. Graph Challenge 2017 received 22 submissions by 111 authors from 36 organizations. The submissions highlighted graph analytic innovations in hardware, software, algorithms, systems, and visualization. These submissions produced many comparable performance measurements that can be used for assessing the current state of the art of the field. There were numerous submissions that implemented the triangle counting challenge and resulted in over 350 distinct measurements. Analysis of these submissions shows that their execution time is a strong function of the number of edges in the graph, Ne, and is typically proportional to Ne^(4/3) for large values of Ne. Combining the model fits of the submissions presents a picture of the current state of the art of graph analysis, which is typically 10^8 edges processed per second for graphs with 10^8 edges. These results are 30 times faster than serial implementations commonly used by many graph analysts and underscore the importance of making these performance benefits available to the broader community. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs.
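
A small numerical illustration of the reported scaling: combining the Ne^(4/3) proportionality with the quoted rate of about 10^8 edges per second at Ne = 10^8 gives Ttri ≈ (Ne/10^8)^(4/3) seconds, so the effective rate falls off for larger graphs. The sketch below simply evaluates that model fit at a few graph sizes.

```python
# Rate implied by the 2017 model fit: T_tri ≈ (Ne / 1e8)**(4/3) seconds, i.e.
# about 1e8 edges/second at Ne = 1e8 and a declining rate for larger graphs.
def t_tri_2017(num_edges):
    return (num_edges / 1e8) ** (4.0 / 3.0)

for num_edges in (1e7, 1e8, 1e9, 1e10):
    seconds = t_tri_2017(num_edges)
    print(f"Ne = {num_edges:.0e}: time ≈ {seconds:10.2f} s, "
          f"rate ≈ {num_edges / seconds:.2e} edges/s")
```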

Detecting intracranial hemorrhage with deep learning

Published in:
40th Int. Conf. of the IEEE Engineering in Medicine and Biology Society, EMBC, 17-21 July 2018.

Summary

Initial results are reported on automated detection of intracranial hemorrhage from CT, which would be valuable in a computer-aided diagnosis system to help the radiologist detect subtle hemorrhages. Previous work has taken a classic approach involving multiple steps of alignment, image processing, image corrections, handcrafted feature extraction, and classification. Our current work instead uses a deep convolutional neural network to simultaneously learn features and classification, eliminating the multiple hand-tuned steps. Performance is improved by computing the mean output for rotations of the input image. Postprocessing is additionally applied to the CNN output to significantly improve specificity. The database consists of 134 CT cases (4,300 images), divided into 60, 5, and 69 cases for training, validation, and test. Each case typically includes multiple hemorrhages. Performance on the test set was 81% sensitivity per lesion (34/42 lesions) and 98% specificity per case (45/46 cases). The sensitivity is comparable to previous results (on different datasets), but with a significantly higher specificity. In addition, insights are shared to improve performance as the database is expanded.
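
The rotation averaging mentioned above can be sketched as simple test-time augmentation. In the snippet below, the `model` callable and the choice of four 90-degree rotations are placeholder assumptions rather than the paper's exact configuration.

```python
# Sketch of averaging a classifier's output over rotations of the input slice.
import numpy as np

def predict_with_rotation_averaging(model, image):
    """Mean hemorrhage probability over rotated copies of a 2-D CT slice."""
    scores = []
    for k in range(4):                 # 0, 90, 180, 270 degrees
        rotated = np.rot90(image, k)
        scores.append(model(rotated))  # model returns P(hemorrhage) in [0, 1]
    return float(np.mean(scores))

# Usage with a dummy stand-in model:
dummy_model = lambda img: 1.0 / (1.0 + np.exp(-img.mean()))
ct_slice = np.random.randn(512, 512)
print(predict_with_rotation_averaging(dummy_model, ct_slice))
```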

Adversarial co-evolution of attack and defense in a segmented computer network environment

Published in:
Proc. Genetic and Evolutionary Computation Conf. Companion, GECCO 2018, 15-19 July 2018, pp. 1648-1655.

Summary

In computer security, guidance is slim on how to prioritize or configure the many available defensive measures, when guidance is available at all. We show how a competitive co-evolutionary algorithm framework can identify defensive configurations that are effective against a range of attackers. We consider network segmentation, a widely recommended defensive strategy, deployed against the threat of serial network security attacks that delay the mission of the network's operator. We employ a simulation model to investigate the effectiveness over time of different defensive strategies against different attack strategies. For a set of four network topologies, we generate strong availability attack patterns that were not identified a priori. Then, by combining the simulation with a coevolutionary algorithm to explore the adversaries' action spaces, we identify effective configurations that minimize mission delay when facing the attacks. The novel application of co-evolutionary computation to enterprise network security represents a step toward course-of-action determination that is robust to responses by intelligent adversaries.
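
For readers unfamiliar with competitive co-evolution, the skeleton below shows the basic loop of two populations evolving against each other. The bit-string encodings, the mutation scheme, and the `mission_delay` stub are placeholders standing in for the paper's network-segmentation simulation.

```python
# Skeleton of a competitive co-evolutionary loop; the simulator is stubbed out.
import random

def mission_delay(defense, attack):
    """Stand-in for the simulator: lower is better for the defender."""
    return sum(a and not d for d, a in zip(defense, attack))

def mutate(genome, rate=0.1):
    return [b ^ (random.random() < rate) for b in genome]

def coevolve(genome_len=16, pop=20, generations=50):
    defenders = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
    attackers = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
    for _ in range(generations):
        # Defenders are scored against the attacker population (minimize delay);
        # attackers are scored against the defender population (maximize delay).
        defenders.sort(key=lambda d: sum(mission_delay(d, a) for a in attackers))
        attackers.sort(key=lambda a: -sum(mission_delay(d, a) for d in defenders))
        # Keep the better half of each population and refill with mutated copies.
        defenders = defenders[: pop // 2] + [mutate(d) for d in defenders[: pop // 2]]
        attackers = attackers[: pop // 2] + [mutate(a) for a in attackers[: pop // 2]]
    return defenders[0], attackers[0]

best_defense, best_attack = coevolve()
print(best_defense, best_attack)
```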

Learning network architectures of deep CNNs under resource constraints

Published in:
Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops, CVPRW, 18-22 June 2018, pp. 1784-1791.

Summary

Recent works in deep learning have been driven broadly by the desire to attain high accuracy on certain challenge problems. The network architecture and other hyperparameters of many published models are typically chosen by trial-and-error experiments with little consideration paid to resource constraints at deployment time. We propose a fully automated model learning approach that (1) treats architecture selection as part of the learning process, (2) uses a blend of broad-based random sampling and adaptive iterative refinement to explore the solution space, (3) performs optimization subject to given memory and computational constraints imposed by target deployment scenarios, and (4) is scalable and can use only a practically small number of GPUs for training. We present results that show graceful model degradation under strict resource constraints for object classification problems, using CIFAR-10 in our experiments. We also discuss future work in further extending the approach.
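
The broad-based random sampling step under a resource budget can be sketched as follows; the layer search space, the parameter budget, and the 3x3-convolution cost model are illustrative assumptions, not the paper's actual search space or constraints.

```python
# Sketch of constraint-aware random architecture sampling (broad sampling stage).
import random

MAX_PARAMS = 300_000            # assumed deployment budget
WIDTHS = [16, 32, 64, 128]      # candidate channel counts per conv block
DEPTHS = [2, 3, 4, 5, 6]        # candidate number of conv blocks

def param_count(widths):
    """Rough parameter count for a chain of 3x3 conv blocks on RGB input."""
    total, in_ch = 0, 3
    for out_ch in widths:
        total += 3 * 3 * in_ch * out_ch + out_ch   # weights + biases
        in_ch = out_ch
    return total

def sample_architecture():
    return [random.choice(WIDTHS) for _ in range(random.choice(DEPTHS))]

def sample_within_budget(num_candidates=100):
    """Keep only randomly sampled architectures that satisfy the budget."""
    return [arch for arch in (sample_architecture() for _ in range(num_candidates))
            if param_count(arch) <= MAX_PARAMS]

candidates = sample_within_budget()
print(f"{len(candidates)} of 100 sampled architectures fit the budget")
if candidates:
    print("example:", candidates[0], "->", param_count(candidates[0]), "parameters")
```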

Trust and performance in human-AI systems for multi-domain command and control

Summary

Command and Control is one of the core tenets of joint military operations; however, the nature of modern security threats, the democratization of technology globally, and the speed and scope of information flows are stressing traditional operational paradigms, necessitating a fundamental shift to better concurrently integrate and operate across multiple physical and virtual domains. In this paper, we aim to address these challenges through the proposition of three concepts that will guide the creation of integrated human-AI Command and Control systems, inspired by recent advances and successes within the commercial sector and academia. The first concept is a framework for integration of AI capabilities into the enterprise that optimizes trust and performance within the workforce. The second is an approach for facilitating multi-domain operations through real-time creation of multi-organization, multi-domain task teams by dynamic management of information abstraction, teaming, and risk control. The third is a new paradigm for multi-level data security and multi-organization data sharing that will be a key enabler of joint and coalition multi-domain operation in the future. Lastly, we propose a set of recommendations towards the research, development, and instantiation of these transformative advances in Command and Control capability.