Publications

GraphChallenge.org triangle counting performance [e-print]

Summary

The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of preparsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. The triangle counting component of GraphChallenge.org tests the performance of graph processing systems to count all the triangles in a graph and exercises key graph operations found in many graph algorithms. In 2017, 2018, and 2019, many triangle counting submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers among these submissions. These submissions show that state-of-the-art triangle counting execution time, Ttri, is a strong function of the number of edges in the graph, Ne. This execution time improved significantly from 2017 (Ttri \approx (Ne/10^8)^{4/3}) to 2018 (Ttri \approx Ne/10^9) and remained comparable from 2018 to 2019. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs.
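
As a rough illustration (not from the paper), the two quoted scaling models can be evaluated directly to see how predicted execution time changes with graph size. The sketch below assumes Ttri is in seconds and simply plugs edge counts into Ttri \approx (Ne/10^8)^{4/3} and Ttri \approx Ne/10^9.

```python
# Illustrative sketch: evaluate the two performance models quoted in the abstract
# (assuming Ttri is in seconds) at a few representative edge counts.

def t_tri_2017(n_edges: float) -> float:
    """2017 best-performer fit: Ttri ≈ (Ne / 10^8)^(4/3)."""
    return (n_edges / 1e8) ** (4.0 / 3.0)

def t_tri_2018(n_edges: float) -> float:
    """2018/2019 best-performer fit: Ttri ≈ Ne / 10^9."""
    return n_edges / 1e9

for n_e in (1e8, 1e9, 1e10):
    print(f"Ne = {n_e:.0e}: 2017 model {t_tri_2017(n_e):9.1f} s, "
          f"2018 model {t_tri_2018(n_e):6.1f} s")
```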

A hardware root-of-trust design for low-power SoC edge devices

Published in:
2020 IEEE High Performance Extreme Computing Conf., HPEC, 22-24 September 2020.

Summary

In this work, we introduce a hardware root-of-trust architecture for low-power edge devices. An accelerator-based SoC design that includes the hardware root-of-trust architecture is developed. An example application for the device is presented. We examine attacks based on physical access, given the significant threat they pose to unattended edge systems. The hardware root-of-trust provides security features to ensure the integrity of the SoC execution environment when deployed in uncontrolled, unattended locations. E-fused boot memory ensures the boot code and other security-critical software are not compromised after deployment. Digitally signed programmable instruction memory prevents execution of code from untrusted sources. A programmable finite state machine is used to enforce access policies to device resources even if the application software on the device is compromised. Access policies isolate the execution states of application and security-critical software. The hardware root-of-trust architecture saves energy with a lower hardware overhead than a separate secure enclave while eliminating software attack surfaces for access control policies.
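
As a loose illustration of the access-policy idea described above, the sketch below models a tiny, hypothetical policy finite state machine in software: resources may only be touched in execution states that the policy permits. The state, event, and resource names are invented for the example and do not reflect the paper's actual hardware design.

```python
# Hypothetical sketch of an access-policy FSM: device resources are granted only
# in execution states that the policy allows, regardless of what application
# software requests. States, events, and resources below are illustrative only.

ALLOWED = {
    "SECURE_BOOT":   {"boot_rom", "key_store"},
    "SECURE_UPDATE": {"instr_mem", "key_store"},
    "APPLICATION":   {"accelerator", "uart"},
}

TRANSITIONS = {
    ("SECURE_BOOT", "boot_ok"):       "APPLICATION",
    ("APPLICATION", "signed_update"): "SECURE_UPDATE",
    ("SECURE_UPDATE", "update_done"): "SECURE_BOOT",
}

class PolicyFSM:
    def __init__(self) -> None:
        self.state = "SECURE_BOOT"

    def request(self, resource: str) -> bool:
        """Grant access only if the current state permits the resource."""
        return resource in ALLOWED[self.state]

    def event(self, name: str) -> None:
        """Advance the state machine; unknown events leave the state unchanged."""
        self.state = TRANSITIONS.get((self.state, name), self.state)

fsm = PolicyFSM()
fsm.event("boot_ok")
assert fsm.request("accelerator")    # application state may use the accelerator
assert not fsm.request("key_store")  # but never the key store
```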

A hands-on middle-school robotics software program at MIT

Summary

Robotics competitions at the high school level attract a large number of students across the world. However, there is little emphasis on leveraging robotics to get middle school students excited about pursuing STEM education. In this paper, we describe a new program that targets middle school students in a local, four-week setting at the Massachusetts Institute of Technology (MIT). It aims to excite students by teaching the very basics of computer vision and robotics. The students program mini car-like robots, equipped with state-of-the-art computers, to navigate autonomously in a mock race track. We describe the hardware and software infrastructure that enables the program, the details of our curriculum, and the results of a short assessment. In addition, we describe four short programs, as well as a session where we teach high school teachers how to teach similar courses at their schools to their own students. The self-assessment indicates that the students feel more confident in programming and robotics after leaving the program, which we hope will enable them to pursue STEM education and robotics initiatives at school.

Toward an autonomous aerial survey and planning system for humanitarian aid and disaster response

Summary

In this paper we propose an integrated system concept for autonomously surveying and planning emergency response for areas impacted by natural disasters. Referred to as AASAPS-HADR, this system is composed of a network of ground stations and autonomous aerial vehicles interconnected by an ad hoc emergency communication network. The system objectives are threefold: to provide situational awareness of the evolving disaster event, to generate dispatch and routing plans for emergency vehicles, and to provide continuous communication networks which augment pre-existing communication infrastructure that may have been damaged or destroyed. Because it has received little attention in previous literature, we give particular emphasis to the situational awareness objective of disaster response by proposing an autonomous aerial survey that is tasked with assessing damage to existing road networks, detecting and locating human victims, and providing a cursory assessment of casualty types that can be used to inform medical response priorities. In this paper we provide a high-level system design concept, identify existing AI perception and planning algorithms that most closely suit our purposes as well as technology gaps within those algorithms, and provide initial experimental results for non-contact health monitoring using real-time pose recognition algorithms running on an Nvidia Jetson TX2 mounted on board a quadrotor UAV. Finally, we provide technology development recommendations for future phases of the AASAPS-HADR system.
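
As a purely hypothetical illustration of the kind of downstream rule a pose-based triage step might apply, the sketch below flags a likely-prone person from 2-D body keypoints by the orientation of the shoulder-to-hip axis. The keypoint names, coordinates, and threshold are invented and do not reproduce the paper's pipeline.

```python
# Hypothetical triage rule applied downstream of a real-time pose estimator:
# flag a likely-prone casualty when the torso axis is far from vertical.
import math

def torso_angle_deg(keypoints):
    """Angle of the mid-shoulder -> mid-hip vector from vertical, in degrees.
    keypoints: dict of name -> (x, y) in image coordinates (y grows downward)."""
    sx = (keypoints["left_shoulder"][0] + keypoints["right_shoulder"][0]) / 2
    sy = (keypoints["left_shoulder"][1] + keypoints["right_shoulder"][1]) / 2
    hx = (keypoints["left_hip"][0] + keypoints["right_hip"][0]) / 2
    hy = (keypoints["left_hip"][1] + keypoints["right_hip"][1]) / 2
    return abs(math.degrees(math.atan2(hx - sx, hy - sy)))

def likely_prone(keypoints, threshold_deg=60.0):
    return torso_angle_deg(keypoints) > threshold_deg

upright = {"left_shoulder": (100, 50), "right_shoulder": (140, 50),
           "left_hip": (105, 150), "right_hip": (135, 150)}
prone   = {"left_shoulder": (50, 200), "right_shoulder": (50, 240),
           "left_hip": (180, 205), "right_hip": (180, 235)}
print(likely_prone(upright), likely_prone(prone))   # False True
```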

GraphChallenge.org: raising the bar on graph analytic performance

Summary

The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of preparsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. Graph Challenge 2017 received 22 submissions by 111 authors from 36 organizations. The submissions highlighted graph analytic innovations in hardware, software, algorithms, systems, and visualization. These submissions produced many comparable performance measurements that can be used for assessing the current state of the art of the field. Numerous submissions implemented the triangle counting challenge, resulting in over 350 distinct measurements. Analysis of these submissions shows that their execution time is a strong function of the number of edges in the graph, Ne, and is typically proportional to Ne^{4/3} for large values of Ne. Combining the model fits of the submissions presents a picture of the current state of the art of graph analysis, which is typically 10^8 edges processed per second for graphs with 10^8 edges. These results are 30 times faster than serial implementations commonly used by many graph analysts and underscore the importance of making these performance benefits available to the broader community. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs.
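
For readers connecting the two quoted figures, the following restatement (ours, not the paper's) shows how an execution-time scaling of Ne^{4/3}, normalized so that a 10^8-edge graph takes about one second, implies the quoted processing rate of roughly 10^8 edges per second at Ne = 10^8:

```latex
% Assumes T_tri ≈ (N_e/10^8)^{4/3} seconds, i.e., the 4/3-power scaling quoted in
% the abstract with the constant pinned so that a 10^8-edge graph takes ~1 s.
\[
  R \;=\; \frac{N_e}{T_{\mathrm{tri}}}
    \;\approx\; \frac{N_e}{\left(N_e/10^{8}\right)^{4/3}}
    \;=\; 10^{32/3}\, N_e^{-1/3},
  \qquad
  R\big|_{N_e = 10^{8}} \;\approx\; 10^{8}\ \text{edges per second}.
\]
```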

Adversarial co-evolution of attack and defense in a segmented computer network environment

Published in:
Proc. Genetic and Evolutionary Computation Conf. Companion, GECCO 2018, 15-19 July 2018, pp. 1648-1655.

Summary

In computer security, guidance is slim on how to prioritize or configure the many available defensive measures, when guidance is available at all. We show how a competitive co-evolutionary algorithm framework can identify defensive configurations that are effective against a range of attackers. We consider network segmentation, a widely recommended defensive strategy, deployed against the threat of serial network security attacks that delay the mission of the network's operator. We employ a simulation model to investigate the effectiveness over time of different defensive strategies against different attack strategies. For a set of four network topologies, we generate strong availability attack patterns that were not identified a priori. Then, by combining the simulation with a coevolutionary algorithm to explore the adversaries' action spaces, we identify effective configurations that minimize mission delay when facing the attacks. The novel application of co-evolutionary computation to enterprise network security represents a step toward course-of-action determination that is robust to responses by intelligent adversaries.
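
The following sketch shows, in schematic and hypothetical form, what a competitive co-evolutionary loop of this general kind can look like: a defender population (segment isolation choices) and an attacker population (segments to target) are scored against each other by a stand-in simulation of mission delay. The genome encodings, fitness stub, and parameters are placeholders, not the paper's model.

```python
# Generic, hypothetical competitive co-evolution loop: defenders evolve which of
# 8 network segments to isolate, attackers evolve which segments to target, and
# a stand-in simulate() scores mission delay. None of this reproduces the
# paper's simulation model or encodings.
import random

def simulate(defense, attack):
    """Placeholder simulation: delay = attacked segments left unisolated,
    plus a small operational cost per isolated segment so full isolation is not free."""
    return sum(1 for d, a in zip(defense, attack) if a and not d) + 0.25 * sum(defense)

def evolve(population, scores, rng):
    """Keep the better (lower-score) half, refill with single-bit-flip mutants."""
    ranked = [p for _, p in sorted(zip(scores, population), key=lambda t: t[0])]
    survivors = ranked[: len(ranked) // 2]
    children = []
    for _ in survivors:
        child = list(rng.choice(survivors))
        i = rng.randrange(len(child))
        child[i] = 1 - child[i]
        children.append(child)
    return survivors + children

rng = random.Random(0)
defenders = [[rng.randint(0, 1) for _ in range(8)] for _ in range(10)]
attackers = [[rng.randint(0, 1) for _ in range(8)] for _ in range(10)]

for generation in range(25):
    # Defenders are scored by worst-case delay over all current attackers;
    # attackers by the (negated) delay they inflict on the current best defender.
    d_scores = [max(simulate(d, a) for a in attackers) for d in defenders]
    best_defender = defenders[d_scores.index(min(d_scores))]
    a_scores = [-simulate(best_defender, a) for a in attackers]
    defenders = evolve(defenders, d_scores, rng)
    attackers = evolve(attackers, a_scores, rng)

print("example evolved defender (1 = isolate segment):", best_defender)
```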

Learning network architectures of deep CNNs under resource constraints

Published in:
Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops, CVPRW, 18-22 June 2018, pp. 1784-91.

Summary

Recent works in deep learning have been driven broadly by the desire to attain high accuracy on certain challenge problems. The network architecture and other hyperparameters of many published models are typically chosen by trial-and-error experiments with little consideration paid to resource constraints at deployment time. We propose a fully automated model learning approach that (1) treats architecture selection as part of the learning process, (2) uses a blend of broad-based random sampling and adaptive iterative refinement to explore the solution space, (3) performs optimization subject to given memory and computational constraints imposed by target deployment scenarios, and (4) is scalable and can use only a practically small number of GPUs for training. We present results that show graceful model degradation under strict resource constraints for object classification problems using CIFAR-10 in our experiments. We also discuss future work in further extending the approach.
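
As a simplified, hypothetical illustration of one ingredient of such an approach (broad random sampling filtered by deployment budgets before any training), the sketch below draws random small-CNN layer widths and keeps only candidates whose rough parameter and multiply-accumulate estimates fit fixed budgets. The cost model and budget numbers are invented for the example and are not the paper's.

```python
# Illustrative only: sample random CNN layer widths, estimate rough parameter and
# multiply-accumulate (MAC) costs for 3x3 convolutions, and keep only candidates
# that fit a deployment budget before any training is attempted.
import random

def sample_architecture():
    """Random depth and per-layer channel widths for a small CNN on 32x32 input."""
    depth = random.randint(2, 6)
    return [random.choice([16, 32, 64, 128]) for _ in range(depth)]

def estimate_cost(channels, in_channels=3, spatial=32):
    """Rough parameter and MAC counts, assuming stride-2 downsampling per layer."""
    params, macs = 0, 0
    for out_channels in channels:
        params += 3 * 3 * in_channels * out_channels
        macs += 3 * 3 * in_channels * out_channels * spatial * spatial
        in_channels = out_channels
        spatial = max(1, spatial // 2)
    return params, macs

PARAM_BUDGET, MAC_BUDGET = 500_000, 50_000_000

feasible = []
for _ in range(1000):
    arch = sample_architecture()
    params, macs = estimate_cost(arch)
    if params <= PARAM_BUDGET and macs <= MAC_BUDGET:
        feasible.append((params, macs, arch))

print(f"{len(feasible)} of 1000 sampled architectures fit the budget")
```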

Bringing physical construction and real-world data collection into a massively open online course (MOOC)

Summary

This Work-In-Progress paper details the process and lessons learned when converting a hands-on engineering minicourse to a scalable, self-paced Massively Open Online Course (MOOC). Online courseware has been part of academic and industry training and learning for decades. Learning activities in online courses strive to mimic in-person delivery by including lectures, homework assignments, software exercises and exams. While these instructional activities provide "theory and practice" for many disciplines, engineering courses often require hands-on activities with physical tools, devices and equipment. To accommodate the need for this type of learning, MIT Lincoln Laboratory's "Build A Small Radar" (BSR) course was used to explore teaching and learning strategies that support the inclusion of physical construction and real-world data collection in a MOOC. These tasks are encountered across a range of engineering disciplines and the methods illustrated here are easily generalized to learning experiences in engineering and science disciplines.

Streaming graph challenge: stochastic block partition

Summary

An important objective for analyzing real-world graphs is to achieve scalable performance on large, streaming graphs. A challenging and relevant example is the graph partition problem. As a combinatorial problem, graph partition is NP-hard, but existing relaxation methods provide reasonable approximate solutions that can be scaled for large graphs. Competitive benchmarks and challenges have proven to be an effective means to advance state-of-the-art performance and foster community collaboration. This paper describes a graph partition challenge with a baseline partition algorithm of sub-quadratic complexity. The algorithm employs rigorous Bayesian inferential methods based on a statistical model that captures characteristics of the real-world graphs. This strong foundation enables the algorithm to address limitations of well-known graph partition approaches such as modularity maximization. This paper describes various aspects of the challenge including: (1) the data sets and streaming graph generator, (2) the baseline partition algorithm with pseudocode, (3) an argument for the correctness of parallelizing the Bayesian inference, (4) different parallel computation strategies such as node-based parallelism and matrix-based parallelism, (5) evaluation metrics for partition correctness and computational requirements, (6) preliminary timing of a Python-based demonstration code and the open source C++ code, and (7) considerations for partitioning the graph in streaming fashion. Data sets and source code for the algorithm, as well as metrics and detailed documentation, are available at GraphChallenge.org.
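
To make the notion of partition correctness concrete, the sketch below computes pairwise precision and recall of a candidate block assignment against a ground-truth assignment, which is one common way such partitions are scored. It is a generic illustration, not the challenge's official evaluation code.

```python
# Generic illustration of one partition-correctness metric: pairwise precision
# and recall of a candidate block assignment against a ground-truth assignment.
from itertools import combinations

def pairwise_precision_recall(truth, candidate):
    """truth, candidate: dicts mapping node -> block label."""
    tp = fp = fn = 0
    for u, v in combinations(truth, 2):
        same_true = truth[u] == truth[v]
        same_cand = candidate[u] == candidate[v]
        if same_cand and same_true:
            tp += 1
        elif same_cand and not same_true:
            fp += 1
        elif same_true and not same_cand:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

truth     = {0: "A", 1: "A", 2: "B", 3: "B", 4: "B"}
candidate = {0: "x", 1: "x", 2: "x", 3: "y", 4: "y"}
print(pairwise_precision_recall(truth, candidate))   # (0.5, 0.5)
```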

Static graph challenge: subgraph isomorphism

Summary

The rise of graph analytic systems has created a need for ways to measure and compare the capabilities of these systems. Graph analytics present unique scalability difficulties. The machine learning, high performance computing, and visual analytics communities have wrestled with these difficulties for decades and developed methodologies for creating challenges to move these communities forward. The proposed Subgraph Isomorphism Graph Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a graph challenge that is reflective of many real-world graph analytics processing systems. The Subgraph Isomorphism Graph Challenge is a holistic specification with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. Subgraph isomorphism is amenable to both vertex-centric implementations and array-based implementations (e.g., using the GraphBLAS.org standard). The computations are simple enough that performance predictions can be made based on simple computing hardware models. The surrounding kernels provide the context for each kernel that allows rigorous definition of both the input and the output for each kernel. Furthermore, since the proposed graph challenge is scalable in both problem size and hardware, it can be used to measure and quantitatively compare a wide range of present-day and future systems. Serial implementations in C++, Python, Python with Pandas, Matlab, Octave, and Julia have been developed, and their single-threaded performance has been measured. Specifications, data, and software are publicly available at GraphChallenge.org.
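
As a small illustration of the array-based style the abstract mentions, the sketch below counts triangles with SciPy sparse matrices: for a symmetric, loop-free adjacency matrix A, each triangle contributes six times to sum(A .* (A @ A)). It is an illustration of the technique, not the challenge's reference serial implementation.

```python
# Minimal array-based triangle count in the GraphBLAS spirit, using SciPy sparse
# matrices. Assumes a symmetric 0/1 adjacency matrix with no self-loops.
import numpy as np
from scipy.sparse import csr_matrix

def count_triangles(adjacency: csr_matrix) -> int:
    a = adjacency
    # Elementwise mask of the two-hop path counts by the edge set, then divide
    # by 6 because each triangle is counted twice at each of its three vertices.
    return int(a.multiply(a @ a).sum() // 6)

# 4-clique: contains C(4,3) = 4 triangles.
dense = np.ones((4, 4), dtype=np.int64) - np.eye(4, dtype=np.int64)
print(count_triangles(csr_matrix(dense)))   # 4
```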