Publications

GraphChallenge.org triangle counting performance [e-print]

Summary

The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of preparsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. The triangle counting component of GraphChallenge.org tests the ability of graph processing systems to count all the triangles in a graph and exercises key graph operations found in many graph algorithms. In 2017, 2018, and 2019, many triangle counting submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers among these submissions. These submissions show that state-of-the-art triangle counting execution time, $T_{tri}$, is a strong function of the number of edges in the graph, $N_e$, which improved significantly from 2017 ($T_{tri} \approx (N_e/10^8)^{4/3}$) to 2018 ($T_{tri} \approx N_e/10^9$) and remained comparable from 2018 to 2019. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs.
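
As a rough worked example of the reported scaling, a graph with $N_e = 10^9$ edges would take roughly $(10^9/10^8)^{4/3} \approx 22$ seconds under the 2017 trend versus about 1 second under the 2018 trend. The sketch below is a minimal serial illustration of linear-algebraic triangle counting (Python/SciPy); it is not any particular Graph Challenge submission, and the adjacency-matrix identity it uses is just one common formulation.

```python
# Minimal triangle-counting sketch (illustrative, not a Graph Challenge submission):
# counts triangles of an undirected graph from its sparse adjacency matrix using
# the identity  #triangles = sum(A .* (A @ A)) / 6.
import numpy as np
import scipy.sparse as sp

def count_triangles(edges, n):
    """edges: iterable of undirected (i, j) pairs with i != j; n: vertex count."""
    rows, cols = zip(*edges)
    a = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    a = ((a + a.T) > 0).astype(np.int64).tocsr()   # symmetric 0/1 adjacency matrix
    # Each triangle {i, j, k} contributes 6 to sum(A .* (A @ A)).
    return int(a.multiply(a @ a).sum()) // 6

# A 4-clique contains exactly 4 triangles.
clique = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(count_triangles(clique, 4))  # -> 4
```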

GraphChallenge.org sparse deep neural network performance [e-print]

Summary

The MIT/IEEE/Amazon GraphChallenge.org encourages community approaches to developing new solutions for analyzing graphs and sparse data. Sparse AI analytics present unique scalability difficulties. The Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems. The sparse DNN challenge is based on a mathematically well-defined DNN inference computation and can be implemented in any programming environment. In 2019, several sparse DNN challenge submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers among these submissions. These submissions show that state-of-the-art sparse DNN execution time, $T_{DNN}$, is a strong function of the number of DNN operations performed, $N_{op}$. The sparse DNN challenge provides a clear picture of current sparse DNN systems and underscores the need for new innovations to achieve high performance on very large sparse DNNs.
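
The inference computation behind the challenge is a sequence of sparse matrix products followed by a rectified nonlinearity. The sketch below (Python/SciPy) is a minimal reading of such a layer-by-layer loop; the per-layer scalar bias and the way it is applied only to stored entries are illustrative assumptions, not the challenge's exact specification.

```python
# Hedged sketch of sparse DNN inference in the spirit of the challenge:
# Y_{k+1} = ReLU(Y_k @ W_k + b_k), with feature and weight matrices kept sparse.
# Bias handling and the absence of any clipping threshold are assumptions.
import numpy as np
import scipy.sparse as sp

def sparse_dnn_inference(y0, weights, biases):
    """y0: sparse input features (samples x neurons);
    weights: list of sparse layer matrices; biases: list of per-layer scalars."""
    y = y0.tocsr()
    for w, b in zip(weights, biases):
        z = y @ w                       # sparse matrix product
        z.data += b                     # add bias to stored (nonzero) entries only
        z.data = np.maximum(z.data, 0)  # ReLU
        z.eliminate_zeros()             # keep the result sparse
        y = z
    return y

# Tiny random example: 3 layers of 1024 neurons, ~0.3% nonzero weights.
y = sp.random(8, 1024, density=0.05, format="csr", random_state=0)
ws = [sp.random(1024, 1024, density=0.003, format="csr", random_state=i) for i in range(3)]
print(sparse_dnn_inference(y, ws, biases=[-0.1] * 3).nnz)
```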

75,000,000,000 streaming inserts/second using hierarchical hypersparse GraphBLAS matrices

Summary

The SuiteSparse GraphBLAS C-library implements high performance hypersparse matrices with bindings to a variety of languages (Python, Julia, and Matlab/Octave). GraphBLAS provides a lightweight in-memory database implementation of hypersparse matrices that are ideal for analyzing many types of network data, while providing rigorous mathematical guarantees, such as linearity. Streaming updates of hypersparse matrices put enormous pressure on the memory hierarchy. This work benchmarks an implementation of hierarchical hypersparse matrices that reduces memory pressure and dramatically increases the update rate into a hypersparse matrix. Hierarchical hypersparse matrices are parameterized by the number of entries allowed in each level of the hierarchy before an update is cascaded to the next level. The parameters are easily tunable to achieve optimal performance for a variety of applications. Hierarchical hypersparse matrices achieve over 1,000,000 updates per second in a single instance. Scaling to 31,000 instances of hierarchical hypersparse matrix arrays on 1,100 server nodes on the MIT SuperCloud achieved a sustained update rate of 75,000,000,000 updates per second. This capability allows the MIT SuperCloud to analyze extremely large streaming network data sets.
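
A minimal sketch of the cascade rule described above, assuming a simple list of capped levels: updates land in the smallest level, and a level that exceeds its entry cap is added into the next, larger level and cleared. The level cutoffs and the SciPy-based storage are illustrative assumptions, not the SuiteSparse GraphBLAS implementation.

```python
# Hedged sketch of a hierarchical hypersparse accumulator: each level holds a
# sparse matrix with a cap on stored entries; when a level overflows, it is
# added (associative "+") into the next, larger level and cleared.
# (A real hypersparse format avoids CSR's O(#rows) overhead; CSR here is only
# for illustration.)
import scipy.sparse as sp

class HierarchicalHypersparse:
    def __init__(self, shape, cuts=(1_000, 100_000, 10_000_000)):
        self.shape = shape
        self.cuts = cuts
        # One capped sparse matrix per level, plus an uncapped final level.
        self.levels = [sp.csr_matrix(shape) for _ in range(len(cuts) + 1)]

    def update(self, rows, cols, vals):
        block = sp.coo_matrix((vals, (rows, cols)), shape=self.shape).tocsr()
        self.levels[0] = self.levels[0] + block
        # Cascade: fold an over-full level into the next one and clear it.
        for i, cut in enumerate(self.cuts):
            if self.levels[i].nnz > cut:
                self.levels[i + 1] = self.levels[i + 1] + self.levels[i]
                self.levels[i] = sp.csr_matrix(self.shape)

    def total(self):
        m = sp.csr_matrix(self.shape)
        for level in self.levels:
            m = m + level
        return m

acc = HierarchicalHypersparse((100_000, 100_000), cuts=(1_000, 100_000))
acc.update([1, 5, 7], [2, 3, 4], [1.0, 1.0, 1.0])
print(acc.total().nnz)  # -> 3
```

Because most updates touch only the small top level, the expensive merges into the large levels are rare, which is the memory-pressure argument made above.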

Automated discovery of cross-plane event-based vulnerabilities in software-defined networking

Summary

Software-defined networking (SDN) achieves a programmable control plane through the use of logically centralized, event-driven controllers and through network applications (apps) that extend the controllers' functionality. As control plane decisions are often based on the data plane, it is possible for carefully crafted malicious data plane inputs to direct the control plane towards unwanted states that bypass network security restrictions (i.e., cross-plane attacks). Unfortunately, because of the complex interplay among controllers, apps, and data plane inputs, at present it is difficult to systematically identify and analyze these cross-plane vulnerabilities. We present EVENTSCOPE, a vulnerability detection tool that automatically analyzes SDN control plane event usage, discovers candidate vulnerabilities based on missing event-handling routines, and validates vulnerabilities based on data plane effects. To accurately detect missing event handlers without ground truth or developer aid, we cluster apps according to similar event usage and mark inconsistencies as candidates. We create an event flow graph to observe a global view of events and control flows within the control plane and use it to validate vulnerabilities that affect the data plane. We applied EVENTSCOPE to the ONOS SDN controller and uncovered 14 new vulnerabilities.
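
A minimal sketch of the candidate-detection idea, assuming a Jaccard-similarity grouping of apps by the events they handle and a simple majority rule for "commonly handled" events; the thresholds and event names below are illustrative assumptions, not EVENTSCOPE's actual algorithm or ONOS's event types.

```python
# Hedged sketch of flagging candidate missing event handlers: apps are grouped
# by similar event usage (Jaccard similarity), and an app is flagged when it
# lacks an event handled by most of its neighbors. Thresholds are illustrative.
def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def candidate_missing_handlers(app_events, sim_threshold=0.5, common_frac=0.8):
    """app_events: dict mapping app name -> set of handled event types."""
    candidates = {}
    for app, events in app_events.items():
        # Neighbors: apps with sufficiently similar event usage.
        neighbors = [e for other, e in app_events.items()
                     if other != app and jaccard(events, e) >= sim_threshold]
        if not neighbors:
            continue
        # Events handled by most neighbors but not by this app are candidates.
        missing = set()
        for event in set().union(*neighbors) - events:
            handled = sum(event in e for e in neighbors)
            if handled / len(neighbors) >= common_frac:
                missing.add(event)
        if missing:
            candidates[app] = missing
    return candidates

apps = {"fwd": {"PACKET_IN", "HOST_ADDED", "LINK_REMOVED"},
        "acl": {"PACKET_IN", "HOST_ADDED"},
        "lb":  {"PACKET_IN", "HOST_ADDED", "LINK_REMOVED"}}
print(candidate_missing_handlers(apps))  # -> {'acl': {'LINK_REMOVED'}}
```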

AI data wrangling with associative arrays [e-print]

Published in:
Submitted to Northeast Database Day, NEDB 2020, https://arxiv.org/abs/2001.06731

Summary

The AI revolution is data driven. AI "data wrangling" is the process by which unusable data is transformed to support AI algorithm development (training) and deployment (inference). Significant time is devoted to translating diverse data representations supporting the many query and analysis steps found in an AI pipeline. Rigorous mathematical representations of these data enable data translation and analysis optimization within and across steps. Associative array algebra provides a mathematical foundation that naturally describes the tabular structures and set mathematics that are the basis of databases. Likewise, the matrix operations and corresponding inference/training calculations used by neural networks are also well described by associative arrays. More surprisingly, a general denormalized form of hierarchical formats, such as XML and JSON, can be readily constructed. Finally, pivot tables, which are among the most widely used data analysis tools, naturally emerge from associative array constructors. A common foundation in associative arrays provides interoperability guarantees, proving that their operations are linear systems with rigorous mathematical properties, such as associativity, commutativity, and distributivity, that are critical to reordering optimizations.
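
A minimal sketch of how a pivot-table-like summary falls out of an associative array constructor that sums values with duplicate (row, column) keys; the tiny Python class below is illustrative only and is not a production associative array library.

```python
# Hedged sketch of an associative array keyed by (row, column) pairs.
# The constructor sums values with duplicate keys, so building an array from
# (entity, attribute, 1) triples immediately yields a pivot-table-like count.
from collections import defaultdict

class AssocArray:
    def __init__(self, triples):
        self.data = defaultdict(float)
        for row, col, val in triples:
            self.data[(row, col)] += val     # duplicate keys accumulate

    def __add__(self, other):                # element-wise sum (commutative, associative)
        out = AssocArray([])
        for k, v in list(self.data.items()) + list(other.data.items()):
            out.data[k] += v
        return out

    def __getitem__(self, key):
        return self.data.get(key, 0.0)       # missing entries are implicit zeros

# Log records pivoted into per-(user, action) counts.
records = [("alice", "login", 1), ("alice", "login", 1), ("bob", "upload", 1)]
table = AssocArray(records)
print(table[("alice", "login")], table[("bob", "upload")])  # -> 2.0 1.0
```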

FirmFuzz: automated IoT firmware introspection and analysis

Published in:
2nd Workshop on the Internet of Things Security and Privacy, IoT S&P '19, 15 November 2019.

Summary

While the number of IoT devices grows at an exhilarating pace, their security remains stagnant. Imposing secure coding standards across all vendors is infeasible. Testing individual devices allows an analyst to evaluate their security post-deployment. Any discovered vulnerabilities can then be disclosed to the vendors in order to assist them in securing their products. The search for vulnerabilities should ideally be automated for efficiency and furthermore be device-independent for scalability. We present FirmFuzz, an automated device-independent emulation and dynamic analysis framework for Linux-based firmware images. It employs a greybox-based generational fuzzing approach coupled with static analysis and system introspection to provide targeted and deterministic bug discovery within a firmware image. We evaluate FirmFuzz by emulating and dynamically analyzing 32 images (from 27 unique devices) with a network accessible from the host performing the emulation. During testing, FirmFuzz discovered seven previously undisclosed vulnerabilities across six different devices: two IP cameras and four routers. So far, four CVEs have been assigned.
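
A heavily simplified, generic sketch of a generational web-interface fuzz loop of the kind described above: request templates are filled with type-aware payloads and replayed against the emulated device, and server errors or hangs are recorded as candidate bugs. The endpoints, payloads, target address, and crash heuristic are all illustrative assumptions and do not reflect FirmFuzz's actual implementation.

```python
# Hedged, generic sketch of generational fuzzing of an emulated device's web
# interface. Form endpoints and payloads are illustrative placeholders.
import urllib.error
import urllib.parse
import urllib.request

TEMPLATES = [("/apply.cgi", ["hostname", "dns"]),     # illustrative form endpoints
             ("/upgrade.cgi", ["image_name"])]
PAYLOADS = ["admin", "A" * 4096, "'; reboot; #", "../../../etc/passwd"]

def fuzz(base_url, timeout=5):
    findings = []
    for path, params in TEMPLATES:
        for payload in PAYLOADS:
            body = urllib.parse.urlencode({p: payload for p in params}).encode()
            try:
                urllib.request.urlopen(base_url + path, data=body, timeout=timeout)
            except urllib.error.HTTPError as err:
                if err.code >= 500:                    # server-side error: candidate bug
                    findings.append((path, payload, err.code))
            except OSError as exc:                     # timeout/reset: possible crash
                findings.append((path, payload, repr(exc)))
    return findings

# findings = fuzz("http://192.168.1.1")  # point at the emulated device's web interface
```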

Cultivating professional technical skills and understanding through hands-on online learning experiences

Published in:
2019 IEEE Learning with MOOCS, LWMOOCS, 23-25 October 2019.

Summary

Life-long learning is necessary for all professions because the technologies, tools and skills required for success over the course of a career expand and change. Professionals in science, technology, engineering and mathematics (STEM) fields face particular challenges as new multi-disciplinary methods, e.g., machine learning and artificial intelligence, mature to replace those learned in undergraduate or graduate programs. Traditionally, industry, professional societies and university programs have provided professional development. While these provide opportunities to develop deeper understanding in STEM specialties and stay current with new techniques, the constraints on formal classes and workshops preclude the possibility of just-in-time mastery learning, particularly for new domains. The MIT Lincoln Laboratory Supercomputing Center (LLSC) and MIT SuperCloud teams have developed online course offerings specifically designed to provide a way for adult learners to build their own educational path based on their immediate needs, problems and schedules. To satisfy adult learners, the courses are formulated as a series of challenges and strategies. Using this perspective, the courses incorporate targeted theory supported by hands-on practice. The focus of this paper is the design of mastery-based, just-in-time MOOC courses that address the full space of hands-on learning requirements, from digital to analog. The discussion centers on the design of project-based exercises for professional technical education courses. The case studies highlight examples from courses that incorporate practice ranging from the construction of a small radar used for real-world data collection and processing to the development of high performance computing applications.

Security Design of Mission-Critical Embedded Systems

Published in:
HPEC 2019: IEEE Conf. on High Performance Extreme Computing, 22-24 September 2019.

Summary

This tutorial explains a systematic approach to co-designing functionality and security into mission-critical embedded systems. The tutorial starts by reviewing common issues in embedded applications to define mission objectives, threat models, and security/resilience goals. We then introduce an overview of security technologies to achieve goals of confidentiality, integrity, and availability given design criteria and a realistic threat model. The technologies include practical cryptography and key management; protection of data at rest, data in transit, and data in use; and tamper resistance. A major portion of the tutorial is dedicated to exploring the mission-critical embedded system solution space. We discuss the search for security vulnerabilities (red teaming) and the search for solutions (blue teaming). Besides the lecture, attendees, under instructor guidance, will perform realistic and meaningful hands-on exercises of defining mission and security objectives, assessing principal issues, applying technologies, and understanding their interactions. The instructor will provide an example application (distributed sensing, communicating, and computing) to be used in these exercises. Attendees could also bring their own applications for the exercises. Attendees are encouraged to work collaboratively throughout the development process, thus creating opportunities to learn from each other. During the exercise, attendees will consider the use of various security/resilience features, articulate and justify the use of resources, and assess the system's suitability for mission assurance. Attendees can expect to gain valuable insight and experience in the subject after completing the lecture and exercises. The instructor, who is an expert and practitioner in the field, will offer insight, advice, and concrete examples and discussions. The tutorial draws from the instructor's decades of experience in secure, resilient systems and technology.

Hypersparse neural network analysis of large-scale internet traffic

Published in:
IEEE High Performance Extreme Computing Conf., HPEC, 24-26 September 2019.

Summary

The Internet is transforming our society, necessitating a quantitative understanding of Internet traffic. Our team collects and curates the largest publicly available Internet traffic data, containing 50 billion packets. A novel hypersparse neural network analysis of "video" streams of this traffic, using 10,000 processors in the MIT SuperCloud, reveals a new phenomenon: the importance of otherwise unseen leaf nodes and isolated links in Internet traffic. Our neural network approach further shows that a two-parameter modified Zipf-Mandelbrot distribution accurately describes a wide variety of source/destination statistics on moving sample windows ranging from 100,000 to 100,000,000 packets over collections that span years and continents. The inferred model parameters distinguish different network streams, and the model leaf parameter strongly correlates with the fraction of the traffic in different underlying network topologies. The hypersparse neural network pipeline is highly adaptable, and different network statistics and training models can be incorporated with simple changes to the image filter functions.
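
For reference, one common two-parameter Zipf-Mandelbrot form for the probability of a source or destination of degree (or rank) $d$ is shown below; the exact parameterization and normalization used in the paper may differ.

```latex
% One common two-parameter (modified) Zipf-Mandelbrot form; the exponent
% \alpha and offset \delta are the two fitted parameters. This particular
% normalization over ranks 1..d_max is an assumption, not the paper's.
p(d;\alpha,\delta) = \frac{(d+\delta)^{-\alpha}}{\sum_{k=1}^{d_{\max}} (k+\delta)^{-\alpha}},
\qquad d = 1,\dots,d_{\max}.
```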

Large scale parallelization using file-based communications

Summary

In this paper, we present a novel file-based communication architecture that uses the local filesystem for large-scale parallelization. This approach eliminates the filesystem overload and resource contention that arise when the central filesystem is used for large parallel jobs. The new approach incurs additional overhead due to inter-node message file transfers when the sending and receiving processes are not on the same node. However, even with this additional overhead, the benefits to overall cluster operation are far greater, in addition to the performance enhancement in message communication for large-scale parallel jobs. For example, when running a 2048-process parallel job, MPI_Bcast() achieved about 34 times better performance when using the local filesystem. Furthermore, since the security for transferring message files is handled entirely by the secure copy protocol (scp) and the filesystem permissions, no additional security measures or ports are required beyond those typically required on an HPC system.
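
A minimal sketch of the file-based messaging pattern described above, assuming a per-node spool directory, file naming by destination rank, polling on the receive side, and scp for inter-node transfers; all of these details are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch of file-based point-to-point messaging: the sender writes a
# message file in the local filesystem and, if the receiver is on another node,
# pushes it with scp; the receiver polls its spool directory for new files.
import os
import subprocess
import time
import uuid

SPOOL = "/tmp/msg_spool"                  # per-node local spool directory (assumed layout)

def send(payload: bytes, dest_host: str, dest_rank: int, my_host: str):
    os.makedirs(SPOOL, exist_ok=True)
    name = f"rank{dest_rank}_{uuid.uuid4().hex}.msg"
    part = os.path.join(SPOOL, name + ".part")
    with open(part, "wb") as f:
        f.write(payload)
    final = os.path.join(SPOOL, name)
    os.rename(part, final)                # atomic publish for same-node receivers
    if dest_host != my_host:              # inter-node: push the message file with scp
        subprocess.run(["scp", "-q", final, f"{dest_host}:{SPOOL}/"], check=True)
        os.remove(final)                  # local copy no longer needed

def recv(my_rank: int, poll_interval: float = 0.05) -> bytes:
    os.makedirs(SPOOL, exist_ok=True)
    prefix = f"rank{my_rank}_"
    while True:                           # poll the spool for a file addressed to this rank
        for name in sorted(os.listdir(SPOOL)):
            if name.startswith(prefix) and name.endswith(".msg"):
                path = os.path.join(SPOOL, name)
                with open(path, "rb") as f:
                    data = f.read()
                os.remove(path)
                return data
        time.sleep(poll_interval)
```

A production version would also publish atomically on the remote side (for example, scp to a temporary name and rename), which is omitted here for brevity.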