Publications

Modular Aid and Power Pallet (MAPP): FY18 Energy Technical Investment Program

Published in:
MIT Lincoln Laboratory Report TIP-93

Summary

Electric power is a critical element of rapid-response disaster relief efforts. Generators currently used have high failure rates and require fuel supply chains, and standardized renewable power systems are not yet available. In addition, none of these systems are designed for easy adaptation or repair in the field to accommodate changing power needs as the relief effort progresses. To address this, the Modular Aid and Power Pallet, or MAPP, was designed as a temporary, scalable, self-contained, user-focused power system. While some commercial systems are advertised for disaster relief, most are limited in mobility, rely on custom battery assemblies (with challenges for air transport, ground mobility, or both), or have a limited ability to power AC loads. While the first-year system focused on an open-architecture design with distributed DC units that could be combined to serve larger AC loads, the second year succeeded in minimizing or eliminating batteries while providing AC power for both the distributed and centralized systems. Therefore, individual modules can be distributed to power small AC loads such as laptop charging, or combined in series for larger loads such as water purification. Each module is powered by a small photovoltaic (PV) array connected to a prototype off-grid Enphase microinverter that can be used with or without energy storage. In addition, an output box for larger loads provides a ground fault interrupt, an under/over voltage relay, and the ability to change the system grounding to fit the needs of a more complicated system. The second-year MAPP effort was divided into two phases: Phase 1, from October 2017 to March 2018, focused on refining requirements and vendor selection, and Phase 2, from March 2018 to October 2018, focused on power electronics, working with the new Enphase microinverter, and ruggedizing the system. The end result of the Phase 2 effort is a system that has been designed, tested, and proven to be a robust AC power source that is flexible and configurable by the end user. Our testing has shown that operators can easily set up the system and adapt it to changing needs in the field.
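
As a rough illustration of how such a modular system scales with load, the sketch below estimates how many modules a given AC load would require. The per-module rating and design margin are assumed illustrative values, not published MAPP specifications.

```python
# Hypothetical sizing sketch for a modular PV/microinverter power system.
# The per-module rating below is an assumed illustrative value, not a MAPP spec.
import math

MODULE_AC_WATTS = 300  # assumed continuous AC output per module (illustrative)

def modules_required(load_watts: float, margin: float = 1.25) -> int:
    """Number of modules needed to serve a load with a design margin."""
    return math.ceil(load_watts * margin / MODULE_AC_WATTS)

if __name__ == "__main__":
    for load in (60, 500, 1500):  # e.g., laptop charging up to water purification
        print(f"{load:5d} W load -> {modules_required(load)} module(s)")
```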

Artificial intelligence: short history, present developments, and future outlook, final report

Summary

The Director's Office at MIT Lincoln Laboratory (MIT LL) requested a comprehensive study on artificial intelligence (AI) focusing on present applications and future science and technology (S&T) opportunities in the Cyber Security and Information Sciences Division (Division 5). This report elaborates on the main results from the study. Since the AI field is evolving so rapidly, the study scope was to examine the recent past and ongoing developments in order to arrive at a set of findings and recommendations. It was important to begin with a short AI history and a lay of the land on representative developments across the Department of Defense (DoD), intelligence communities (IC), and Homeland Security. These areas are addressed in more detail within the report. A main deliverable from the study was to formulate an end-to-end AI canonical architecture suitable for a range of applications. The AI canonical architecture, formulated in the study, serves as the guiding framework for all the sections in this report. Even though the study primarily focused on cyber security and information sciences, the enabling technologies are broadly applicable to many other areas. Therefore, we dedicate a full section to enabling technologies in Section 3. The discussion of enabling technologies helps the reader clarify the distinction among AI, machine learning algorithms, and the specific techniques needed to make an end-to-end AI system viable. To understand the lay of the land in AI, study participants reached out broadly within MIT LL and external to the Laboratory (government, commercial companies, the defense industrial base, peers, academia, and AI centers). In addition to the study participants (shown in the next section under acknowledgements), we also assembled an internal review team (IRT). The IRT was extremely helpful in providing feedback and in helping with the formulation of the study briefings as we transitioned from data-gathering mode to study synthesis. The format followed throughout the study was to highlight relevant content that substantiates the study findings and to identify a set of recommendations. An important finding is the significant AI investment by the so-called "big 6" commercial companies: Google, Amazon, Facebook, Microsoft, Apple, and IBM. These companies dominate AI ecosystem research and development (R&D) investments within the U.S. According to a recent McKinsey Global Institute report, cumulative R&D investment in AI amounts to about $30 billion per year. This amount is substantially higher than the R&D investment within the DoD, IC, and Homeland Security. Therefore, the DoD will need to be very strategic about investing where needed, while at the same time leveraging the technologies already developed and available from a wide range of commercial applications. As we will discuss in Section 1 as part of the AI history, MIT LL has been instrumental in developing advanced AI capabilities. For example, MIT LL has a long history in the development of human language technologies (HLT), successfully applying machine learning algorithms to difficult problems in speech recognition, machine translation, and speech understanding. Section 4 elaborates on prior applications of these technologies, as well as newer applications in the context of multiple modalities (e.g., speech, text, images, and video). An end-to-end AI system is very well suited to enhancing the capabilities of human language analysis.
Section 5 discusses AI's nascent role in cyber security. There have been cases where AI has already provided important benefits. However, much more research is needed on both the application of AI to cyber security and the associated vulnerability to so-called adversarial AI. Adversarial AI is an area critical to the DoD, IC, and Homeland Security, where malicious adversaries can disrupt AI systems and make them untrusted in operational environments. This report concludes with specific recommendations formulating the way forward for Division 5 and a discussion of S&T challenges and opportunities. The S&T challenges and opportunities are centered on the key elements of the AI canonical architecture, with the goal of strengthening AI capabilities across the DoD, IC, and Homeland Security in support of national security.

Component standards for stable microgrids

Published in:
IEEE Trans. Power Syst., Vol. 34, No. 2, 2018, pp. 852-863.

Summary

This paper is motivated by the need to ensure fast microgrid stability. Modeling approaches for establishing stability criteria, and possible implementations, are described. In particular, this paper proposes that highly heterogeneous microgrids, comprising both conventional equipment and equipment based on rapidly emerging new technologies, can be modeled as purely electric networks in order to provide intuitive insight into the issues of network stability. It is shown that the proposed model is valid for representing the fast primary dynamics of diverse components (gensets, loads, PVs), assuming that slower variables are regulated by higher-level controllers. Based on this modeling approach, an intuitively appealing criterion is introduced requiring that components, or their combined representations, behave as closed-loop passive electrical circuits. Implementation of this criterion is illustrated using a typical commercial feeder microgrid. Notably, these criteria set the basis for standards that should be required for groups of components (sub-grids) to ensure no fast instabilities in complex microgrids. Building the need for incrementally passive and monotonic characteristics into standards for network components may clarify the system-level analysis and integration of microgrids.
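
As a loose illustration of the passivity idea behind this criterion (not the paper's own incremental-passivity analysis), the sketch below numerically checks that the driving-point impedance of a simple one-port has a nonnegative real part across frequency; the R, L, and C values are assumed for illustration only.

```python
# Minimal numerical check of one-port passivity: Re{Z(jw)} >= 0 at all tested
# frequencies. The series R-L branch in parallel with a capacitor uses
# illustrative element values, not values from the paper.
import numpy as np

R, L, C = 0.05, 1e-3, 50e-6              # ohms, henries, farads (assumed)
w = 2 * np.pi * np.logspace(0, 5, 2000)  # 1 Hz to 100 kHz

Z_rl = R + 1j * w * L                    # series resistive-inductive branch
Z_c = 1.0 / (1j * w * C)                 # shunt capacitor
Z = Z_rl * Z_c / (Z_rl + Z_c)            # parallel combination seen at the port

passive = np.all(Z.real >= 0)
print(f"min Re(Z) = {Z.real.min():.4e} ohm -> "
      f"{'passive' if passive else 'not passive'} over the tested range")
```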

High performance computing techniques with power systems simulations

Published in:
IEEE High Performance Extreme Computing Conf., HPEC, 25-27 September 2018.

Summary

Small electrical networks (i.e., microgrids) and machine models (synchronous generators, induction motors) can be simulated fairly easily on a single sequential process. However, running a large simulation on a single process becomes infeasible because of complexity and timing issues. Scalability becomes an increasingly important issue for larger simulations, as does the platform for running them, such as the MIT SuperCloud. A distributed computing network used to simulate an electrical network as the physical system presents new challenges, however. Different simulation models, different time steps, and different computation times for each process in the distributed computing network introduce challenges not present in the typical problems addressed with high performance computing techniques. A distributed computing network is established for some example electrical networks, and adjustments are then made to the parallel simulation set-up to alleviate the new kinds of challenges that come with modeling and simulating a physical system as diverse as an electrical network. Methods are also shown to simulate the same electrical network in hundreds of milliseconds, as opposed to several seconds: a dramatic speedup once the simulation is parallelized.
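
The sketch below is a toy illustration of the distributed set-up described above: each subnetwork is advanced by its own worker process over a macro time step, and a shared boundary quantity is exchanged between macro steps. The first-order subnetwork models and time constants are assumptions for illustration, not the paper's models.

```python
# Illustrative parallel simulation skeleton: worker processes advance
# independent subnetwork models, exchanging a boundary value each macro step.
import numpy as np
from multiprocessing import Pool

DT, STEPS = 1e-3, 200            # macro time step (s) and number of macro steps

def step_subnetwork(args):
    """Advance one toy subnetwork state given the boundary value it sees."""
    state, boundary, tau = args
    # Simple first-order response toward the boundary value (a stand-in for
    # genset, motor, or PV primary dynamics).
    return state + DT * (boundary - state) / tau

if __name__ == "__main__":
    states = np.array([1.0, 0.0, 0.5])   # three subnetworks
    taus = [0.05, 0.02, 0.10]            # assumed time constants (s)
    with Pool(processes=len(states)) as pool:
        for _ in range(STEPS):
            boundary = states.mean()     # shared boundary quantity (e.g., a bus voltage)
            args = [(s, boundary, t) for s, t in zip(states, taus)]
            states = np.array(pool.map(step_subnetwork, args))
    print("final states:", np.round(states, 4))
```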

GraphChallenge.org: raising the bar on graph analytic performance

Summary

The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of pre-parsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. Graph Challenge 2017 received 22 submissions by 111 authors from 36 organizations. The submissions highlighted graph analytic innovations in hardware, software, algorithms, systems, and visualization. These submissions produced many comparable performance measurements that can be used for assessing the current state of the art of the field. Numerous submissions implemented the triangle counting challenge, resulting in over 350 distinct measurements. Analysis of these submissions shows that their execution time is a strong function of the number of edges in the graph, Ne, and is typically proportional to Ne^(4/3) for large values of Ne. Combining the model fits of the submissions presents a picture of the current state of the art of graph analysis, which is typically 10^8 edges processed per second for graphs with 10^8 edges. These results are 30 times faster than serial implementations commonly used by many graph analysts and underscore the importance of making these performance benefits available to the broader community. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs.
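
As a minimal sketch of the triangle-counting challenge referenced above, the snippet below uses the standard linear-algebraic formulation on a small illustrative graph; the actual pre-parsed data sets are available from GraphChallenge.org.

```python
# Triangle counting via sparse linear algebra: for a symmetric, loop-free
# 0/1 adjacency matrix A, the number of triangles is sum(A .* (A @ A)) / 6.
# The small edge list below is illustrative only.
import numpy as np
import scipy.sparse as sp

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (2, 4)]  # contains two triangles
n = 5
rows, cols = zip(*edges)
A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
A = A + A.T                               # symmetrize (each edge listed once above)

triangles = int((A.multiply(A @ A)).sum() // 6)
print("triangles:", triangles)            # -> 2
```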

Simulation approach to sensor placement using Unity3D

Summary

3D game simulation engines have demonstrated utility in the areas of training, scientific analysis, and knowledge solicitation. This paper makes the case for the use of 3D game simulation engines in the field of sensor placement optimization. Our study used a series of parallel simulations in the Unity3D simulation framework to answer the question: how many sensors of various modalities are required, and where should they be placed, to meet a desired threat detection threshold? The result is a framework that not only answers this sensor placement question but can easily be extended to different optimization criteria, as well as to how a particular configuration responds to differing crowd flows or informed/non-informed adversaries. Additionally, we demonstrate the scalability of this framework by running parallel instances on a supercomputing grid and illustrate the processing speed gained.
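
Outside of Unity3D, the placement question itself can be sketched as a simple greedy coverage search, as below; the sensor range, coverage target, and simulated threat points are assumed illustrative values, not parameters from the study.

```python
# Greedy sensor placement sketch: add sensors from a candidate grid until a
# desired fraction of simulated threat points is within sensor range.
import itertools
import numpy as np

rng = np.random.default_rng(0)
threats = rng.uniform(0, 100, size=(500, 2))             # simulated threat locations
candidates = np.array(list(itertools.product(range(0, 101, 10), repeat=2)), float)
RADIUS, TARGET = 18.0, 0.95                              # assumed range and coverage goal

covered = np.zeros(len(threats), dtype=bool)
placed = []
while covered.mean() < TARGET:
    # Pick the candidate location that covers the most currently uncovered threats.
    gains = [np.sum(~covered & (np.linalg.norm(threats - c, axis=1) <= RADIUS))
             for c in candidates]
    best = int(np.argmax(gains))
    if gains[best] == 0:
        break                                            # no candidate helps further
    placed.append(candidates[best])
    covered |= np.linalg.norm(threats - candidates[best], axis=1) <= RADIUS

print(f"{len(placed)} sensors cover {covered.mean():.1%} of threat points")
```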

Fuel production systems for remote areas via an aluminum energy vector

Published in:
Energy Fuels, Vol. 32, No. 9, 2018, pp. 9033-9042.

Summary

Autonomous fuel synthesis in remote locations remains the Holy Grail of fuel delivery logistics. The burdened cost of delivering fuel to remote locations is often significantly higher than the purchase price. Here it is shown that a newly developed solid aluminum metal fuel is suited for the remote production of liquid diesel fuels. On a volumetric basis, aluminum has more than twice the energy of diesel fuel, making it a superb structural energy vector for remote applications. Once aluminum is treated with gallium, water of nearly any purity can be used to rapidly oxidize the aluminum metal, which spontaneously evolves hydrogen and heat in roughly equal energetic quantities. The benign byproduct of the reaction could, in theory, be taken to an off-site facility and recycled back into aluminum using standard smelting processes, or it could be left on site as a high-value waste. The hydrogen can easily be used as a feedstock for diesel fuel via Fischer-Tropsch (FT) reaction mechanisms, while the heat can be leveraged for other processes, including synthesis gas compression. It is shown that as long as a carbon source, such as diesel fuel, is already present, additional diesel can be made by recovering and recycling the CO2 in the diesel exhaust. The amount of new diesel that can be made is directly related to the fraction of available CO2 that is recovered, with 100% recovery being equivalent to doubling the diesel fuel. The volume of aluminum required to accomplish this is lower than that of simply bringing twice as much diesel and results in a 50% increase in volumetric energy density. That is, 50% fewer fuel convoys would be required for fuel delivery. Moreover, aluminum has the potential to be exploited as a structural fuel that can be used as pallets, containers, etc., before being consumed to produce diesel. Furthermore, FT diesel production via aluminum and CO2 can be achieved without sacrificing electrical power generation.
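
A back-of-the-envelope sketch of the underlying arithmetic is given below, using the aluminum-water reaction 2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2 and rounded textbook heating values and densities; these figures are illustrative and are not taken from the paper.

```python
# Back-of-the-envelope arithmetic for the aluminum-water energy vector.
# Molar masses, heating values, and densities are rounded textbook figures.
M_AL, M_H2 = 26.98, 2.016       # molar masses, g/mol
LHV_H2 = 120.0                  # MJ/kg, lower heating value of hydrogen
E_AL_MASS = 31.0                # MJ/kg, approximate chemical energy of aluminum oxidation
RHO_AL = 2.70                   # kg/L
E_DIESEL_VOL = 36.0             # MJ/L, approximate volumetric energy of diesel

kg_al = 1.0
mol_al = kg_al * 1000 / M_AL
kg_h2 = 1.5 * mol_al * M_H2 / 1000          # 1.5 mol H2 evolved per mol Al
print(f"per kg Al: {kg_h2:.3f} kg H2 ({kg_h2 * LHV_H2:.1f} MJ as hydrogen)")

# Volumetric comparison behind the "more than twice diesel" statement:
print(f"Al ~{E_AL_MASS * RHO_AL:.0f} MJ/L vs diesel ~{E_DIESEL_VOL:.0f} MJ/L")
```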

Detecting intracranial hemorrhage with deep learning

Published in:
40th Int. Conf. of the IEEE Engineering in Medicine and Biology Society, EMBC, 17-21 July 2018.

Summary

Initial results are reported on automated detection of intracranial hemorrhage from CT, which would be valuable in a computer-aided diagnosis system to help the radiologist detect subtle hemorrhages. Previous work has taken a classic approach involving multiple steps of alignment, image processing, image corrections, handcrafted feature extraction, and classification. Our current work instead uses a deep convolutional neural network to simultaneously learn features and classification, eliminating the multiple hand-tuned steps. Performance is improved by computing the mean output for rotations of the input image. Postprocessing is additionally applied to the CNN output to significantly improve specificity. The database consists of 134 CT cases (4,300 images), divided into 60, 5, and 69 cases for training, validation, and test. Each case typically includes multiple hemorrhages. Performance on the test set was 81% sensitivity per lesion (34/42 lesions) and 98% specificity per case (45/46 cases). The sensitivity is comparable to previous results (on different datasets), but with a significantly higher specificity. In addition, insights are shared to improve performance as the database is expanded.
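
As a hedged illustration of the rotation-averaging step (assuming 90-degree rotations, which the summary does not specify), the sketch below averages a toy CNN's sigmoid outputs over rotated copies of an input slice; the tiny network is a stand-in for the paper's model.

```python
# Test-time rotation averaging sketch (PyTorch): compute the network output
# for several rotations of the input image and average the results.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy stand-in for the paper's convolutional network."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(8, 1)    # hemorrhage probability (logit)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def predict_with_rotations(model, image, n_rot=4):
    """Mean sigmoid output over 90-degree rotations of the input image."""
    model.eval()
    with torch.no_grad():
        outputs = [torch.sigmoid(model(torch.rot90(image, k, dims=(2, 3))))
                   for k in range(n_rot)]
    return torch.stack(outputs).mean(0)

model = TinyCNN()
ct_slice = torch.randn(1, 1, 64, 64)         # placeholder for a CT slice
print("averaged score:", predict_with_rotations(model, ct_slice).item())
```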

Adversarial co-evolution of attack and defense in a segmented computer network environment

Published in:
Proc. Genetic and Evolutionary Computation Conf. Companion, GECCO 2018, 15-19 July 2018, pp. 1648-1655.

Summary

In computer security, guidance is slim on how to prioritize or configure the many available defensive measures, when guidance is available at all. We show how a competitive co-evolutionary algorithm framework can identify defensive configurations that are effective against a range of attackers. We consider network segmentation, a widely recommended defensive strategy, deployed against the threat of serial network security attacks that delay the mission of the network's operator. We employ a simulation model to investigate the effectiveness over time of different defensive strategies against different attack strategies. For a set of four network topologies, we generate strong availability attack patterns that were not identified a priori. Then, by combining the simulation with a coevolutionary algorithm to explore the adversaries' action spaces, we identify effective configurations that minimize mission delay when facing the attacks. The novel application of co-evolutionary computation to enterprise network security represents a step toward course-of-action determination that is robust to responses by intelligent adversaries.
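
The competitive co-evolutionary loop can be sketched as below; the bit-string encodings, mutation scheme, and payoff function are purely illustrative stand-ins for the paper's network-segmentation simulator.

```python
# Toy competitive co-evolution: defender configurations and attacker plans
# evolve against each other, each scored against the opposing population.
import random

random.seed(1)
N_BITS, POP, GENS = 8, 20, 30

def payoff(defense, attack):
    """Mission-delay proxy: attacks succeed where the defense leaves gaps."""
    return sum(a and not d for d, a in zip(defense, attack))

def evolve(pop, score_fn, minimize):
    scored = sorted(pop, key=score_fn, reverse=not minimize)
    parents = scored[:POP // 2]              # keep the better half
    children = []
    for _ in range(POP - len(parents)):
        child = random.choice(parents)[:]
        child[random.randrange(N_BITS)] ^= 1 # single-bit mutation
        children.append(child)
    return parents + children

defenders = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
attackers = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]

for _ in range(GENS):
    defenders = evolve(defenders,
                       lambda d: sum(payoff(d, a) for a in attackers), minimize=True)
    attackers = evolve(attackers,
                       lambda a: sum(payoff(d, a) for d in defenders), minimize=False)

best_defense = min(defenders, key=lambda d: sum(payoff(d, a) for a in attackers))
print("best defense configuration:", best_defense)
```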

Learning network architectures of deep CNNs under resource constraints

Published in:
Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops, CVPRW, 18-22 June 2018, pp. 1784-1791.

Summary

Recent work in deep learning has been driven broadly by the desire to attain high accuracy on certain challenge problems. The network architecture and other hyperparameters of many published models are typically chosen by trial-and-error experiments with little consideration paid to resource constraints at deployment time. We propose a fully automated model learning approach that (1) treats architecture selection as part of the learning process, (2) uses a blend of broad-based random sampling and adaptive iterative refinement to explore the solution space, (3) performs optimization subject to given memory and computational constraints imposed by target deployment scenarios, and (4) is scalable and can use only a practically small number of GPUs for training. We present results showing graceful model degradation under strict resource constraints for object classification problems, using CIFAR-10 in our experiments. We also discuss future work on further extending the approach.
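
A hedged sketch of the constrained-search idea follows: candidate architectures are sampled at random, filtered by a parameter budget, and refined locally around the best survivor. The scoring function is a placeholder; in the paper's setting it would be validation accuracy obtained by training each candidate under the given memory and compute constraints.

```python
# Resource-constrained architecture sampling sketch: random sampling under a
# parameter budget, followed by simple local refinement of the best candidate.
import random

random.seed(0)
PARAM_BUDGET = 2_000_000          # assumed deployment memory constraint
WIDTHS = (16, 32, 64, 128, 256)

def param_count(widths):
    """Rough 3x3-conv parameter count for a chain of layer widths (RGB input)."""
    chans = [3] + list(widths)
    return sum(3 * 3 * cin * cout for cin, cout in zip(chans, chans[1:]))

def sample(depth_range=(3, 8)):
    return [random.choice(WIDTHS) for _ in range(random.randint(*depth_range))]

def score(widths):
    return sum(widths) / 1000.0   # placeholder proxy, NOT validation accuracy

# Broad random sampling under the constraint ...
survivors = [a for a in (sample() for _ in range(500))
             if param_count(a) <= PARAM_BUDGET]
best = max(survivors, key=score)

# ... followed by a simple local refinement step around the best candidate.
for _ in range(100):
    cand = best[:]
    cand[random.randrange(len(cand))] = random.choice(WIDTHS)
    if param_count(cand) <= PARAM_BUDGET and score(cand) > score(best):
        best = cand

print("selected widths:", best, "| params:", param_count(best))
```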