Publications

Comparison of two-talker attention decoding from EEG with nonlinear neural networks and linear methods

Summary

Auditory attention decoding (AAD) through a brain-computer interface has seen a flowering of developments since it was first introduced by Mesgarani and Chang (2012) using electrocorticography recordings. AAD has been pursued for its potential application to hearing-aid design, in which an attention-guided algorithm selects which of multiple competing acoustic sources should be enhanced for the listener and which should be suppressed. Traditionally, researchers have separated the AAD problem into two stages: reconstruction of a representation of the attended audio from neural signals, followed by determining the similarity between the candidate audio streams and the reconstruction. Here, we compare the traditional two-stage approach with a novel neural-network architecture that subsumes the explicit similarity step. We compare this new architecture against linear and nonlinear (neural-network) baselines using both wet and dry electroencephalogram (EEG) systems. Our results indicate that the new architecture outperforms the baseline linear stimulus-reconstruction method, improving decoding accuracy from 66% to 81% using wet EEG and from 59% to 87% using dry EEG. Also of note was the finding that the dry EEG system can deliver comparable or even better results than the wet system, despite having only one third as many EEG channels. The 11-subject, wet-electrode AAD dataset for two competing, co-located talkers, the 11-subject, dry-electrode AAD dataset, and our software are available for further validation, experimentation, and modification.
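The two-stage approach described above can be sketched in a few lines: a ridge-regression "backward model" reconstructs the attended speech envelope from multichannel EEG, then Pearson correlation against each candidate stream decides which talker was attended. This is a minimal illustration of the general technique, not the authors' released software; function names and the regularization value are assumptions.

```python
import numpy as np

def train_backward_model(eeg, envelope, lam=1e-3):
    """Ridge regression mapping EEG (time x channels) to the
    attended speech envelope (time,) -- stage one of two-stage AAD."""
    X, y = eeg, envelope
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def decode_attention(eeg, env_a, env_b, w):
    """Stage two: reconstruct the envelope from EEG, then pick the
    candidate stream with the higher Pearson correlation."""
    recon = eeg @ w
    r = lambda u, v: np.corrcoef(u, v)[0, 1]
    return 0 if r(recon, env_a) > r(recon, env_b) else 1
```

In this framing, the paper's novel architecture replaces the explicit correlation step with a network trained end-to-end to output the attended-talker decision directly.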

Improving robustness to attacks against vertex classification

Published in:
15th Intl. Workshop on Mining and Learning with Graphs, 5 August 2019.

Summary

Vertex classification—the problem of identifying the class labels of nodes in a graph—has applicability in a wide variety of domains. Examples include classifying subject areas of papers in citation networks or roles of machines in a computer network. Recent work has demonstrated that vertex classification using graph convolutional networks is susceptible to targeted poisoning attacks, in which both graph structure and node attributes can be changed in an attempt to misclassify a target node. This vulnerability decreases users' confidence in the learning method and can prevent adoption in high-stakes contexts. This paper presents work in progress aiming to make vertex classification robust to these types of attacks. We investigate two aspects of this problem: (1) the classification model and (2) the method for selecting training data. Our alternative classifier is a support vector machine (with a radial basis function kernel), which is applied to an augmented node feature vector obtained by appending the node's attributes to a Euclidean vector representing the node based on the graph structure. Our alternative methods of selecting training data are (1) to select the highest-degree nodes in each class and (2) to iteratively select the node with the most neighbors minimally connected to the training set. In the datasets on which the original attack was demonstrated, we show that changing the training set can make the network much harder to attack. To maintain a given probability of attack success, the adversary must use far more perturbations, often a factor of 2–4 over the random-training baseline. Even in cases where success is relatively easy for the attacker, we show that the classification and training alternatives allow classification performance to degrade much more gradually, with weaker incorrect predictions for the attacked nodes.

The Human Trafficking Technology Roadmap: A Targeted Development Strategy for the Department of Homeland Security (9.16 MB)

Summary

Human trafficking is a form of modern-day slavery that involves the use of force, fraud, or coercion for the purposes of involuntary labor and sexual exploitation. It affects tens of millions of victims worldwide and generates tens of billions of dollars in illicit profits annually. While agencies across the U.S. Government employ a diverse range of resources to combat human trafficking in the U.S. and abroad, trafficking operations remain challenging to measure, investigate, and interdict. Within the Department of Homeland Security, the Science and Technology Directorate is addressing these challenges by incorporating computational social science research into their counter-human trafficking approach. As part of this approach, the Directorate tasked an interdisciplinary team of national security researchers at the Massachusetts Institute of Technology's Lincoln Laboratory, a federally funded research and development center, to undertake a detailed examination of the human trafficking response across the Homeland Security Enterprise. The first phase of this effort was a government-wide systems analysis of major counter-trafficking thrust areas, including law enforcement and prosecution; public health and emergency medicine; victim services; and policy and legislation. The second phase built on this systems analysis to develop a human trafficking technology roadmap and implementation strategy for the Science and Technology Directorate, which is presented in this document.


A compact end cryptographic unit for tactical unmanned systems

Summary

Under the Navy's Flexible Cyber-Secure Radio (FlexCSR) program, the Naval Information Warfare Center Pacific and the Massachusetts Institute of Technology's Lincoln Laboratory are jointly developing a unique cybersecurity solution for tactical unmanned systems (UxS): the FlexCSR Security/Cyber Module (SCM) End Cryptographic Unit (ECU). To deal with possible loss of unmanned systems that contain the device, the SCM ECU uses only publicly available Commercial National Security Algorithms and a Tactical Key Management system to generate and distribute onboard mission keys that are destroyed at mission completion or upon compromise. This also significantly reduces the logistic complexity traditionally involved with protection and loading of classified cryptographic keys. The SCM ECU is on track to be certified by the National Security Agency for protecting tactical data-in-transit up to Secret level. The FlexCSR SCM ECU is the first stand-alone cryptographic module that conforms to the United States Department of Defense (DoD) Joint Communications Architecture for Unmanned Systems, an initiative by the Office of the Secretary of Defense supporting the interoperability pillar of the DoD Unmanned Systems Integrated Roadmap. It is a credit card-sized enclosed unit that provides USB interfaces for plaintext and ciphertext, support for radio controls and management, and a software Application Programming Interface that together allow easy integration into tactical UxS communication systems. This paper gives an overview of the architecture, interfaces, usage, and development and approval schedule of the device.
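The mission-key lifecycle described above, in which keys are generated onboard and destroyed at mission completion or upon compromise, can be illustrated with a minimal sketch. This is purely conceptual and is not the SCM ECU's interface; the class, method names, and choice of 256-bit key material are assumptions for illustration.

```python
import secrets

class MissionKeyStore:
    """Conceptual ephemeral mission-key lifecycle: generate key
    material onboard per mission, zeroize it at mission end.
    Not the SCM ECU API; names are illustrative."""

    def __init__(self):
        self._keys = {}

    def generate(self, mission_id):
        # 32 bytes of key material, e.g. for AES-256 data-in-transit
        key = bytearray(secrets.token_bytes(32))
        self._keys[mission_id] = key
        return bytes(key)

    def destroy(self, mission_id):
        # Zeroize in place before releasing the buffer, so no
        # classified key survives loss of the platform.
        key = self._keys.pop(mission_id)
        for i in range(len(key)):
            key[i] = 0
```

Because only ephemeral, per-mission keys exist on the platform, loss of the unmanned system does not expose long-lived classified key material, which is the logistic simplification the abstract highlights.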

Supporting security sensitive tenants in a bare-metal cloud

Summary

Bolted is a new architecture for bare-metal clouds that enables tenants to control tradeoffs between security, price, and performance. Security-sensitive tenants can minimize their trust in the public cloud provider and achieve similar levels of security and control that they can obtain in their own private data centers. At the same time, Bolted neither imposes overhead on tenants that are security insensitive nor compromises the flexibility or operational efficiency of the provider. Our prototype exploits a novel provisioning system and specialized firmware to enable elasticity similar to virtualized clouds. Experimentally we quantify the cost of different levels of security for a variety of workloads and demonstrate the value of giving control to the tenant.

Control-flow integrity for real-time embedded systems

Published in:
31st Euromicro Conf. on Real-Time Systems, ECRTS, 9-12 July 2019.

Summary

Attacks on real-time embedded systems can endanger lives and critical infrastructure. Despite this, techniques for securing embedded systems software have not been widely studied. Many existing security techniques for general-purpose computers rely on assumptions that do not hold in the embedded case. This paper focuses on one such technique, control-flow integrity (CFI), that has been vetted as an effective countermeasure against control-flow hijacking attacks on general-purpose computing systems. Without the process isolation and fine-grained memory protections provided by a general-purpose computer with a rich operating system, CFI cannot provide any security guarantees. This work proposes RECFISH, a system for providing CFI guarantees on ARM Cortex-R devices running minimal real-time operating systems. We provide techniques for protecting runtime structures, isolating processes, and instrumenting compiled ARM binaries with CFI protection. We empirically evaluate RECFISH and its performance implications for real-time systems. Our results suggest RECFISH can be directly applied to binaries without compromising real-time performance; in a test of over six million realistic task systems running FreeRTOS, 85% were still schedulable after adding RECFISH.
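The core idea of control-flow integrity, independent of RECFISH's ARM-binary instrumentation, is that an indirect transfer of control is permitted only if its target belongs to a precomputed set of legal targets. The following sketch shows that check in Python for clarity; real CFI systems such as the one described above enforce this in instrumented machine code, and the function names here are illustrative.

```python
def make_cfi_dispatch(allowed_targets):
    """Coarse-grained CFI concept: wrap indirect calls so that any
    target outside the precomputed legal-target set is rejected."""
    allowed = set(allowed_targets)

    def call(target, *args):
        if target not in allowed:
            # In a deployed system this would halt or fault rather
            # than let a hijacked pointer redirect control flow.
            raise RuntimeError("CFI violation: illegal indirect-call target")
        return target(*args)

    return call
```

An attacker who corrupts a function pointer can then only redirect execution within the allowed set, which is the guarantee that the runtime-structure protections in RECFISH are designed to preserve on Cortex-R devices.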

New software helps users build resilient, cost-effective energy architectures

Published in:
Lincoln Laboratory News

Summary

The Energy Resilience Analysis tool lets mission owners and energy managers balance the needs of critical missions on military installations with affordability when they design energy resilience solutions.

A Framework for Evaluating Electric Power Grid Improvements in Puerto Rico (2.58 MB)

Summary

This report is motivated by the recognition that serving highly distributed electric power load in Puerto Rico during extreme events requires innovative methods. To do this, we must determine the type and locations of the most critical equipment, innovative methods, and software for operating the electrical system most effectively. It is well recognized that the existing system needs to be both hardened and further enhanced by deploying Distributed Energy Resources (DERs), solar photovoltaics (PV) in particular, and local reconfigurable microgrids to manage these newly deployed DERs. While deployment of microgrids and DERs has been advocated by many, there is little fundamental understanding of how to operate Puerto Rico's electrical system in a way that effectively uses DERs during both normal operations and grid failures. Utility companies' traditional reliability requirements and operational risk management practices rely on excessive amounts of centralized reserve generation to anticipate failures, which increases the cost of normal operations and nullifies the potential of DERs to meet loads during grid failures. At present, no electric power utility has a ready-to-use framework that overcomes these limitations. This report seeks to fill this void.
