Publications


Building low-power trustworthy systems: cyber-security considerations for real-time physiological status monitoring

Summary

Real-time monitoring of physiological data can reduce the likelihood of injury in noncombat military personnel and first responders. MIT Lincoln Laboratory is developing a tactical Real-Time Physiological Status Monitoring (RT-PSM) system architecture and reference implementation named OBAN (Open Body Area Network), the purpose of which is to provide an open, government-owned framework for integrating multiple wearable sensors and applications. The OBAN implementation accepts data from various sensors, enabling calculation of physiological strain information that may be used by squad leaders or medics to assess the team's health and enhance safety and effectiveness of mission execution. Security in terms of measurement integrity, confidentiality, and authenticity is an area of interest because OBAN system components exchange sensitive data in contested environments. In this paper, we analyze potential cyber-security threats and their associated risks to a generalized version of the OBAN architecture and identify directions for future research. The threat analysis is intended to inform the development of secure RT-PSM architectures and implementations.
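To make the integrity and authenticity concerns concrete, here is a minimal sketch of one way a body-area-network sensor could authenticate its readings before transmission. The message format, field names, and key handling are illustrative assumptions, not the OBAN design.

```python
import hmac, hashlib, json, os

def sign_reading(key: bytes, reading: dict) -> dict:
    """Attach an HMAC-SHA256 tag to a sensor reading (hypothetical format)."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_reading(key: bytes, message: dict) -> bool:
    expected = hmac.new(key, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison resists timing side channels.
    return hmac.compare_digest(expected, message["tag"])

key = os.urandom(32)
msg = sign_reading(key, {"sensor": "hr", "bpm": 72, "t": 1700000000})
assert verify_reading(key, msg)

# Any tampering with the measurement invalidates the tag.
tampered = dict(msg, payload=msg["payload"].replace("72", "190"))
assert not verify_reading(key, tampered)
```

A shared-key MAC addresses integrity and authenticity but not confidentiality or key distribution, which are among the open questions such a threat analysis must weigh.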

A data-stream classification system for investigating terrorist threats

Published in:
Proc. SPIE 9851, Next-Generation Analyst IV, 98510L (May 12, 2016); doi:10.1117/12.2224104.

Summary

The role of cyber forensics in criminal investigations has greatly increased in recent years due to the wealth of data that is collected and available to investigators. Physical forensics has also experienced a data volume and fidelity revolution due to advances in methods for DNA and trace evidence analysis. Key to extracting insight is the ability to correlate across multi-modal data, which depends critically on identifying a touch-point connecting the separate data streams. Separate data sources may be connected because they refer to the same individual, entity, or event. In this paper, we present a data source classification system tailored to facilitate the investigation of potential terrorist activity. This taxonomy is structured to illuminate the defining characteristics of a particular terrorist effort and designed to guide reporting to decision makers that is complete, concise, and evidence-based. The classification system has been validated and empirically utilized in the forensic analysis of a simulated terrorist activity. Next-generation analysts can use this schema to label and correlate across existing data streams, assess which critical information may be missing from the data, and identify options for collecting additional data streams to fill information gaps.
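The core correlation idea can be sketched as follows: label records from heterogeneous streams, then look for a touch-point where two or more distinct streams refer to the same entity. The taxonomy categories and record format below are illustrative assumptions, not the paper's schema.

```python
from collections import defaultdict

# Hypothetical labeled records from separate investigative data streams.
records = [
    {"stream": "cyber",     "category": "communications", "entity": "alice"},
    {"stream": "physical",  "category": "trace-evidence", "entity": "alice"},
    {"stream": "financial", "category": "transactions",   "entity": "bob"},
]

def correlate(records):
    """Return entities that connect two or more distinct data streams."""
    by_entity = defaultdict(list)
    for r in records:
        by_entity[r["entity"]].append(r)
    # A touch-point exists only where multiple streams meet.
    return {e: rs for e, rs in by_entity.items()
            if len({r["stream"] for r in rs}) > 1}

touch_points = correlate(records)
# "alice" links the cyber and physical streams; "bob" appears in one stream only.
```

The same grouping also exposes gaps: entities with records in only one stream mark places where additional collection might fill missing information.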

Threat-based risk assessment for enterprise networks

Published in:
Lincoln Laboratory Journal, Vol. 22, No. 1, 2016, pp. 33-45.

Summary

Protecting enterprise networks requires continuous risk assessment that automatically identifies and prioritizes cyber security risks, enables efficient allocation of cyber security resources, and enhances protection against modern cyber threats. Lincoln Laboratory created a network security model to guide the development of such risk assessments and, for the most important cyber threats, designed practical risk metrics that can be computed automatically and continuously from security-relevant network data.
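As an illustrative sketch only (not the Laboratory's model), a risk metric that can be recomputed automatically as security-relevant network data arrives might combine per-host exposure with mission criticality. The field names and weights below are assumptions for demonstration.

```python
# Hypothetical per-host data drawn continuously from network sensors.
hosts = [
    {"name": "web01", "unpatched_vulns": 12, "exposed_services": 3,
     "mission_critical": True},
    {"name": "dev02", "unpatched_vulns": 30, "exposed_services": 1,
     "mission_critical": False},
]

def risk_score(h):
    # Exposure grows with vulnerabilities and reachable services;
    # the weight prioritizes mission-essential assets.
    exposure = h["unpatched_vulns"] * (1 + h["exposed_services"])
    return exposure * (3.0 if h["mission_critical"] else 1.0)

# Prioritized list for allocating defensive resources.
ranked = sorted(hosts, key=risk_score, reverse=True)
```

Because the score is a pure function of observable data, it can be re-evaluated on every data refresh, which is the continuous-assessment property the abstract emphasizes.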

Mission assurance as a function of scale

Published in:
36th NATO Information Systems Technology Panel, 14-16 October 2015.

Summary

Since all Department of Defense (DoD) missions depend on cyber assets and capabilities, a dynamic and accurate cyber dependency analysis is a critical component of mission assurance. Mission analysis aims to identify hosts and applications that are "mission critical" so they can be monitored, and resources preferentially allocated to mitigate risks. For missions limited in duration and scale (tactical missions), dependency analysis is possible to conceptualize in principle, although currently difficult to realize in practice. However, for missions of long duration and large scale (strategic missions), the situation is murkier. In particular, cyber researchers struggle to find technologies that will scale up to large numbers of hosts and applications, since a typical strategic DoD mission might expect to leverage a large enterprise network. In this position paper, we argue that the difficulty is fundamental: as the mission timescale becomes longer and longer, and the number of hosts associated with the mission becomes larger and larger, the mission encompasses the entire network, and mission defense becomes indistinguishable from classic network defense. Concepts generally associated with mission assurance, such as fight-through, are not well suited to these long timescales and large networks. This train of thought leads us to reconsider the concept of "scalability" as it applies to mission assurance, and suggest that a hierarchical abstraction approach be applied. Large-scale, long duration mission assurance may be treated as the interaction of many small-scale, short duration tactical missions.

Control jujutsu: on the weaknesses of fine-grained control flow integrity

Published in:
22nd ACM Conf. on Computer and Communications Security, 12-16 October 2015.

Summary

Control flow integrity (CFI) has been proposed as an approach to defend against control-hijacking memory corruption attacks. CFI works by assigning tags to indirect branch targets statically and checking them at runtime. Coarse-grained enforcements of CFI that use a small number of tags to reduce performance overhead have been shown to be ineffective. As a result, a number of recent efforts have focused on fine-grained enforcement of CFI as it was originally proposed. In this work, we show that even a fine-grained form of CFI with an unlimited number of tags and a shadow stack (to check calls and returns) is ineffective in protecting against malicious attacks. We show that many popular code bases such as Apache and Nginx use coding practices that create flexibility in their intended control flow graph (CFG) even when a strong static analyzer is used to construct the CFG. These flexibilities allow an attacker to gain control of the execution while strictly adhering to a fine-grained CFI. We then construct two proof-of-concept exploits that attack an unlimited-tag CFI system with a shadow stack. We also evaluate the difficulties of generating a precise CFG using scalable static analysis for real-world applications. Finally, we perform an analysis on a number of popular applications that highlights the availability of such attacks.
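A toy simulation (an assumption-laden sketch, not the paper's system) shows the mechanism and its limit: every indirect branch is allowed only targets in its statically assigned set, and a shadow stack checks returns. The weakness the abstract describes is that the check passes for any statically valid target, so if imprecise analysis places a dangerous function in the allowed set, an attacker can redirect the call while strictly adhering to fine-grained CFI.

```python
# Static analysis assigns each indirect-call site a set of valid targets.
# Both functions here match the call site's signature, so a sound but
# imprecise CFG must allow both (names are hypothetical).
cfg_targets = {
    "dispatch": {"log_msg", "exec_cmd"},
}
shadow_stack = []

def indirect_call(site, target):
    """Runtime CFI check: target must carry the site's tag."""
    if target not in cfg_targets[site]:
        raise RuntimeError("CFI violation")
    shadow_stack.append(site)          # record the return linkage
    return target

def ret(site):
    """Shadow-stack check: returns must match the recorded caller."""
    if shadow_stack.pop() != site:
        raise RuntimeError("shadow-stack violation")

# An attacker who corrupts the function pointer to exec_cmd still passes
# every check, because exec_cmd is in the statically computed target set.
assert indirect_call("dispatch", "exec_cmd") == "exec_cmd"
ret("dispatch")
```

The exploit surface is the flexibility in the intended CFG itself, which is why tightening the enforcement mechanism alone does not close it.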

Characterizing phishing threats with natural language processing

Published in:
2015 IEEE Conf. on Communications and Network Security (CNS), 28-30 September 2015.

Summary

Spear phishing is a widespread concern in the modern network security landscape, but there are few metrics that measure the extent to which reconnaissance is performed on phishing targets. Spear phishing emails closely match the expectations of the recipient, based on details of their experiences and interests, making them a popular propagation vector for harmful malware. In this work we use Natural Language Processing techniques to investigate a specific real-world phishing campaign and quantify attributes that indicate a targeted spear phishing attack. Our phishing campaign data sample comprises 596 emails - all containing a web bug and a Curriculum Vitae (CV) PDF attachment - sent to our institution by a foreign IP space. The campaign was found to exclusively target specific demographics within our institution. Performing a semantic similarity analysis between the senders' CV attachments and the recipients' LinkedIn profiles, we conclude with high statistical certainty (p < 10^-4) that the attachments contain targeted rather than randomly selected material. Latent Semantic Analysis further demonstrates that individuals who were a primary focus of the campaign received CVs that are highly topically clustered. These findings differentiate this campaign from one that leverages random spam.
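A minimal sketch of the kind of semantic-similarity comparison described: represent a CV attachment and a recipient's profile as term-frequency vectors and compute their cosine similarity. The paper uses richer methods (e.g. Latent Semantic Analysis); the texts below are toy examples.

```python
import math
import re
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector over lowercase tokens."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

cv      = "research in radar signal processing and machine learning"
profile = "machine learning researcher focused on radar signal processing"
spam    = "cheap pharmacy pills best prices order now"

# A targeted attachment scores far higher against the victim's profile
# than unrelated spam text does.
targeted_score = cosine(tf_vector(cv), tf_vector(profile))
spam_score     = cosine(tf_vector(spam), tf_vector(profile))
```

Aggregating such scores over many sender-recipient pairs is what lets a campaign be distinguished statistically from random spam, as the abstract's p < 10^-4 result does with far stronger models.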

Cyber network mission dependencies

Published in:
MIT Lincoln Laboratory Report TR-1189

Summary

Cyber assets are critical to mission success in every arena of the Department of Defense. Because all DoD missions depend on cyber infrastructure, failure to secure network assets and assure the capabilities they enable will pose a fundamental risk to any defense mission. The impact of a cyber attack is not well understood by warfighters or leadership. It is critical that the DoD develop better cognizance of Cyber Network Mission Dependencies (CNMD). This report identifies the major drivers for mapping missions to network assets, introduces existing technologies in the mission-mapping landscape, and proposes directions for future development.

Missing the point(er): on the effectiveness of code pointer integrity

Summary

Memory corruption attacks continue to be a major vector of attack for compromising modern systems. Numerous defenses have been proposed against memory corruption attacks, but they all have their limitations and weaknesses. Stronger defenses such as complete memory safety for legacy languages (C/C++) incur a large overhead, while weaker ones such as practical control flow integrity have been shown to be ineffective. A recent technique called code pointer integrity (CPI) promises to balance security and performance by focusing memory safety on code pointers, thus preventing most control-hijacking attacks while maintaining low overhead. CPI protects access to code pointers by storing them in a safe region that is protected by instruction level isolation. On x86-32, this isolation is enforced by hardware; on x86-64 and ARM, isolation is enforced by information hiding. We show that, on architectures without segmentation support, where CPI relies on information hiding, CPI's safe region can be leaked and then maliciously modified by using data pointer overwrites. We implement a proof-of-concept exploit against Nginx and successfully bypass CPI implementations that rely on information hiding in 6 seconds with 13 observed crashes. We also present an attack that generates no crashes and is able to bypass CPI in 98 hours. Our attack demonstrates the importance of adequately protecting secrets in security mechanisms and the dangers of relying on difficulty of guessing without guaranteeing the absence of memory leaks.
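The fragility of information hiding can be modeled in a toy setting (assumptions throughout; this is not the paper's exploit): if the only protection for the safe region is a secret random address, any memory-disclosure primitive that lets an attacker probe addresses reduces security to a search problem.

```python
import random

ADDRESS_BITS = 16                       # toy address space; real ones are far larger
safe_region = random.randrange(2 ** ADDRESS_BITS)

def probe(addr):
    """Models a data-pointer-overwrite disclosure: does addr hit the region?"""
    return addr == safe_region

# A systematic scan of the toy address space always locates the hidden
# region; secrecy, unlike hardware-enforced isolation, offers no hard
# guarantee once an attacker can probe memory repeatedly.
found = next(a for a in range(2 ** ADDRESS_BITS) if probe(a))
```

In practice the search is accelerated by leaks that narrow the candidate range, which is how the paper's crash-free 98-hour bypass avoids tripping crash-based detection.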

Quantitative evaluation of moving target technology

Published in:
HST 2015, IEEE Int. Symp. on Technologies for Homeland Security, 14-16 April 2015.

Summary

Robust, quantitative measurement of cyber technology is critically needed to measure the utility, impact and cost of cyber technologies. Our work addresses this need by developing metrics and experimental methodology for a particular type of technology, moving target technology. In this paper, we present an approach to quantitative evaluation, including methodology and metrics, results of analysis, simulation and experiments, and a series of lessons learned.

HEtest: a homomorphic encryption testing framework

Published in:
3rd Workshop on Encrypted Computing and Applied Homomorphic Cryptography (WAHC 2015), 30 January 2015.

Summary

In this work, we present a generic open-source software framework that can evaluate the correctness and performance of homomorphic encryption software. Our framework, called HEtest, automates the entire process of a test: generation of data for testing (such as circuits and inputs), execution of a test, comparison of performance to an insecure baseline, statistical analysis of the test results, and production of a LaTeX report. To illustrate the capability of our framework, we present a case study of our analysis of the open-source HElib homomorphic encryption software. We stress though that HEtest is written in a modular fashion, so it can easily be adapted to test any homomorphic encryption software.
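A hedged sketch of the harness pattern the abstract describes: generate test inputs, run the system under test, compare its answers against an insecure plaintext baseline, and time the run. The interfaces below are assumptions; HEtest itself drives external homomorphic-encryption software and emits statistical reports.

```python
import random
import time

def generate_inputs(n, bits=8):
    """Random operand pairs standing in for generated test circuits/inputs."""
    return [(random.getrandbits(bits), random.getrandbits(bits))
            for _ in range(n)]

def baseline_add(a, b):
    # Insecure plaintext reference computation.
    return a + b

def candidate_add(a, b):
    # Stand-in for an encrypt / homomorphically-evaluate / decrypt path.
    return a + b

def run_test(inputs, candidate, baseline):
    """Check candidate correctness against the baseline and time it."""
    start = time.perf_counter()
    results = [candidate(a, b) for a, b in inputs]
    elapsed = time.perf_counter() - start
    correct = all(r == baseline(a, b) for r, (a, b) in zip(results, inputs))
    return {"correct": correct, "seconds": elapsed}

report = run_test(generate_inputs(100), candidate_add, baseline_add)
```

Keeping generation, execution, comparison, and reporting behind separate functions is what makes such a framework modular: swapping in different homomorphic-encryption software only replaces the candidate step.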
