Publications

Effective Entropy: security-centric metric for memory randomization techniques

Published in:
Proc. 7th USENIX Conf. on Cyber Security Experimentation and Test, CSET, 20 August 2014.

Summary

User space memory randomization techniques are an emerging field of cyber defensive technology that attempts to protect computing systems by randomizing the layout of memory. Quantitative metrics are needed to evaluate their effectiveness at securing systems against modern adversaries and to compare between randomization technologies. We introduce Effective Entropy, a measure of entropy in user space memory that quantitatively considers an adversary's ability to leverage low-entropy regions of memory via absolute and dynamic inter-section connections. Effective Entropy is indicative of adversary workload and enables comparison between different randomization techniques. Using Effective Entropy, we present a comparison of static Address Space Layout Randomization (ASLR), Position Independent Executable (PIE) ASLR, and a theoretical fine-grained randomization technique.
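
The core idea lends itself to a small graph computation. Below is a minimal sketch, assuming a toy model in which each memory section carries a base-address entropy in bits and a directed graph records the inter-section connections an adversary could follow; the section names, entropy values, and connections are illustrative, not measurements from the paper.

```python
# Toy model: per-section base-address entropy in bits (hypothetical values).
ENTROPY_BITS = {
    "exe":   0,    # non-PIE executable: fixed load address
    "heap":  13,
    "stack": 22,
    "libs":  28,
}

# CONNECTIONS[a] = sections reachable from a via an absolute pointer or a
# dynamically computed offset (e.g., pointers in the executable into libs).
CONNECTIONS = {
    "exe":   {"libs", "heap"},
    "heap":  {"stack"},
    "stack": set(),
    "libs":  set(),
}

def effective_entropy(target: str) -> int:
    """Entropy the adversary actually faces for `target`: the minimum
    entropy over every section from which `target` is reachable, since
    disclosing any such section also discloses `target`."""
    best = ENTROPY_BITS[target]
    frontier = [s for s, outs in CONNECTIONS.items() if target in outs]
    seen = set(frontier)
    while frontier:                        # walk the graph backward
        s = frontier.pop()
        best = min(best, ENTROPY_BITS[s])
        for p, outs in CONNECTIONS.items():
            if s in outs and p not in seen:
                seen.add(p)
                frontier.append(p)
    return best

for section in ENTROPY_BITS:
    print(section, effective_entropy(section), "bits")
# With a non-PIE executable holding pointers into libs, libs' 28 bits of
# raw entropy collapse to 0 bits of effective entropy.
```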

Adaptive attacker strategy development against moving target cyber defenses

Summary

A model of strategy formulation is used to study how an adaptive attacker learns to overcome a moving target cyber defense. The attacker-defender interaction is modeled as a game in which a defender deploys a temporal platform migration defense. Against this defense, a population of attackers develops strategies specifying the temporal ordering of resource investments that bring targeted zero-day exploits into existence. Attacker responses to two defender temporal platform migration scheduling policies are examined. In the first policy, the defender selects the active platform in each match uniformly at random from a pool of available platforms. In the second policy, the defender schedules each successive platform to maximize the diversity of the source code presented to the attacker. Adaptive attacker response strategies are modeled by finite state machine (FSM) constructs that evolve during simulated play against defender strategies via an evolutionary algorithm. It is demonstrated that the attacker learns to invest heavily in exploit creation for the platform with the least similarity to other platforms when faced with a diversity defense, while avoiding investment in exploits for this least similar platform when facing a randomization defense. Additionally, it is demonstrated that the diversity-maximizing defense is superior for shorter duration attacker-defender engagements, but performs sub-optimally in extended attacker-defender interactions.
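
A toy sketch of the simulated game follows, assuming a much simpler strategy encoding than the paper's finite state machines: here an attacker genome is just the order in which to invest in per-platform exploit development, evolved against a randomizing defender. The platform count, development cost, engagement length, and evolutionary-algorithm settings are all illustrative.

```python
import random

PLATFORMS = 4     # size of the defender's platform pool (illustrative)
DEV_COST = 5      # rounds of investment to finish one zero-day exploit
ROUNDS = 60       # length of one attacker-defender engagement

def defender_schedule():
    """Randomization policy: pick the active platform uniformly per round."""
    return [random.randrange(PLATFORMS) for _ in range(ROUNDS)]

def fitness(order, schedule):
    """Fraction of rounds in which the active platform is already exploited."""
    done, progress, i, hits = set(), 0, 0, 0
    for active in schedule:
        if i < len(order):            # keep investing down the ordering
            progress += 1
            if progress == DEV_COST:
                done.add(order[i]); progress = 0; i += 1
        hits += active in done
    return hits / len(schedule)

def mutate(order):
    """Swap two positions in the investment ordering."""
    a, b = random.sample(range(PLATFORMS), 2)
    child = order[:]
    child[a], child[b] = child[b], child[a]
    return child

# simple (mu + lambda)-style evolutionary loop
pop = [random.sample(range(PLATFORMS), PLATFORMS) for _ in range(30)]
for gen in range(100):
    sched = defender_schedule()
    pop.sort(key=lambda o: -fitness(o, sched))
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

print("evolved investment order:", pop[0])
```

Swapping defender_schedule() for a diversity-maximizing scheduler is the natural way to reproduce the paper's comparison between the two policies.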

An Expectation Maximization Approach to Detecting Compromised Remote Access Accounts

Published in:
Proceedings of FLAIRS 2013, St. Pete Beach, Fla.

Summary

Just as credit-card companies are able to detect aberrant transactions on a customer's credit card, it would be useful to have methods that could automatically detect when a user's login credentials for Virtual Private Network (VPN) access have been compromised. Here we present a novel method for detecting that a VPN account has been compromised, in a manner that bootstraps a model of the second, unauthorized user.
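
As a concrete illustration of the bootstrapping step, here is a minimal expectation-maximization sketch, assuming each VPN session is reduced to a single numeric feature (login hour) and both users are modeled as Gaussians; the paper's actual features and models are not given here, so everything below is a stand-in.

```python
import math, random

def em_two_user(x, iters=50):
    """Fit a two-component Gaussian mixture: legitimate user + intruder."""
    mu0 = sum(x) / len(x)
    mu = [mu0, max(x, key=lambda v: abs(v - mu0))]  # seed intruder at an outlier
    sigma = [2.0, 2.0]
    pi = [0.9, 0.1]                   # prior: most sessions are legitimate

    def pdf(v, m, s):
        return math.exp(-((v - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

    for _ in range(iters):
        # E-step: responsibility of each component for each session
        r = []
        for v in x:
            w = [pi[k] * pdf(v, mu[k], sigma[k]) for k in range(2)]
            z = sum(w)
            r.append([wi / z for wi in w])
        # M-step: re-estimate means, variances, and priors
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            mu[k] = sum(ri[k] * v for ri, v in zip(r, x)) / nk
            sigma[k] = max(0.5, math.sqrt(
                sum(ri[k] * (v - mu[k]) ** 2 for ri, v in zip(r, x)) / nk))
            pi[k] = nk / len(x)
    return mu, sigma, pi

# mostly daytime logins, plus a few 3 a.m. sessions from a second user
sessions = [random.gauss(14, 2) for _ in range(95)] + \
           [random.gauss(3, 1) for _ in range(5)]
print(em_two_user(sessions))
```

A session whose responsibility under the second component exceeds a threshold would then be flagged as evidence of compromise.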

Architecture-independent dynamic information flow tracking

Published in:
CC 2013: 22nd Int. Conf. on Compiler Construction, 16-24 March 2013, pp. 144-163.

Summary

Dynamic information flow tracking is a well-known dynamic software analysis technique with a wide variety of applications that range from making systems more secure, to helping developers and analysts better understand the code that systems are executing. Traditionally, the fine-grained analysis capabilities that are desired for the class of these systems that operates at the binary level require tight coupling to a specific ISA. This places a heavy burden on developers of these systems, since significant domain knowledge is required to support each ISA, and the effort expended on one ISA implementation cannot be amortized to support other ISAs. Further, the correctness of the system must be carefully evaluated for each new ISA. In this paper, we present a general approach to information flow tracking that allows us to support multiple ISAs without mastering the intricate details of each ISA we support, and without extensive verification. Our approach leverages binary translation to an intermediate representation where we have developed detailed, architecture-neutral information flow models. To support advanced instructions that are typically implemented in C code in binary translators, we also present a combined static/dynamic analysis that allows us to accurately and automatically support these instructions. We demonstrate the utility of our system in three different application settings: enforcing information flow policies, classifying algorithms by information flow properties, and characterizing types of programs that may exhibit excessive information flow in an information flow tracking system.
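
The architecture-neutral flow models can be pictured with a small sketch. The three-address IR below is made up for illustration (it is not the translator's actual IR); the point is that one set of propagation rules, written once against the IR, covers instructions lifted from any ISA.

```python
taint = {}  # virtual register or memory cell -> set of taint labels

def step(op, dst, srcs):
    """One rule covers every data-movement/ALU op lifted from any ISA:
    the destination's taint is the union of the sources' taint."""
    if op in ("mov", "load", "store", "add", "sub", "xor"):
        taint[dst] = set().union(*(taint.get(s, set()) for s in srcs))
    elif op == "const":
        taint[dst] = set()            # immediates carry no taint

# mark a network input buffer as tainted, then run a lifted trace
taint["mem:0x1000"] = {"net"}
trace = [
    ("load",  "r1", ["mem:0x1000"]),
    ("const", "r2", []),
    ("add",   "r3", ["r1", "r2"]),
    ("store", "mem:0x2000", ["r3"]),
]
for op, dst, srcs in trace:
    step(op, dst, srcs)

print(taint["mem:0x2000"])   # {'net'}: the input flowed to the output cell
```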

Experiences in cyber security education: the MIT Lincoln Laboratory Capture-the-Flag exercise

Published in:
Proc. 4th Workshop on Cyber Security Experimentation and Test, CSET, 8 August 2011.

Summary

Many popular and well-established cyber security Capture the Flag (CTF) exercises are held each year in a variety of settings, including universities and semi-professional security conferences. CTF formats also vary greatly, ranging from linear puzzle-like challenges to team-based offensive and defensive free-for-all hacking competitions. While these events are exciting and important as contests of skill, they offer limited educational opportunities. In particular, since participation requires considerable a priori domain knowledge and practical computer security expertise, the majority of typical computer science students are excluded from taking part in these events. Our goal in designing and running the MIT/LL CTF was to make the experience accessible to a wider community by providing an environment that would not only test and challenge the computer security skills of the participants, but also educate and prepare those without extensive prior expertise. This paper describes our experience in designing, organizing, and running an education-focused CTF, and discusses our teaching methods, game design, scoring measures, logged data, and lessons learned.

Virtuoso: narrowing the semantic gap in virtual machine introspection

Published in:
2011 IEEE Symp. on Security and Privacy, 22-25 May 2011, pp. 297-312.

Summary

Introspection has featured prominently in many recent security solutions, such as virtual machine-based intrusion detection, forensic memory analysis, and low-artifact malware analysis. Widespread adoption of these approaches, however, has been hampered by the semantic gap: in order to extract meaningful information about the current state of a virtual machine, detailed knowledge of the guest operating system's inner workings is required. In this paper, we present a novel approach for automatically creating introspection tools for security applications with minimal human effort. By analyzing dynamic traces of small, in-guest programs that compute the desired introspection information, we can produce new programs that retrieve the same information from outside the guest virtual machine. We demonstrate the efficacy of our techniques by automatically generating 17 programs that retrieve security information across 3 different operating systems, and show that their functionality is unaffected by the compromise of the guest system. Our technique allows introspection tools to be effortlessly generated for multiple platforms, and enables the development of rich introspection-based security applications.
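
To make the semantic gap concrete, here is a sketch of what a generated introspection program amounts to: walking a guest kernel's process list directly in a raw memory image. The struct offsets and layout below are hypothetical stand-ins; needing exactly this kind of OS-internal knowledge by hand is what the paper's trace-based generation automates away.

```python
import struct

# hypothetical guest-OS task-struct layout (differs per kernel build)
OFF_NEXT = 0x08   # offset of the next-task pointer
OFF_PID  = 0x10   # offset of the pid field
OFF_NAME = 0x18   # offset of a 16-byte name field

def read_u64(mem, addr):
    return struct.unpack_from("<Q", mem, addr)[0]

def list_processes(mem, head):
    """Yield (pid, name) by following the task list from `head`, exactly
    as an out-of-guest tool must, with no in-guest OS API to call."""
    addr, seen = head, set()
    while addr and addr not in seen:
        seen.add(addr)
        pid = read_u64(mem, addr + OFF_PID)
        raw = mem[addr + OFF_NAME: addr + OFF_NAME + 16]
        yield pid, raw.split(b"\0")[0].decode(errors="replace")
        addr = read_u64(mem, addr + OFF_NEXT)

# build a tiny fake memory image with two tasks linked in a cycle
mem = bytearray(0x100)
def make_task(at, nxt, pid, name):
    struct.pack_into("<Q", mem, at + OFF_NEXT, nxt)
    struct.pack_into("<Q", mem, at + OFF_PID, pid)
    mem[at + OFF_NAME: at + OFF_NAME + len(name)] = name
make_task(0x30, 0x60, 1, b"init")
make_task(0x60, 0x30, 42, b"sshd")
print(list(list_processes(bytes(mem), 0x30)))  # [(1, 'init'), (42, 'sshd')]
```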

Achieving cyber survivability in a contested environment using a cyber moving target

Published in:
High Frontier, Vol. 7, No. 3, May 2011, pp. 9-13.

Summary

We describe two components for achieving cyber survivability in a contested environment: an architectural component that provides heterogeneous computing platforms and an assessment technology that complements the architectural component by analyzing the threat space and triggering reorientation based on the evolving threat level. Together, these technologies provide a cyber moving target that dynamically changes the properties of the system to disadvantage the adversary and provide resiliency and survivability.
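
A toy sketch of how the two components interact is below, assuming a made-up threat score in [0, 1] produced by the assessment component; the platform pool, sensor feed, and migration threshold are illustrative, not the paper's design.

```python
import random

PLATFORMS = ["linux-x86", "linux-arm", "bsd-x86", "solaris-sparc"]

def assess_threat(events):
    """Stand-in assessment: fraction of recent events flagged suspicious."""
    return sum(events) / max(1, len(events))

def migrate(current):
    """Reorient: move to a platform chosen uniformly from the others."""
    return random.choice([p for p in PLATFORMS if p != current])

active = PLATFORMS[0]
for epoch in range(10):
    events = [random.random() < 0.3 for _ in range(20)]  # mock sensor feed
    if assess_threat(events) > 0.4:    # threat level triggers reorientation
        active = migrate(active)
    print(epoch, active)
```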

Generating client workloads and high-fidelity network traffic for controllable, repeatable experiments in computer security

Published in:
13th Int. Symp. on Recent Advances in Intrusion Detection, 14 September 2010, pp. 218-237.

Summary

Rigorous scientific experimentation in system and network security remains an elusive goal. Recent work has outlined three basic requirements for experiments, namely that hypotheses must be falsifiable, experiments must be controllable, and experiments must be repeatable and reproducible. Despite their simplicity, these goals are difficult to achieve, especially when dealing with client-side threats and defenses, where user input is often required as part of the experiment. In this paper, we present techniques for making security experiments involving client-side desktop applications, such as web browsers, PDF readers, host-based firewalls, and intrusion detection systems, more controllable and more easily repeatable. First, we present techniques for using statistical models of user behavior to drive real, binary, GUI-enabled application programs in place of a human user. Second, we present techniques based on adaptive replay of application dialog that allow us to quickly and efficiently reproduce reasonable mock-ups of remotely-hosted applications to give the illusion of Internet connectedness on an isolated testbed. We demonstrate the utility of these techniques in an example experiment comparing the system resource consumption of a Windows machine running anti-virus protection versus an unprotected system.
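
The first technique can be sketched as a Markov-chain user model emitting GUI-level actions. The states, transition probabilities, and actions below are illustrative; the paper drives real binary GUI applications, which this stub only prints.

```python
import random

# P(next state | current state) for a hypothetical desktop user
TRANSITIONS = {
    "idle":   {"idle": 0.6, "browse": 0.3, "email": 0.1},
    "browse": {"browse": 0.5, "idle": 0.2, "email": 0.3},
    "email":  {"email": 0.4, "idle": 0.4, "browse": 0.2},
}
ACTIONS = {
    "idle":   lambda: None,
    "browse": lambda: print("click a link in the browser"),
    "email":  lambda: print("compose a message"),
}

def drive(steps=10, state="idle"):
    """Emit a sequence of user actions by walking the Markov chain."""
    for _ in range(steps):
        ACTIONS[state]()
        state = random.choices(list(TRANSITIONS[state]),
                               weights=list(TRANSITIONS[state].values()))[0]

drive()
```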

Proficiency testing for imaging and audio enhancement: guidelines for evaluation

Published in:
Int. Assoc. of Forensic Sciences, IAFS, 21-26 July 2008.

Summary

Proficiency tests in the forensic sciences are vital in the accreditation and quality assurance process. Most commercially available proficiency testing targets examiners in the traditional forensic disciplines, such as latent prints, drug analysis, DNA, and questioned documents. Each of these disciplines is identification-based. There are other forensic disciplines, however, where the output of the examination is not an identification of a person or substance. Two such disciplines are audio enhancement and video/image enhancement.

Bridging the gap between linguists and technology developers: large-scale, sociolinguistic annotation for dialect and speaker recognition

Published in:
Proc. 6th Int. Conf. on Language Resources and Evaluation, LREC, 28 May 2008.

Summary

Recent years have seen increased interest within the speaker recognition community in high-level features including, for example, lexical choice, idiomatic expressions, or syntactic structures. The promise of speaker recognition in forensic applications drives development toward systems robust to channel differences by selecting features inherently robust to channel differences. Within the language recognition community, there is growing interest in differentiating not only languages but also mutually intelligible dialects of a single language. Decades of research in dialectology suggest that high-level features can enable systems to cluster speakers according to the dialects they speak. The Phanotics (Phonetic Annotation of Typicality in Conversational Speech) project seeks to identify high-level features characteristic of American dialects, annotate a corpus for these features, use the data to develop dialect recognition systems, and use the categorization to create better models for speaker recognition. The data, once published, should be useful to other developers of speaker and dialect recognition systems and to dialectologists and sociolinguists. We expect the methods will generalize well beyond the speakers, dialects, and languages discussed here and should, if successful, provide a model for how linguists and technology developers can collaborate in the future for the benefit of both groups and toward a deeper understanding of how languages vary and change.