Publications

Generating client workloads and high-fidelity network traffic for controllable, repeatable experiments in computer security

Published in:
13th Int. Symp. on Recent Advances in Intrusion Detection, 14 September 2010, pp. 218-237.

Summary

Rigorous scientific experimentation in system and network security remains an elusive goal. Recent work has outlined three basic requirements for experiments: hypotheses must be falsifiable, experiments must be controllable, and experiments must be repeatable and reproducible. Despite their simplicity, these goals are difficult to achieve, especially when dealing with client-side threats and defenses, where user input is often required as part of the experiment. In this paper, we present techniques for making security experiments involving client-side desktop applications, such as web browsers and PDF readers, and host-based defenses, such as firewalls and intrusion detection systems, more controllable and more easily repeatable. First, we present techniques for using statistical models of user behavior to drive real, binary, GUI-enabled application programs in place of a human user. Second, we present techniques based on adaptive replay of application dialog that allow us to quickly and efficiently reproduce reasonable mock-ups of remotely hosted applications, giving the illusion of Internet connectivity on an isolated testbed. We demonstrate the utility of these techniques in an example experiment comparing the system resource consumption of a Windows machine running anti-virus protection with that of an unprotected system.
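
To make the first technique concrete, here is a minimal sketch of a statistical user-behavior model, assuming a first-order Markov chain with invented action names, transition probabilities, and think-time parameters; the paper's actual models and GUI-driving machinery are not shown here. In the paper's setting, each emitted action would be translated into real mouse and keyboard input sent to a binary application.

```python
import random
import time

# Illustrative transition table: from each user action, a weighted
# choice of the next action (states and weights are assumptions).
TRANSITIONS = {
    "browse_page": [("browse_page", 0.5), ("read_pdf", 0.2), ("idle", 0.3)],
    "read_pdf":    [("browse_page", 0.6), ("read_pdf", 0.1), ("idle", 0.3)],
    "idle":        [("browse_page", 0.7), ("read_pdf", 0.3)],
}

def next_state(state: str) -> str:
    """Sample the next user action from the transition distribution."""
    actions, weights = zip(*TRANSITIONS[state])
    return random.choices(actions, weights=weights, k=1)[0]

def drive_user(steps: int = 10, state: str = "idle") -> None:
    """Emit a stream of user actions separated by randomized think times."""
    for _ in range(steps):
        state = next_state(state)
        think_time = random.expovariate(1 / 5.0)  # mean 5 s between actions
        print(f"action={state} after {think_time:.1f}s think time")
        time.sleep(min(think_time, 0.01))  # shortened for demonstration

if __name__ == "__main__":
    drive_user()
```

Because the sequence is driven by a fixed seed-able random process rather than a live user, the same workload can be replayed across experimental runs, which is what makes the experiments controllable and repeatable.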

Extending the DARPA off-line intrusion detection evaluations

Published in:
DARPA Information Survivability Conf. and Exposition II, 12-14 June 2001, pp. 35-45.

Summary

The 1998 and 1999 DARPA off-line intrusion detection evaluations assessed the performance of intrusion detection systems using realistic background traffic and many examples of realistic attacks. This paper discusses three extensions to these evaluations. First, the Lincoln Adaptable Real-time Information Assurance Testbed (LARIAT) has been developed to simplify intrusion detection development and evaluation. LARIAT allows researchers and operational users to rapidly configure and run real-time intrusion detection and correlation tests with robust background traffic and attacks in their laboratories. Second, "Scenario Datasets" have been crafted to provide examples of multiple-component attack scenarios instead of the atomic attacks found in past evaluations. Third, extensive analysis of the 1999 evaluation data and results has provided understanding of many attacks, their manifestations, and the features used to detect them. This analysis will be used to develop models of attacks, intrusion detection systems, and intrusion detection system alerts. Successful models could reduce the need for expensive experimentation, allow proof-of-concept analysis and simulations, and form the foundation of a theory of intrusion detection.
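
The distinction between atomic attacks and multiple-component scenarios can be sketched as follows; the scenario name and steps below are illustrative assumptions, not drawn from the actual Scenario Datasets. A scenario is an ordered sequence of atomic steps, each of which an intrusion detection system may flag or miss independently.

```python
from dataclasses import dataclass, field

@dataclass
class AtomicAttack:
    name: str    # single attack action, e.g. a probe or break-in
    target: str  # host or subnet it is directed at

@dataclass
class AttackScenario:
    name: str
    steps: list[AtomicAttack] = field(default_factory=list)

    def detection_coverage(self, detected: set[str]) -> float:
        """Fraction of the scenario's component steps an IDS flagged."""
        if not self.steps:
            return 0.0
        hits = sum(1 for s in self.steps if s.name in detected)
        return hits / len(self.steps)

# Hypothetical staged-attack scenario built from atomic steps.
scenario = AttackScenario("ddos-staging", [
    AtomicAttack("probe", "victim-net"),
    AtomicAttack("break-in", "host-a"),
    AtomicAttack("install-agent", "host-a"),
    AtomicAttack("launch-flood", "victim-net"),
])
print(scenario.detection_coverage({"probe", "launch-flood"}))  # 0.5
```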

SARA: Survivable Autonomic Response Architecture

Published in:
DARPA Information Survivability Conf. and Exposition II, 12-14 June 2001, pp. 77-88.

Summary

This paper describes the architecture of a system being developed to defend information systems using coordinated autonomic responses. The system will also be used to test the hypothesis that an effective defense against fast, distributed information attacks requires rapid, coordinated, network-wide responses. The core components of the architecture are a run-time infrastructure (RTI), a communication language, a system model, and defensive components. The RTI incorporates a number of innovative design concepts and provides fast, reliable, exploitation-resistant communication and coordination services to the components defending the network, even when challenged by a distributed attack. The architecture can be tailored to provide scalable information assurance defenses for large, geographically distributed, heterogeneous networks with multiple domains, each of which uses different technologies and requires different policies. The architecture can form the basis of a field-deployable system. An initial version is being developed for evaluation in a testbed that will be used to test the autonomic coordination and response hypothesis.
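
As a rough illustration of the coordination service the RTI provides, here is a minimal publish/subscribe sketch; the message fields, attack types, and component responses are assumptions for illustration, not SARA's actual communication language. Sensors publish alerts, and subscribed defensive components react network-wide.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    source: str       # reporting sensor
    attack_type: str  # e.g. "flood" (illustrative category)
    target: str       # affected host or subnet

class RunTimeInfrastructure:
    """Routes alerts from publishers to subscribed defensive components."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Alert], None]]] = defaultdict(list)

    def subscribe(self, attack_type: str, handler: Callable[[Alert], None]) -> None:
        """Register a defensive component's handler for an attack type."""
        self._subscribers[attack_type].append(handler)

    def publish(self, alert: Alert) -> None:
        """Deliver an alert to every component subscribed to its type."""
        for handler in self._subscribers[alert.attack_type]:
            handler(alert)

rti = RunTimeInfrastructure()
rti.subscribe("flood", lambda a: print(f"firewall: rate-limit {a.target}"))
rti.subscribe("flood", lambda a: print(f"router: reroute around {a.target}"))
rti.publish(Alert(source="sensor-3", attack_type="flood", target="10.0.0.0/24"))
```

Fanning one alert out to multiple independent responders mirrors the hypothesis the paper sets out to test: that defending against fast, distributed attacks requires rapid, coordinated, network-wide responses rather than isolated local ones.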
