Publications

Computing on masked data: a high performance method for improving big data veracity

Published in:
HPEC 2014: IEEE Conf. on High Performance Extreme Computing, 9-11 September 2014.

Summary

The growing gap between data and users calls for innovative tools that address the challenges posed by big data volume, velocity, and variety. Along with these standard three V's of big data, an emerging fourth "V" is veracity, which addresses the confidentiality, integrity, and availability of the data. Traditional cryptographic techniques that ensure the veracity of data can have overheads too large to apply to big data. This work introduces a new technique called Computing on Masked Data (CMD), which improves data veracity by allowing computations to be performed directly on masked data and ensuring that only authorized recipients can unmask the data. Using the sparse linear algebra of associative arrays, CMD can be performed with significantly less overhead than other approaches while still supporting a wide range of linear algebraic operations on the masked data. Databases with strong support for sparse operations, such as SciDB or Apache Accumulo, are ideally suited to this technique. Examples are shown for the application of CMD to a complex DNA matching algorithm and to database operations over social media data.
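
As a rough illustration of the core idea (not the authors' implementation), the sketch below masks the row and column keys of a sparse associative array with a deterministic keyed hash, so equality-based operations such as tallies and joins still work directly on the masked data; the mask key and data values are invented for illustration.

```python
import hmac, hashlib
from collections import Counter

# Minimal sketch of the CMD idea: masking keys with a deterministic
# keyed hash preserves equality, so equality-based sparse operations
# (joins, tallies) run directly on masked data. Only holders of
# MASK_KEY can unmask.
MASK_KEY = b"secret-mask-key"  # hypothetical key held by authorized parties

def mask(value: str) -> str:
    """Deterministic mask: equal inputs yield equal masked outputs."""
    return hmac.new(MASK_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

# An associative array as a sparse dict: (row, col) -> value.
records = {("alice", "likes"): 1, ("bob", "likes"): 1, ("alice", "posts"): 3}

# Mask the keys before handing the data to an untrusted store.
masked = {(mask(r), mask(c)): v for (r, c), v in records.items()}

# The untrusted side can still tally per masked column without unmasking.
col_totals = Counter()
for (_, c), v in masked.items():
    col_totals[c] += v
print(col_totals[mask("likes")])  # 2 -- computed entirely on masked keys
```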

Robust keys from physical unclonable functions

Published in:
Proc. 2014 IEEE Int. Symp. on Hardware-Oriented Security and Trust, HOST, 6-7 May 2014.

Summary

Weak physical unclonable functions (PUFs) can instantiate read-proof hardware tokens (Tuyls et al. 2006, CHES) where benign variation, such as changing temperature, yields a consistent key, but invasive attempts to learn the key destroy it. Previous approaches evaluate security by measuring how much an invasive attack changes the derived key (Pappu et al. 2002, Science). If some attack insufficiently changes the derived key, an expert must redesign the hardware. An unexplored alternative uses software to enhance token response to known physical attacks. Our approach draws on machine learning. We propose a variant of linear discriminant analysis (LDA), called PUF LDA, which reduces noise levels in PUF instances while enhancing changes from known attacks. We compare PUF LDA with standard techniques using an optical coating PUF and the following feature types: raw pixels, fast Fourier transform, short-time Fourier transform, and wavelets. We measure the true positive rate for valid detection at a 0% false positive rate (no mistakes on samples taken after an attack). PUF LDA improves the true positive rate from 50% on average (with a large variance across PUFs) to near 100%. While a well-designed physical process is irreplaceable, PUF LDA enables system designers to improve the PUF reliability-security tradeoff by incorporating attacks without redesigning the hardware token.
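
The evaluation pipeline can be sketched on synthetic data as below, substituting scikit-learn's standard LDA for the paper's PUF LDA variant; the FFT feature type and the threshold rule for measuring the true positive rate at a 0% false positive rate follow the description above, while all data, dimensions, and sample counts are invented.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-ins: noisy responses from a valid token and from an
# attacked (altered) token. Real inputs would be PUF measurements.
valid = rng.normal(0.0, 1.0, (200, 64))
attacked = rng.normal(0.8, 1.0, (200, 64))

def fft_features(x):
    """FFT magnitude features, one of the feature types compared above."""
    return np.abs(np.fft.rfft(x, axis=1))

X = fft_features(np.vstack([valid, attacked]))
y = np.array([1] * 200 + [0] * 200)  # 1 = valid, 0 = post-attack

# Standard LDA stands in here for the paper's PUF LDA variant.
lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(X)

# True positive rate at a 0% false positive rate: threshold just above
# the highest score any post-attack sample achieves.
threshold = scores[y == 0].max()
tpr_at_zero_fpr = (scores[y == 1] > threshold).mean()
print(f"TPR at 0% FPR: {tpr_at_zero_fpr:.2%}")
```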

Generating client workloads and high-fidelity network traffic for controllable, repeatable experiments in computer security

Published in:
13th Int. Symp. on Recent Advances in Intrusion Detection, 14 September 2010, pp. 218-237.

Summary

Rigorous scientific experimentation in system and network security remains an elusive goal. Recent work has outlined three basic requirements for experiments: hypotheses must be falsifiable, experiments must be controllable, and experiments must be repeatable and reproducible. Despite their simplicity, these goals are difficult to achieve, especially when dealing with client-side threats and defenses, where user input is often required as part of the experiment. In this paper, we present techniques for making security experiments involving client-side desktop applications, such as web browsers, PDF readers, host-based firewalls, and intrusion detection systems, more controllable and more easily repeatable. First, we present techniques for using statistical models of user behavior to drive real, binary, GUI-enabled application programs in place of a human user. Second, we present techniques based on adaptive replay of application dialog that allow us to quickly and efficiently reproduce reasonable mock-ups of remotely hosted applications, giving the illusion of Internet connectedness on an isolated testbed. We demonstrate the utility of these techniques in an example experiment comparing the system resource consumption of a Windows machine running anti-virus protection with that of an unprotected system.
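
A minimal sketch of the first technique follows, assuming a first-order Markov chain as the statistical user model; the states, transition probabilities, and dispatch mechanism are invented for illustration and would in practice drive a real GUI application through an automation layer.

```python
import random

# Sketch of a statistical user model: a first-order Markov chain over
# GUI actions, sampled to drive an application in place of a human.
# States and probabilities here are invented for illustration.
TRANSITIONS = {
    "start":        [("open_browser", 0.7), ("open_pdf", 0.3)],
    "open_browser": [("click_link", 0.6), ("type_url", 0.3), ("idle", 0.1)],
    "open_pdf":     [("scroll", 0.8), ("idle", 0.2)],
    "click_link":   [("click_link", 0.5), ("idle", 0.5)],
    "type_url":     [("click_link", 0.7), ("idle", 0.3)],
    "scroll":       [("scroll", 0.6), ("idle", 0.4)],
    "idle":         [("start", 1.0)],
}

def sample_session(steps=10, state="start"):
    """Sample a sequence of user actions from the Markov model."""
    actions = []
    for _ in range(steps):
        choices, weights = zip(*TRANSITIONS[state])
        state = random.choices(choices, weights=weights)[0]
        actions.append(state)
    return actions

# Each sampled action would be dispatched to the real GUI application,
# e.g. via an automation layer injecting mouse and keyboard events.
print(sample_session())
```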

Validating and restoring defense in depth using attack graphs

Summary

Defense in depth is a common strategy that uses layers of firewalls to protect Supervisory Control and Data Acquisition (SCADA) subnets and other critical resources on enterprise networks. A tool named NetSPA is presented that analyzes firewall rules and vulnerabilities to construct attack graphs. These show how inside and outside attackers can progress by successively compromising exposed vulnerable hosts with the goal of reaching critical internal targets. NetSPA generates attack graphs and automatically analyzes them to produce a small set of prioritized recommendations to restore defense in depth. Field trials on networks with up to 3,400 hosts demonstrate that firewalls often do not provide defense in depth due to misconfigurations and critical unpatched vulnerabilities on hosts. In all cases, a small number of recommendations was provided to restore defense in depth. Simulations on networks with up to 50,000 hosts demonstrate that this approach scales well to enterprise-size networks.
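
A minimal sketch of attack graph generation under simplifying assumptions (single-step remote exploits, precomputed firewall reachability) appears below; this is not NetSPA's algorithm, and the hosts and vulnerabilities are invented for illustration.

```python
from collections import deque

# Hypothetical inputs: which hosts can reach which (after firewall rules),
# and which hosts expose a remotely exploitable vulnerability.
reachable = {
    "internet": {"dmz_web"},
    "dmz_web": {"corp_db", "scada_gw"},
    "corp_db": {"scada_gw"},
    "scada_gw": {"plc"},
}
vulnerable = {"dmz_web", "corp_db", "plc"}  # unpatched, exploitable hosts

def attack_graph(start="internet"):
    """BFS over hosts the attacker can reach and exploit; returns edges."""
    compromised, edges, frontier = {start}, [], deque([start])
    while frontier:
        src = frontier.popleft()
        for dst in reachable.get(src, ()):
            if dst in vulnerable and dst not in compromised:
                compromised.add(dst)
                edges.append((src, dst))
                frontier.append(dst)
    return edges

# If the critical target "plc" appears in the graph, defense in depth has
# failed and edges on paths to it become candidate patch recommendations.
print(attack_graph())  # [('internet', 'dmz_web'), ('dmz_web', 'corp_db')]
```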

Securing communication of dynamic groups in dynamic network-centric environments

Summary

We developed a new approach and designed a practical solution for securing communication of dynamic groups in dynamic network-centric environments, such as airborne and terrestrial on-the-move networks. The solution is called Public Key Group Encryption (PKGE). In this paper, we define the problem of group encryption, motivate the need for decentralized group encryption services, and explain our vision for designing such services. We then describe our solution, PKGE, at a high-level, and report on the prototype implementation, performance experiments, and a demonstration with GAIM/Jabber chat.
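
PKGE's construction is not reproduced here, but the baseline it improves on can be sketched: hybrid group encryption that encrypts a message once under a fresh symmetric key and then wraps that key for each member under their public key. The sketch below assumes the third-party Python cryptography package; the member names and message are invented. The per-member wrapping cost, which grows with group size and churn, is part of what motivates decentralized group encryption services such as PKGE.

```python
# Baseline hybrid group-encryption sketch (not PKGE's construction).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Each group member holds an RSA key pair (invented members).
members = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
           for name in ("alice", "bob", "carol")}

def group_encrypt(message: bytes, public_keys):
    session_key = Fernet.generate_key()            # one-time symmetric key
    ciphertext = Fernet(session_key).encrypt(message)
    wrapped = {name: pk.encrypt(session_key, OAEP)  # one wrap per member
               for name, pk in public_keys.items()}
    return ciphertext, wrapped

def group_decrypt(ciphertext, wrapped, name, private_key):
    session_key = private_key.decrypt(wrapped[name], OAEP)
    return Fernet(session_key).decrypt(ciphertext)

ct, wraps = group_encrypt(b"move to waypoint 7",
                          {n: k.public_key() for n, k in members.items()})
print(group_decrypt(ct, wraps, "bob", members["bob"]))
```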

Evaluating and strengthening enterprise network security using attack graphs

Summary

Assessing the security of large enterprise networks is complex and labor intensive. Current security analysis tools typically examine only individual firewalls, routers, or hosts separately and do not comprehensively analyze overall network security. We present a new approach that uses configuration information on firewalls and vulnerability information on all network devices to build attack graphs that show how far inside and outside attackers can progress through a network by successively compromising exposed and vulnerable hosts. In addition, attack graphs are automatically analyzed to produce a small set of prioritized recommendations to enhance network security. Field trials on networks with up to 3,400 hosts demonstrate the ability to accurately identify a small number of critical stepping-stone hosts that need to be patched to protect against external attackers. Simulation studies on complex networks with more than 40,000 hosts demonstrate good scaling. This analysis can be used for many purposes, including identifying critical stepping-stone hosts to patch or protect with a firewall, comparing the security of alternative network designs, determining the security risk caused by proposed changes in firewall rules or new vulnerabilities, and identifying the most critical hosts to patch when a new vulnerability is announced. Unique aspects of this work are new attack graph generation algorithms that scale to enterprise networks with thousands of hosts, efficient approaches to determine what other hosts and ports in large networks are reachable from each individual host, automatic data importation from network vulnerability scanners and firewalls, and automatic attack graph analyses to generate recommendations.
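
As a rough sketch of how a small prioritized recommendation set might be derived from an attack graph, the code below greedily selects stepping-stone hosts whose patching cuts all enumerated attack paths to a critical target; this is a generic set-cover heuristic, not the paper's algorithm, and the example network is invented.

```python
# Hypothetical attack graph edges (attacker source -> compromised host)
# and a critical target; derive a small prioritized set of stepping-stone
# hosts to patch via greedy set cover (not the paper's exact algorithm).
edges = {
    "attacker": ["web1", "web2"],
    "web1": ["app"], "web2": ["app"],
    "app": ["db"],
}
TARGET = "db"

def all_paths(node="attacker", path=()):
    """Enumerate attacker-to-target paths by depth-first search."""
    path = path + (node,)
    if node == TARGET:
        yield path
    for nxt in edges.get(node, ()):
        yield from all_paths(nxt, path)

# Intermediate hosts on each path are the candidate stepping stones.
paths = [set(p) - {"attacker", TARGET} for p in all_paths()]

recommendations = []
while any(paths):
    candidates = set().union(*paths)
    best = max(candidates, key=lambda h: sum(h in p for p in paths))
    recommendations.append(best)
    paths = [p for p in paths if best not in p]
print(recommendations)  # ['app'] -- one patch severs every attack path
```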

System adaptation as a trust response in tactical ad hoc networks

Published in:
IEEE MILCOM 2003, 13-16 October 2003, pp. 209-214.

Summary

While mobile ad hoc networks offer significant improvements for tactical communications, these networks are vulnerable to node capture and other forms of cyberattack. In this paper we evaluated, via simulation, the impact of a passive attacker, a denial-of-service (DoS) attack, and a data swallowing attack. We compared two different adaptive network responses to these attacks against a baseline of no response for 10- and 20-node networks. Each response reflects a level of trust assigned to the captured node. Our simulation used a responsive variant of the ad hoc on-demand distance vector (AODV) routing algorithm and focused on the response performance. We assumed that the attacks had been detected and reported. We compared performance tradeoffs of attack, response, and network size by focusing on metrics such as "goodput", i.e., the percentage of messages that reach the intended destination untainted by the captured node. We showed, for example, that under general conditions a DoS attack response should minimize attacker impact, while a response to a data swallowing attack should minimize system risk and the trust assigned to the compromised node while retaining most of the response benefit. We also showed that the best network response depends on the mission goals, network configuration, density, network performance, attacker skill, and degree of compromise.
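
A toy illustration of the goodput metric follows, assuming invented routes, a single captured node, and a uniform per-hop drop probability:

```python
import random

random.seed(1)

# Toy illustration of "goodput": the percentage of messages that reach
# the destination untainted by the captured node. Routes and the
# captured node are invented for illustration.
ROUTES = [["s", "a", "b", "d"], ["s", "c", "d"], ["s", "a", "c", "d"]]
CAPTURED = "a"

def goodput(n_messages=1000, drop_prob=0.1):
    good = 0
    for _ in range(n_messages):
        route = random.choice(ROUTES)
        # Message survives only if every intermediate hop forwards it.
        delivered = all(random.random() > drop_prob for _ in route[1:-1])
        tainted = CAPTURED in route  # swallowed or modified en route
        good += delivered and not tainted
    return 100.0 * good / n_messages

# An adaptive response that routes around the captured node would raise
# goodput at the cost of longer paths or reduced connectivity.
print(f"goodput: {goodput():.1f}%")
```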

Extending the DARPA off-line intrusion detection evaluations

Published in:
DARPA Information Survivability Conf. and Exposition II, 12-14 June 2001, pp. 35-45.

Summary

The 1998 and 1999 DARPA off-line intrusion detection evaluations assessed the performance of intrusion detection systems using realistic background traffic and many examples of realistic attacks. This paper discusses three extensions to these evaluations. First, the Lincoln Adaptable Real-time Information Assurance Testbed (LARIAT) has been developed to simplify intrusion detection development and evaluation. LARIAT allows researchers and operational users to rapidly configure and run real-time intrusion detection and correlation tests with robust background traffic and attacks in their laboratories. Second, "Scenario Datasets" have been crafted to provide examples of multiple-component attack scenarios instead of the atomic attacks found in past evaluations. Third, extensive analysis of the 1999 evaluation data and results has provided understanding of many attacks, their manifestations, and the features used to detect them. This analysis will be used to develop models of attacks, intrusion detection systems, and intrusion detection system alerts. Successful models could reduce the need for expensive experimentation, allow proof-of-concept analysis and simulations, and form the foundation of a theory of intrusion detection.

Detecting low-profile probes and novel denial-of-service attacks

Summary

Attackers use probing attacks to discover host addresses and services available on each host. Once this information is known, an attacker can then issue a denial-of-service attack against the network, a host, or a service provided by a host. These attacks prevent access to the attacked part of the network. Until recently, only simple, easily defeated mechanisms were used for detecting probe attacks. Attackers defeat these mechanisms by creating stealthy low-profile attacks that include only a few, carefully crafted packets sent over an extended period of time. Furthermore, most mechanisms do not allow intrusion analysts to trade off detection rates for false alarm rates. We present an approach to detect stealthy attacks, an architecture for achieving real-time detections with a confidence measure, and the results of evaluating the system. Since the system outputs confidence values, an analyst can trade false alarm rate against detection rate.
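
The tradeoff the confidence outputs enable can be sketched by sweeping a threshold over alert confidence scores, as below; the score distributions are synthetic stand-ins invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch of the detection/false-alarm tradeoff enabled by confidence
# outputs: sweeping a threshold over alert confidence scores traces out
# an ROC-style curve. Scores here are synthetic stand-ins.
probe_scores = rng.normal(0.7, 0.15, 500)    # scores on true probe events
benign_scores = rng.normal(0.3, 0.15, 5000)  # scores on normal traffic

for threshold in (0.4, 0.5, 0.6, 0.7):
    detection = (probe_scores >= threshold).mean()
    false_alarm = (benign_scores >= threshold).mean()
    print(f"t={threshold:.1f}: detect {detection:.1%}, "
          f"false alarm {false_alarm:.1%}")
```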

Evaluating intrusion detection systems without attacking your friends: The 1998 DARPA intrusion detection evaluation

Summary

Intrusion detection systems monitor the use of computers and the network over which they communicate, searching for unauthorized use, anomalous behavior, and attempts to deny users, machines, or portions of the network access to services. Potential users of such systems need information that is rarely found in marketing literature, including how well a given system finds intruders and how much work is required to use and maintain that system in a fully functioning network with significant daily traffic. Researchers and developers can specify which prototypical attacks can be found by their systems, but without access to the normal traffic generated by day-to-day work, they cannot describe how well their systems detect real attacks while passing background traffic and avoiding false alarms. This information is critical: every declared intrusion requires time to review, regardless of whether it is a correct detection for which a real intrusion occurred, or whether it is merely a false alarm. To meet the needs of researchers, developers, and ultimately system administrators, we have developed the first objective, repeatable, and realistic measurement of intrusion detection system performance. Network traffic on an Air Force base was measured, characterized, and subsequently simulated on an isolated network on which a few computers were used to simulate thousands of different Unix systems and hundreds of different users during periods of normal network traffic. Simulated attackers mapped the network, issued denial of service attacks, illegally gained access to systems, and obtained super-user privileges. Attack types ranged from old, well-known attacks to new, stealthy attacks. Seven weeks of training data and two weeks of testing data were generated, filling more than 30 CD-ROMs. Methods and results from the 1998 DARPA intrusion detection evaluation will be highlighted, and preliminary plans for the 1999 evaluation will be presented.