Publications


Probabilistic reasoning for streaming anomaly detection

Published in:
2012 IEEE Statistical Signal Processing Workshop (SSP 2012), 5-8 August 2012, pp. 377-380.

Summary

In many applications it is necessary to determine whether an observation from an incoming high-volume data stream matches expectations or is anomalous. A common method for performing this task is to use an Exponentially Weighted Moving Average (EWMA), which smooths out the minor variations of the data stream. While EWMA is efficient at processing high-rate streams, it can be highly sensitive to abrupt transient changes in the data, which undermines its utility for detecting anomalies. In this paper we present a probabilistic approach to EWMA that dynamically adapts the weighting based on the probability of each observation. The result is robustness to transient anomalies combined with quick adaptation to genuine shifts in the data distribution.
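
This summary describes the idea but not the paper's exact update rule. As a rough illustration only, the sketch below is one plausible formulation, assuming a running Gaussian model whose smoothing weight is modulated by each observation's probability; the class name, parameter defaults, and Gaussian assumption are illustrative, not taken from the paper.

```python
import math

class ProbabilisticEWMA:
    """Sketch of a probability-weighted EWMA (illustrative formulation).

    Tracks running first and second moments of the stream; each update
    scales the smoothing weight by how likely the new observation is
    under a Gaussian model of the stream so far.
    """

    def __init__(self, alpha=0.95, beta=0.5):
        self.alpha = alpha  # base weight on history (illustrative default)
        self.beta = beta    # strength of the probabilistic adjustment
        self.s1 = 0.0       # running estimate of E[X]
        self.s2 = 0.0       # running estimate of E[X^2]
        self.n = 0

    def update(self, x):
        if self.n < 2:
            # Bootstrap with plain averaging until a variance estimate exists.
            a = 1.0 - 1.0 / (self.n + 1)
        else:
            std = math.sqrt(max(self.s2 - self.s1 ** 2, 1e-12))
            z = (x - self.s1) / std
            # Standard-normal density of the observation (at most ~0.399).
            p = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
            # Improbable points (p near 0) leave the weight on history at
            # its maximum, so an isolated outlier barely moves the model;
            # well-modeled points let the estimates track the stream.
            a = self.alpha * (1.0 - self.beta * p)
        self.s1 = a * self.s1 + (1.0 - a) * x
        self.s2 = a * self.s2 + (1.0 - a) * x * x
        self.n += 1
        return self.s1
```

An observation can then be flagged as anomalous when its z-score under the running model exceeds a chosen threshold.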

Cyber situational awareness through operational streaming analysis

Published in:
MILCOM 2011, IEEE Military Communications Conf., 7-10 November 2011, pp. 1152-1157.

Summary

As the scope and scale of Internet traffic continue to increase, the task of maintaining cyber situational awareness about this traffic becomes ever more difficult. There is a strong need for real-time, on-line algorithms that characterize high-speed, high-volume data to support relevant situational awareness. Recently, much work has been done to create and improve analysis algorithms that operate in a streaming fashion (minimal CPU and memory utilization) in order to calculate important summary statistics (moments) of this network data for the purpose of characterization. While the research literature contains improvements to streaming algorithms in terms of efficiency and accuracy (i.e., approximation with error bounds), it lacks results that demonstrate streaming algorithms in operational situations. The focus of our work is the development of a live network situational awareness system that relies upon streaming algorithms both to compute important stream characterizations and to detect anomalous behavior. We present our system and discuss its applicability to situational awareness of high-speed networks. We present refinements and enhancements that we have made to a well-known streaming algorithm, improving its performance as applied within our system. We also present performance and detection results of the system when it is applied to a live, high-speed, mid-scale enterprise network.
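
The summary does not name the specific streaming algorithm that was refined. As a generic illustration of the kind of constant-memory moment estimation it describes, here is Welford's online update for a stream's running mean and variance (shown only as an example of the technique class, not as the paper's algorithm):

```python
class RunningMoments:
    """Constant-memory estimate of a stream's mean and variance.

    One pass over the data, O(1) state: the flavor of streaming summary
    statistic (moment) computation described above. This is Welford's
    online algorithm, used here purely as an illustration.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n > 1 else 0.0
```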

Experience using active and passive mapping for network situational awareness

Published in:
5th IEEE Int. Symp. on Network Computing and Applications (NCA06), 24-26 July 2006, pp. 19-26.

Summary

Passive network mapping has often been proposed as an approach to maintain up-to-date information on networks between active scans. This paper presents a comparison of active and passive mapping on an operational network. On this network, active and passive tools found largely disjoint sets of services, and the passive system took weeks to discover the last 15% of active services. Active and passive mapping tools provided different, not complementary, information. Deploying passive mapping on an enterprise network does not reduce the need for timely active scans, due to non-overlapping coverage and potentially long discovery times.

The effect of identifying vulnerabilities and patching software on the utility of network intrusion detection

Published in:
Proc. 5th Int. Symp. on Recent Advances in Intrusion Detection, RAID 2002, 16-18 October 2002, pp. 307-326.

Summary

Vulnerability scanning and installing software patches for known vulnerabilities greatly affect the utility of network-based intrusion detection systems that use signatures to detect system compromises. A detailed timeline analysis of important remote-to-local vulnerabilities demonstrates that (1) vulnerabilities in widely-used server software are discovered infrequently (at most six times a year) and (2) software patches to prevent vulnerabilities from being exploited are available before or simultaneously with signatures. Signature-based intrusion detection systems will thus never detect successful system compromises on small secure sites where patches are installed as soon as they are available. Network intrusion detection systems may detect successful system compromises on large sites where it is impractical to eliminate all known vulnerabilities. On such sites, information from vulnerability scanning can be used to prioritize the large numbers of extraneous alerts caused by failed attacks and normal background traffic. On one class B network with roughly 10 web servers, this approach successfully filtered out 95% of all remote-to-local alerts.
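
The paper's filtering mechanism is not detailed in this summary. As a sketch of the general idea, assume each IDS alert names a target host and the vulnerability its signature exploits, and that a scan inventory maps hosts to the vulnerabilities found on them; all names below are hypothetical:

```python
def prioritize_alerts(alerts, scan_inventory):
    """Split IDS alerts by whether the target is actually vulnerable.

    alerts         -- iterable of (target_host, vulnerability_id) pairs
    scan_inventory -- dict mapping host -> set of vulnerability ids found
                      by the most recent vulnerability scan

    Alerts against hosts that are not vulnerable to the exploited flaw are
    almost certainly failed attacks or background noise, so they are
    demoted rather than shown to the analyst first.
    """
    high, low = [], []
    for host, vuln in alerts:
        if vuln in scan_inventory.get(host, set()):
            high.append((host, vuln))  # attack could have succeeded
        else:
            low.append((host, vuln))   # target not vulnerable: deprioritize
    return high, low
```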

Detecting low-profile probes and novel denial-of-service attacks

Summary

Attackers use probing attacks to discover host addresses and services available on each host. Once this information is known, an attacker can then issue a denial-of-service attack against the network, a host, or a service provided by a host. These attacks prevent access to the attacked part of the network. Until recently, only simple, easily defeated mechanisms were used for detecting probe attacks. Attackers defeat these mechanisms by creating stealthy, low-profile attacks that involve only a few carefully crafted packets sent over an extended period of time. Furthermore, most mechanisms do not allow intrusion analysts to trade off detection rates for false alarm rates. We present an approach to detect stealthy attacks, an architecture for achieving real-time detections with a confidence measure, and the results of evaluating the system. Since the system outputs confidence values, an analyst can trade false alarm rate against detection rate.
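
As an illustration of how a confidence output enables that tradeoff (the scoring itself is the paper's contribution and is not reproduced here), sweeping a threshold over scored events yields one (false-alarm rate, detection rate) operating point per threshold; names below are hypothetical:

```python
def roc_points(scores, labels, thresholds):
    """Compute (false-alarm rate, detection rate) pairs over thresholds.

    scores     -- per-event confidence values from the detector
    labels     -- True for events that are real probes, False otherwise
    thresholds -- candidate alert thresholds to sweep

    Raising the threshold suppresses false alarms at the cost of missed
    detections; the analyst picks the operating point they can afford.
    """
    points = []
    n_attack = sum(labels)
    n_benign = len(labels) - n_attack
    for t in thresholds:
        alarms = [s >= t for s in scores]
        detections = sum(a and l for a, l in zip(alarms, labels))
        false_alarms = sum(a and not l for a, l in zip(alarms, labels))
        points.append((false_alarms / max(n_benign, 1),
                       detections / max(n_attack, 1)))
    return points
```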

Analysis and results of the 1999 DARPA off-line intrusion detection evaluation

Published in:
Proc. Recent Advances in Intrusion Detection, RAID, 2-4 October 2000, pp. 162-182.

Summary

Eight sites participated in the second DARPA off-line intrusion detection evaluation in 1999. Three weeks of training and two weeks of test data were generated on a test bed that emulates a small government site. More than 200 instances of 58 attack types were launched against victim UNIX and Windows NT hosts. False alarm rates were low (less than 10 per day). Best detection was provided by network-based systems for old probe and old denial-of-service (DOS) attacks and by host-based systems for Solaris user-to-root (U2R) attacks. Best overall performance would have been provided by a combined system that used both host- and network-based intrusion detection. Detection accuracy was poor for previously unseen, new, stealthy, and Windows NT attacks. Ten of the 58 attack types were completely missed by all systems. Systems missed attacks because protocols and TCP services were not analyzed at all or to the depth required, because signatures for old attacks did not generalize to new attacks, and because auditing was not available on all hosts.

The 1999 DARPA Off-Line Intrusion Detection Evaluation

Published in:
Comput. Networks, Vol. 34, No. 4, October 2000, pp. 579-595.

Summary

Eight sites participated in the second Defense Advanced Research Projects Agency (DARPA) off-line intrusion detection evaluation in 1999. A test bed generated live background traffic similar to that on a government site containing hundreds of users on thousands of hosts. More than 200 instances of 58 attack types were launched against victim UNIX and Windows NT hosts in three weeks of training data and two weeks of test data. False-alarm rates were low (less than 10 per day). The best detection was provided by network-based systems for old probe and old denial-of-service (DOS) attacks and by host-based systems for Solaris user-to-root (U2R) attacks. The best overall performance would have been provided by a combined system that used both host- and network-based intrusion detection. Detection accuracy was poor for previously unseen, new, stealthy and Windows NT attacks. Ten of the 58 attack types were completely missed by all systems. Systems missed attacks because signatures for old attacks did not generalize to new attacks, auditing was not available on all hosts, and protocols and TCP services were not analyzed at all or to the depth required. Promising capabilities were demonstrated by host-based systems, anomaly detection systems and a system that performs forensic analysis on file system data.

Evaluating intrusion detection systems without attacking your friends: The 1998 DARPA intrusion detection evaluation

Summary

Intrusion detection systems monitor the use of computers and the network over which they communicate, searching for unauthorized use, anomalous behavior, and attempts to deny users, machines, or portions of the network access to services. Potential users of such systems need information that is rarely found in marketing literature, including how well a given system finds intruders and how much work is required to use and maintain that system in a fully functioning network with significant daily traffic. Researchers and developers can specify which prototypical attacks can be found by their systems, but without access to the normal traffic generated by day-to-day work, they cannot describe how well their systems detect real attacks while passing background traffic and avoiding false alarms. This information is critical: every declared intrusion requires time to review, regardless of whether it is a correct detection for which a real intrusion occurred, or whether it is merely a false alarm. To meet the needs of researchers, developers, and ultimately system administrators, we have developed the first objective, repeatable, and realistic measurement of intrusion detection system performance. Network traffic on an Air Force base was measured, characterized, and subsequently simulated on an isolated network on which a few computers were used to simulate thousands of different UNIX systems and hundreds of different users during periods of normal network traffic. Simulated attackers mapped the network, issued denial-of-service attacks, illegally gained access to systems, and obtained super-user privileges. Attack types ranged from old, well-known attacks to new, stealthy attacks. Seven weeks of training data and two weeks of testing data were generated, filling more than 30 CD-ROMs. Methods and results from the 1998 DARPA intrusion detection evaluation will be highlighted, and preliminary plans for the 1999 evaluation will be presented.
