The content and labeling of datasets rely significantly on reports and feedback from consumers of these data. Please send feedback on this dataset to llwebmaster so that your ideas can be incorporated into future datasets. Thanks!

Overview

Off-line intrusion detection datasets were produced as per consensus from the Wisconsin Re-think meeting and the July 2000 Hawaii PI meeting.

LLDOS 1.0 - Scenario One

This is the first attack scenario dataset to be created for DARPA as a part of this effort. It includes a distributed denial-of-service attack run by a novice attacker. Future versions of this and other example scenarios will contain stealthier attack variants.

This attack scenario is carried out over multiple network and audit sessions. These sessions have been grouped into five attack phases, over the course of which the attacker probes the network, breaks into a host by exploiting the Solaris sadmind vulnerability, installs trojan mstream DDoS software, and launches a DDoS attack at an off-site server from the compromised host.
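The phase grouping described above can be sketched as a simple lookup from a session timestamp to the phase in effect. This is only an illustration: the phase names paraphrase this page's description (with the probe step split in two, which is an assumption here), and the start times are placeholder values, not the actual timestamps from the dataset.

```python
from bisect import bisect_right

# Five attack phases, in order. Start times are illustrative placeholders,
# NOT the real timestamps from LLDOS 1.0; phase names paraphrase the page.
PHASES = [
    ("08:00", "probe: scan the network for live hosts"),
    ("09:00", "probe: query hosts for the sadmind service"),
    ("10:00", "break-in: exploit the Solaris sadmind vulnerability"),
    ("11:00", "install: place the mstream DDoS trojan on the host"),
    ("12:00", "launch: fire the DDoS at an off-site server"),
]

def phase_of(session_time: str) -> str:
    """Map an HH:MM session timestamp to the attack phase in effect."""
    starts = [start for start, _ in PHASES]
    i = bisect_right(starts, session_time) - 1
    if i < 0:
        return "background traffic (before the attack begins)"
    return PHASES[i][1]
```

With real labeling information substituted for the placeholder times, the same shape of lookup could tag each network or audit session with its phase.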

  • ADVERSARY: Novice
  • ADVERSARY GOAL: Install components for and carry out a DDoS attack
  • DEFENDER: Naive

Data and labeling information are available for downloading.

LLDOS 2.0.2 - Scenario Two

This is the second attack scenario dataset to be created for DARPA as a part of this effort. It includes a distributed denial-of-service attack run by an attacker who is stealthier than the attacker in the first dataset. The attacker is still considered a novice: the attack is mostly scripted and, despite being somewhat stealthier, is still something any attacker might be able to download and run.

This attack scenario is carried out over multiple network and audit sessions. These sessions have been grouped into five attack phases, over the course of which the attacker probes the network, breaks into a host by exploiting the Solaris sadmind vulnerability, installs trojan mstream DDoS software, and launches a DDoS attack at an off-site server from the compromised host.

  • ADVERSARY: Novice
  • ADVERSARY GOAL: Install components for and carry out a DDoS attack
  • DEFENDER: Naive

Data and labeling information are available for downloading.

Windows NT Attack Dataset

In January 2000, an experiment was run with a higher level of NT auditing than was used in the 1999 Evaluation. Here are the collected traces from that run: one day's traffic and the attacks impinging on the NT machine. High-level labeling information for these is available now.

Note: This day contains data from 08:00 to 14:30 hours; the network sniffers collected data until 17:00.
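When working with this day's traces, records outside the documented windows can be filtered out up front. A minimal sketch, assuming only the two time spans stated above (the per-record timestamp type is a stand-in for whatever your parser produces):

```python
from datetime import time

# Documented coverage for the NT day: attack/audit data 08:00-14:30,
# network sniffer data until 17:00 (per the dataset notes).
AUDIT_WINDOW = (time(8, 0), time(14, 30))
SNIFFER_END = time(17, 0)

def in_audit_window(ts: time) -> bool:
    """True if a record's timestamp falls within the 08:00-14:30 span."""
    start, end = AUDIT_WINDOW
    return start <= ts <= end

def in_sniffer_window(ts: time) -> bool:
    """True if a record falls within the sniffers' collection period."""
    return time(8, 0) <= ts <= SNIFFER_END
```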


Documentation

2000 Dataset One

  • List of hosts and operating systems used in this scenario.

Future Evaluations and Datasets

Plans for future Intrusion Detection Evaluations have been discussed. A significant effort is being made to step back and ensure that evaluations of intrusion detection technology are appropriately designed and scaled to respond to the needs of DARPA and the research community. Early stages of planning were carried out in spring 2000. Those helping with this early planning included DARPA, principal investigators (PIs) in the DARPA Strategic Information Assurance (SIA) program, the Sandia Red Team, and Lincoln Laboratory. A planning workshop, titled the Evaluation Re-think Workshop, was held on 23 and 24 May in Wisconsin. Slides from the Wisconsin meeting are available on a Schafer website.

The outcome of this meeting was that, in the current year, Lincoln Laboratory was tasked to produce much-needed off-line intrusion detection datasets. These datasets will provide researchers with extensive examples of attacks and background traffic. The presentation given at the Hawaii SIA PI meeting states the goals of, and a detailed plan for, producing the 2000 datasets.

https://archive.ll.mit.edu/ideval/