[Snort-users] Snort against DARPA Dataset
bsravanin at ...11827...
Fri Jun 29 11:22:53 EDT 2012
I am a grad student trying to play around with Snort. I apologize in
advance for the long mail.
To figure out tuning, I am evaluating Snort (2.9.x) against the very old
1998 and 1999 DARPA datasets. In my configuration, I have turned on ALL
rules, including those that are disabled (commented out) by default. I have
enabled the sfportscan preprocessor because port scans/sweeps are a major
part of the attacks (http://www.ll.mit.edu/mission/communications/ist/corpora/ideval/docs/attacks.html).
I have a few doubts regarding my methods and findings.
1. Portscan.log: The default Snort logs do not contain sfportscan alerts.
Is this by design, or can this behavior be changed? I am using the
preprocessor's logfile option for portscan-related attacks. How reliable
are the port ranges and open ports in this log? Do they identify all
scanned ports or only a few?
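For reference, this is roughly the sfportscan stanza I am using in snort.conf (the sense_level and scan_type values are just the ones I picked, not recommendations):

```
preprocessor sfportscan: proto { all } \
                         scan_type { all } \
                         sense_level { low } \
                         logfile { portscan.log }
```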
2. Detection rates: I am using the 3-tuple (date, source IP, destination
IP) as matching criteria for portscan-related attacks (portscan.log), and
the 5-tuple (date, source IP, source port, destination IP, destination
port) as the matching criterion for all other alerts. I see more than 30%
of the labeled attacks going unidentified by Snort. Are these matching
criteria correct, or are they too liberal or too stringent?
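In case it helps to see exactly what I mean by 5-tuple matching, here is a minimal sketch of my scoring logic (the field names are my own; they are not the DARPA list-file format):

```python
def detection_rate(labels, alerts):
    """Fraction of labeled attacks matched by at least one Snort alert
    on the (date, src_ip, src_port, dst_ip, dst_port) 5-tuple."""
    # Build a set of alert keys for O(1) membership tests.
    alert_keys = {(a["date"], a["src"], a["sport"], a["dst"], a["dport"])
                  for a in alerts}
    hits = sum(1 for lab in labels
               if (lab["date"], lab["src"], lab["sport"],
                   lab["dst"], lab["dport"]) in alert_keys)
    return hits / len(labels) if labels else 0.0
```

For portscan-related attacks I do the same thing with the 3-tuple (date, src, dst) instead.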
3. Ruleset: How different are the Snort subscriber ruleset, the rules
fetched by PulledPork, and the Emerging Threats ruleset? Would the
detection rates improve if I used all the rulesets together? (As I
understand it, Snort ignores older or duplicate rules.) In general, are
older signatures (from 1998/99) ever removed from these rulesets, or only
replaced by newer signatures?
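To gauge the overlap myself, I have been counting SIDs that appear in more than one ruleset directory with plain shell tools. A self-contained sketch (the /tmp paths and rules are throwaway examples, not real rulesets):

```shell
# Set up two tiny example rule files sharing one SID.
mkdir -p /tmp/rs_demo
printf 'alert tcp any any -> any 80 (msg:"a"; sid:1000001;)\n' \
    > /tmp/rs_demo/a.rules
printf 'alert tcp any any -> any 80 (msg:"b"; sid:1000001;)\nalert tcp any any -> any 22 (msg:"c"; sid:1000002;)\n' \
    > /tmp/rs_demo/b.rules

# Extract sid:N tokens and print only the duplicated ones.
grep -hoE 'sid:[0-9]+' /tmp/rs_demo/*.rules | sort | uniq -d
```

Pointing the grep at the real ruleset directories shows how many signatures the sets share.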
4. Target-based IDS: Snort preprocessors, especially stream5,
(understandably) don't seem to explicitly support very old operating
systems. Are there any guidelines for configuring such cases?
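What I have tried so far is falling back to the "first" reassembly policy, and binding a per-subnet policy where I know the simulated hosts' OS family (the subnet below is just an example from my setup, not a recommendation):

```
preprocessor stream5_global: track_tcp yes, track_udp yes
preprocessor stream5_tcp: bind_to 172.16.112.0/24, policy linux
preprocessor stream5_tcp: policy first
```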
5. Would you suggest any config changes for higher detection rates?
Ideally I would like to start from a 100% detection rate and tune down
from that point.
6. Is it fair to test any IDS against such old datasets? Are there any
newer labeled datasets available to the public? What do, say, Snort
developers use to test against regressions?