[Snort-devel] Re: [Snort-users] snort 1.8 changes - priorities & whatnot

Max Vision vision at ...195...
Tue Apr 17 09:51:47 EDT 2001

First off, great work on these improvements!  I have some constructive 
criticism of the current scheme but I also understand that you meant for 
this to be subject to change.

Over the past week I have made significant changes to arachNIDS in 
development which I hope to push into production in the next day or 
so.  One relevant improvement is the addition of the "priority" field.  I 
added this field so that arachNIDS could create a more intelligent 
signature export that would be sorted such that more detailed and specific 
rules would be listed first, and the more generic catch-all rules would be 
listed last.  I spent a good amount of thought on the problem, and came up 
with five levels of granularity that seem to work pretty well for 
prioritizing the rules.  The field is not meant to be included in the 
signature export, but is instead used by the database to sort the 
dynamically generated signatures during export.  I will detail this in a 
separate email when I push the updates.
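The export ordering described above amounts to a simple sort on that field. As a purely hypothetical sketch (the field and rule names here are illustrative, not the actual arachNIDS schema; I am assuming lower numbers mean more specific):

```python
# Hypothetical sketch of priority-sorted signature export: each rule
# carries a "priority" field (here 1 = most detailed/specific,
# 5 = generic catch-all), and the export sorts on it so specific
# rules are emitted before generic ones.
rules = [
    {"name": "generic-cgi-probe", "priority": 5},
    {"name": "phf-exploit-content", "priority": 1},
    {"name": "cgi-bin-access", "priority": 3},
]

def export_order(rules):
    """Return rules sorted most-specific-first for signature export."""
    return sorted(rules, key=lambda r: r["priority"])

for r in export_order(rules):
    print(r["name"])
```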

Since I'm talking about recent arachNIDS updates I should also mention that 
I have added the "TargetAffected" field, which allows for sorting based on 
which rules affect your operating environment.  Other improvements 
include multiple sets of content and uricontent rules, grouping, complete 
concordance with BlackICE, etc.  All of these fields are populated.

Back to the subject though... this new attack classification system that is 
mentioned, based on IDMEF from the IDWG of the IETF, has some issues.  Take 
the classic "phf attack" example.  In a signature that watches for the 
uricontent "phf", we can't really say that it is "attempted-recon".  The 
schema proposed/implemented should instead group the phf attack as one of 
the following:

   attempted-user,Attempted User Privilege Gain,8
   unsuccessful-user,Unsuccessful User Privilege Gain,7
   successful-user,Successful User Privilege Gain,9
   attempted-admin,Attempted Administrator Privilege Gain,10
   successful-admin,Successful Administrator Privilege Gain,11

Since this particular signature is only alerting on the attempt, and not the
success or failure response, we can rule out several and reduce this to:

   attempted-user,Attempted User Privilege Gain,8
   attempted-admin,Attempted Administrator Privilege Gain,10

But we cannot know which unless we know more about the target environment 
(in many cases the phf vulnerability would yield the userid of the webserver, 
which is usually "nobody" or "www", but can also be root.)  But what if the 
server doesn't even have phf installed?  In such a case this probe might be 
misinterpreted to be attempted-recon ("is there a phf script there?") - 
though that would be an incorrect classification of the vulnerability.  The 
attacker doesn't really care whether phf is merely present; they care whether 
they can access it to gain privilege.  The pure act of knowing that phf is 
present is not, in and of itself, a vulnerability.  I didn't look through 
all of the rules listed in CVS, but most of the ones I saw had the wrong 
label (recon instead of access).

If there were a rule to specifically detect a "GET /cgi-bin/phf HTTP/1.0" 
(note the specific lack of exploit content after the phf uri), then it 
could be argued that the intrusion event is not the phf _attack_, but the 
phf _probe_, thus earning the label attempted-recon.  Maybe a content rule 
that specifically watches for "phf " (trailing space), or a negation rule 
would seal this.  A separate rule would then need to be made to detect the 
actual attack, which would earn a more severe attack classification, such 
as attempted-user or attempted-admin.
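To illustrate, the probe/attack split might look something like this (purely hypothetical signatures in the style of the rule quoted at the end of this message; the content strings and classtypes are my assumptions, not rules from CVS):

```
# Hypothetical pair: the bare probe (no exploit payload after the URI,
# trailing space) gets the milder classification, the actual attack
# a more severe one:
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS 80 (msg:"WEB-CGI phf probe"; \
flags: A+; uricontent:"/phf "; nocase; classtype:attempted-recon;)
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS 80 (msg:"WEB-CGI phf attack"; \
flags: A+; uricontent:"/phf?"; nocase; classtype:attempted-user;)
```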

Any rule that can be classified as attempted-user or attempted-admin could 
also, as a subset of the attacker's ability, be classified as 
attempted-recon if that is all the attacker chooses to do. If the attacker 
chooses to send "ls -alF" instead of "/usr/X11/bin/xterm -display foo:0", 
then that could be mistaken for attempted-recon, but would be inaccurately 
labeled...  the phf attack is actually attempted-user or attempted-admin. 
That is the "impact" of the "phf vulnerability".

That brings us to "impact", a field that I have intended to add for quite 
a while, and will now do so sooner rather than later (now that there is code 
support for reporting classification/priority in the default snort 
output).  I don't like the proposed settings offered in IDMEF (more on why 
below).  I was thinking of adding a field for "impact" and having the 
following options:

   system integrity (executing code, shell access, etc)
   confidentiality  (ability to read files/data)
   accountability   (disabling logging, bouncing attacks, etc)
   availability     (denial of service attacks)
   intelligence     (information gathering/recon short of confidentiality)
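Mapped onto the config classification syntax that snort 1.8 introduces (shown in Brian's example at the end of this message), these five impact levels might look like the following; the short names and priority numbers here are only illustrative:

```
# Hypothetical "impact" classifications using snort 1.8's
# config classification: shortname,description,priority syntax:
config classification: system-integrity,System Integrity Violation,1
config classification: confidentiality,Confidentiality Violation,2
config classification: accountability,Accountability Violation,3
config classification: availability,Availability Violation,4
config classification: intelligence,Intelligence Gathering,5
```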

This needs work though, which is why I haven't implemented it yet.  For 
example, many DOS attacks might also be exploitable, violating system 
integrity.  Or vice versa, a failed exploit could be a DOS, such as an 
exploit run against a target that is not vulnerable to compromise, but does 
crash (xntpd, etc).  How can we judge whether the intention of the attacker 
was to crash the service, or exploit it?  I think this area desperately 
needs discussion and planning, and that the IDMEF list is the wrong level 
of granularity to use in classification of intrusion events. (but maybe I'm 
wrong, let's talk :)


Here is my nitpicking of the IDMEF classifications - I have added the 
number of occurrences that each is used so far in the Snort rules in CVS 
before each one:

(2) not-suspicious,Not Suspicious Traffic,0
   If an event is not suspicious, then why is your IDS alerting on it,
   and how can the intrusion event even be considered an intrusion?

(2) unknown,Unknown Traffic,1
   I am in the hard-AI camp, I don't believe in "unknown". What is this? :)

(53) bad-unknown,Potentially Bad Traffic,2
   Here we go again with "unknown" - also, aren't all IDS events "bad"?

(440) attempted-recon,Attempted Information Leak,3
(0) successful-recon-limited,Information Leak,4

(0) successful-recon-largescale,Large Scale Information Leak,5
   This doesn't apply to a single intrusion event, this is a more broad
   reporting function and should probably not be tied to a signature.

(53) attempted-dos,Attempted Denial of Service,6
(0) successful-dos,Denial of Service,7
   How are we supposed to differentiate these?

(78) attempted-user,Attempted User Privilege Gain,8
(6) unsuccessful-user,Unsuccessful User Privilege Gain,7
(0) successful-user,Successful User Privilege Gain,9

(99) attempted-admin,Attempted Administrator Privilege Gain,10
(0) successful-admin,Successful Administrator Privilege Gain,11
   Ok. Though, to be consistent, "unsuccessful-admin" is missing..

I think that the five levels of "impact" that I intended to add cover the 
above classifications pretty well, though now that I've listed things out 
in email I'm starting to see the value of having a different classification 
for attempt, success, and failure.  Though one could quickly bloat up a 
ruleset by creating rules for the success and failure of each and every 
probe or exploit...  hmm. I wonder what that might be like - to have 
three rules for each intrusion event, one for the probe/exploit, one for a 
success response, and one for a failure response.  There are already some 
events/signatures indicating failed logins, etc.
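Such a three-rule set for a single intrusion event might be sketched as follows (entirely hypothetical; the response rules watch server-to-client traffic, and the content strings are my assumptions about what a success or failure response could contain):

```
# Hypothetical attempt/success/failure triple for one event (phf):
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS 80 (msg:"phf attempt"; \
flags: A+; uricontent:"/phf"; nocase; classtype:attempted-user;)
alert tcp $HTTP_SERVERS 80 -> $EXTERNAL_NET any (msg:"phf success"; \
flags: A+; content:"uid="; classtype:successful-user;)
alert tcp $HTTP_SERVERS 80 -> $EXTERNAL_NET any (msg:"phf failure"; \
flags: A+; content:"404 Not Found"; classtype:unsuccessful-user;)
```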

I don't understand why people would spend the enormous amounts of time and 
effort to effectively create duplicate work.  Each new signature that is 
added to those flat text file lists will soon (if not already) have a 
counterpart in arachNIDS - though the arachNIDS entry will actually be a 
rich/detailed description with packet captures and details about each 
aspect of the intrusion event, and the signature will be dynamically 
created from the information and exported along with hundreds of others (I 
do not enter or create signatures, they are synthesized from the component 
data).  Why not contribute to this process at a meaningful level by 
contributing full entries to the database, instead of doing all of this 
work in a text file that *is* going to end up 
obsolete/duplicate?  arachNIDS is also likely to be the only place that 
people writing signatures will see *credit* for our work.   What can I do 
to help make arachNIDS your first stop when documenting/contributing new 
intrusion events? (aside from documentation, which is on its way, really :)


At 02:24 AM 4/17/2001 -0400, Brian Caswell wrote:
>Just a bit of warning, there are a number of changes that will be coming with
>snort 1.8.  I have added rule classifications and priorities to snort 1.8
>(with huge amounts of help from Andrew B. and Marty).  I have added a
>classification to over 700 rules (available via CVS).
>See below for an example how it works.  If you have any further questions,
>feel free to e-mail myself or the snort-devel mailing list.
>The following rules:
>config classification: attempted-recon,Attempted Information Leak,3
>alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS 80 (msg:"WEB-CGI phf access"; \
>flags: A+; uricontent:"/phf"; nocase; reference:arachnids,128;   \
>reference:cve,CVE-1999-0067;  classtype:attempted-recon;)
>Gives the output of:
>[**] WEB-CGI phf access [**]
>[Classification: Attempted Information Leak] [Priority: 3]
>04/17-02:04:33.861311 ->
>TCP TTL:64 TOS:0x0 ID:25894 IpLen:20 DgmLen:69 DF
>***AP*** Seq: 0x48584C02  Ack: 0x394F8C32  Win: 0x43E0  TcpLen: 32
>TCP Options (3) => NOP NOP TS: 705159029 517149464
>[Xref => http://www.whitehats.com/info/128]
>[Xref => http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-1999-0067]
>For more examples, please check
>Brian Caswell
>The MITRE Corporation
