[Snort-users] order of matching rules
chris at ...7037...
Tue Oct 22 09:08:14 EDT 2002
> As for snort-ng I'll admit I've not examined their software or site to any
> great degree, but I'd be very cautious of accepting that it's an
> improvement in the general case.
Actually, we made no attempt to make our system appear faster than it is. We
did not measure some virtual time it takes to process a packet - instead we
even sniffed live network traffic to reproduce a realistic load.
> The graphs in the snort-ng site compare a percentage of matches as the
> "number of rules" increases, a reasonable evaluation, but no data is
> provided as to what rules were used. Obviously I can sit down and create my
> own set of rules which penalizes the existing structure of snort heavily,
> and then come up with an "optimization" which improves this case but does
> not improve a realistic setup.
True - but that has not happened. We have aggregated all available snort rules
in a way similar to 'cat *.rules >> big_rules_file.rules'. And we used the
standard rule set that is shipped with Snort-1.8.7.
For our tests, we simply added more and more rules from that file - without
any hand tuning. The order is determined by the position of a rule within its
rule file (as shipped) and by the order of the rule files as produced by the
shell glob in 'cat *.rules'.
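A minimal sketch of that methodology (the directory, file names, and rule text here are illustrative, not the actual Snort-1.8.7 rule set):

```shell
# Illustrative setup: two tiny rule files standing in for the shipped set.
mkdir -p /tmp/ng_demo && cd /tmp/ng_demo
printf 'alert tcp any any -> any 80 (msg:"web a";)\n' > a.rules
printf 'alert tcp any any -> any 21 (msg:"ftp b";)\n' > b.rules

# Aggregate in shell glob (alphabetical) order, as 'cat *.rules' does.
cat *.rules > big_rules_file.rules

# Each test point takes the first N rules in file order -- no hand tuning.
N=1
head -n "$N" big_rules_file.rules > "subset_${N}.rules"
```

Growing N from this prefix gives the "number of rules" axis of the graphs without any reordering of the shipped rules.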
> I'm not trying to accuse snort-ng of "rigging the test" but given the data
> present on the snort-ng site and in their whitepaper, it's hard to decide
> if their improvements are realistic, or if their test is somehow biased to
Thank you, but I suggest you simply download it and try it on your network
traffic. Any information about your results would be appreciated.
> I'm also skeptical of the stability of this code given that the original
> code did not handle ip fragments without a segfault. They've fixed that
> since, but does the additional check degrade the performance? what other
> sanity and stability checks have been skipped to get more impressive graphs?
That was simply a mistake made when we copied parts of the original Snort code
and changed it to call our functions. The checks that had to be inserted add
up to 3 (!) lines of code and cause no performance impact at all. In addition,
we rely on the original Snort code to do all sanity checking, so there is no
omission of sanity and stability checks.
> also love to see some memory consumption comparisons. If the new rule
> processor consumes significantly more memory in some scenarios this may
> wind up reducing the amount of memory available for disk buffering, causing
> a detriment to the logging end and possibly degrading the total performance
> of the system to be worse than the original.
This is a good point; we obviously consume more memory. Such graphs will be
included in the next version of the paper.
> I'd also suspect that the added log output of matching every packet
> against every rule will generate a considerably greater amount of log
> output, resulting in dramatically worse performance when using the default
> snort ruleset, negating all the performance gain of the faster matching
> code until a new ruleset which is oriented towards the new snort-ng is
I really doubt that. Usually, the vast majority of packets should generate no
alarms at all, and it is often the case that a single packet matches only one
rule. Imho, the fact that Snort 2.0 changed this is clearly an indication