[Snort-sigs] Matching the beginning or end of a (preprocessor) content buffer

Joel Esler jesler at ...435...
Fri Nov 9 10:13:20 EST 2012

On Nov 9, 2012, at 9:55 AM, Mike Cox <mike.cox52 at ...2420...> wrote:

> So I can probably do some tests when I get the time (thanks for the
> responses BTW), but I'm somewhat concerned with the comment, "...it
> would be against static pcaps which doesn't test performance.  (Some
> people think that looping a pcap through a system a bunch of times
> test performance..)"
> Can you elaborate on this?

We've heard of people testing performance by taking a big pcap and looping it through their engine many times and thinking that's a "real world" performance test.  (Which in reality is a test of how fast your hard drive can be read ;)
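For reference, the looping setup in question is usually something like tcpreplay's --loop option (tcpreplay and the interface name here are just an assumed example; other replay tools work similarly):

```
# Replay big.pcap out eth1 100 times. Once the file is in the page
# cache, this mostly measures how fast the capture can be re-read,
# not how the engine behaves against varied live traffic.
tcpreplay --loop=100 --intf1=eth1 big.pcap
```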

> I understand that using the '-r' option to tell Snort to read a pcap
> will not test performance of things like bandwidth, dropped packets,
> etc.  However, in a case like this when you want to test *relative*
> performance between rules, is Performance Profiling not accurate for
> thing like avg_ticks, total_ticks, etc.?  Does the engine not load the
> rules, build the matching data structures/logic, and process thing the
> same way when the '-r' option is used?  Let me say again that I am
> asking about relative performance numbers between rules, not absolute
> numbers necessarily.
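For completeness, the read-mode run being described looks like the following (paths are placeholders):

```
# Read packets from a capture file instead of sniffing a live interface
snort -c /etc/snort/snort.conf -r traffic.pcap
```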

Yeah…. ehh….

So..  Here's the deal.  If you are testing a rule against a pcap that you know is going to fire, you are going to get a performance number.  That performance number is relative to that pcap (No matter how big your pcap is).  You can do some tweaking to a rule to get better performance against that pcap, but there is no accounting for how the rule will actually work in the real world.
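As an aside, the avg_ticks/total_ticks numbers Mike mentions come from Snort's rule profiling, enabled in snort.conf with something like this (the print count and sort key here are just example values):

```
# snort.conf -- enable per-rule performance profiling;
# print the top 10 rules sorted by average ticks per check
config profile_rules: print 10, sort avg_ticks
```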

I'll give you a completely awful example, but I am hoping you will look past the example and not debate me on the merits of this example ;)  (Not you Mike, but someone else on the list might feel like being pedantic or argumentative and do so)

content:"User-Agent|3a 20|"; content:"badstuff"; 

You run this against any static pcap, and you will get "x" number.  Then you can change the rule to read:

content:"User-Agent|3a 20|"; content:"badstuff|0d 0a|"; 

You'll get a better performance number, "y", which is better than "x", and think "well, I improved the performance of the rule."  And you did.  Against that pcap.  However, in the real world, your fast pattern match is "User-Agent|3a 20|", which will match on almost every HTTP session there is.
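(A sketch of one way around that, not a full fix for the example above: Snort's fast_pattern modifier lets you tell the engine which content to hand to the multi-pattern matcher, so the rare string, rather than the ubiquitous User-Agent header, gates rule evaluation. The modifier applies to the most recently specified content:)

```
content:"User-Agent|3a 20|"; content:"badstuff|0d 0a|"; fast_pattern;
```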

We test against pcaps all day.  Constantly.  Just about every rule we have in the VRT ruleset has a pcap and exploit associated with it.  But it's no match for the real thing.

TL;DR -- You can test all you want against pcaps, but at the end of the day it's meaningless.  Real-world traffic mix is where it's at.  You want big packets, small packets, complex packets, simple packets, etc.

Joel Esler
Senior Research Engineer, VRT
OpenSource Community Manager
