[Snort-sigs] Matching the beginning or end of a (preprocessor) content buffer

Mike Cox mike.cox52 at ...2420...
Fri Nov 9 13:10:04 EST 2012


In this case, we were always talking about rule metrics (at least I
was), and in particular, relative rule metrics (specifically,
comparing multiple rules on the same network data), although rule
metrics and engine performance metrics are *not* mutually
exclusive....

Again, data is data, and the way the engine processes it should be the
same whether it is a big or small pcap, and it shouldn't matter whether
Snort is reading a pcap file or consuming data straight off a network
interface, right?  Or am I wrong?  (I've asked this three times now.)

I understand the value of real-world traffic in evaluating performance,
but if I run a pcap through a Snort instance that has rules x and y
loaded, shouldn't I be able to see a reasonably accurate performance
difference between rule x and rule y?
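
(For concreteness, the kind of run I mean -- just a sketch, with the
config path and pcap name as placeholders:

  snort -c /etc/snort/snort.conf -r test-traffic.pcap

i.e. the same snort.conf with both rules loaded and rule profiling
enabled, and the capture read offline via '-r'.  Then compare the
per-rule numbers for rule x and rule y from that single run.)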

-Mike Cox

On Fri, Nov 9, 2012 at 11:38 AM, Joel Esler <jesler at ...435...> wrote:
> I suppose if your pcap was big enough, sure. For rule metrics. But not for engine performance.
>
> Two different things.
>
> Sent from my iPhone
>
> On Nov 9, 2012, at 12:23 PM, Mike Cox <mike.cox52 at ...2420...> wrote:
>
>> On Fri, Nov 9, 2012 at 9:13 AM, Joel Esler <jesler at ...435...> wrote:
>>> On Nov 9, 2012, at 9:55 AM, Mike Cox <mike.cox52 at ...2420...> wrote:
>>>
>>> So I can probably do some tests when I get the time (thanks for the
>>> responses BTW), but I'm somewhat concerned with the comment, "...it
>>> would be against static pcaps which doesn't test performance.  (Some
>>> people think that looping a pcap through a system a bunch of times
>>> test performance..)"
>>>
>>> Can you elaborate on this?
>>>
>>>
>>> We've heard of people testing performance by taking a big pcap, looping
>>> it through their engine many times, and thinking that's a "real world"
>>> performance test.  (Which, in reality, is a test of how fast your hard
>>> drive can be read ;)
>>>
>>> I understand that using the '-r' option to tell Snort to read a pcap
>>> will not test performance of things like bandwidth, dropped packets,
>>> etc.  However, in a case like this when you want to test *relative*
>>> performance between rules, is Performance Profiling not accurate for
>>> things like avg_ticks, total_ticks, etc.?  Does the engine not load the
>>> rules, build the matching data structures/logic, and process things the
>>> same way when the '-r' option is used?  Let me say again that I am
>>> asking about relative performance numbers between rules, not absolute
>>> numbers necessarily.
>>>
>>>
>>> Yeah…. ehh….
>>>
>>> So..  Here's the deal.  If you are testing a rule against a pcap that you
>>> know is going to fire, you are going to get a performance number.  That
>>> performance number is relative to that pcap (No matter how big your pcap
>>> is).  You can do some tweaking to a rule to get better performance against
>>> that pcap, but there is no accounting for how the rule will actually work in
>>> the real world.
>>>
>>> I'll give you a completely awful example, but I am hoping you will look past
>>> the example and not debate me on the merits of this example ;)  (Not you
>>> Mike, but someone else on the list might feel like being pedantic or
>>> argumentative and do so)
>>>
>>> content:"User-Agent|3a 20|"; content:"badstuff";
>>>
>>> You run this against any static pcap, and you will get "x" number.  Then you
>>> can change the rule to read:
>>>
>>> content:"User-Agent|3a 20|": content:"badstuff|0d 0a|";
>>>
>>> You'll get a better performance number, "y", which is better than "x",
>>> and think "well, I improved the performance of the rule."  And you did.
>>> Against that pcap.  However, in the real world, your fast pattern match
>>> is "User-Agent|3a 20|", which will match on almost every HTTP session
>>> there is.
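>>>
>>> (One way around that, just as a sketch -- "badstuff" is obviously a
>>> placeholder here -- is to explicitly mark the rarer content as the fast
>>> pattern instead of letting the longest content win by default:
>>>
>>>   content:"User-Agent|3a 20|"; content:"badstuff"; fast_pattern;
>>>
>>> so the pattern matcher keys off "badstuff" rather than a string that
>>> shows up in nearly every HTTP request.)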
>>
>> Performance can be measured in many ways, and Snort's Performance
>> Profiling takes into account many of these.
>> Sure, in these examples the fast-pattern matcher will default to the
>> longest string, which is "User-Agent: ".
>> So when you are looking at specific rule performance, I don't see how
>> a rule that has to match on two additional bytes can be more efficient
>> than one that doesn't (if all other things are equal).
>>
>>>
>>> We test against pcaps all day.  Constantly.  Just about every rule we have
>>> in the VRT ruleset has a pcap and exploit associated with it.  But it's no
>>> match for the real thing.
>>
>> Pcaps *are* the real thing.  Again, I'm only talking about relative
>> rule performance, not data speeds, etc.
>>
>>> TL;DR -- You can test all you want against pcaps; at the end of the day,
>>> it's meaningless.  Real-world traffic mix is where it's at.  You want big
>>> packets, small packets, complex packets, simple packets, etc.
>>
>> This is confusing.  Network data is network data, no matter how it is
>> generated ... yet it still sounds like using the '-r' option to tell
>> Snort to read a pcap file is somehow different from telling Snort to
>> process data from an interface ('-i').  Is this right?  Pcap files can
>> contain "big packets, small packets, complex packets, simple packets,
>> etc.," so I'm confused about the apparent disconnect here.
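>>
>> (My assumption -- which is really what I'm asking you to confirm -- is
>> that these two invocations exercise the same detection path, just with
>> different input sources; the paths and interface name are placeholders:
>>
>>   snort -c /etc/snort/snort.conf -r capture.pcap
>>   snort -c /etc/snort/snort.conf -i eth0
>>
>> with the config, preprocessors, and rules identical in both cases.)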
>>
>> To be clear, I'm not talking about comparing performance metrics across
>> multiple pcaps read with the '-r' option.  I'm talking about the metrics
>> generated from a single pcap (or a live feed from an interface) that is
>> evaluated by an engine configured with multiple rules; those rules are
>> the basis for the relative performance comparison.
>>
>> Thanks.
>>
>> -Mike Cox



