[Snort-users] Performance again

Edin Dizdarevic Edin.Dizdarevic at ...7509...
Tue Dec 23 11:10:00 EST 2003


Matt Kettler wrote:
> At 11:55 AM 12/23/2003, Edin Dizdarevic wrote:
[...]
> 
> Correct... although the extended version is still a fixed limit on
> the amount of data that can be buffered. Phil's version is why I
> referred to it as a "fixed" limit, instead of just saying 2 packets.
> You can reconfigure the limit, but there's still a fixed limit to the
> amount of data you can store.

Sure, resources are endless... ;)

> 
> Increasing the buffer size (using Phil's libpcap) does get you
> increased time to process an individual packet before drop occurs.
> The only thing to bear in mind is that this only extends your
> worst-case. Your average rate of processing still needs to be able to
> keep up with the inflow of data, or you'll over-run eventually no
> matter how big the queue is.

Got that too...
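
Just to put a number on the "eventually" for myself (all values below
are made up, nothing measured): with a fixed buffer, a bigger ring only
buys time, roughly the buffer size divided by the difference between
inflow and processing rate.

/* Back-of-the-envelope only, all numbers are made up:
 * a fixed buffer of buf_bytes survives roughly
 * buf_bytes / (inflow - processing) seconds once inflow > processing. */
#include <stdio.h>

int main(void)
{
    double buf_bytes  = 32.0 * 1024 * 1024;  /* e.g. a 32 MB ring buffer    */
    double inflow     = 50.0 * 1024 * 1024;  /* bytes/s arriving on the NIC */
    double processing = 45.0 * 1024 * 1024;  /* bytes/s snort keeps up with */

    if (inflow <= processing)
        printf("no overrun: the average rate keeps up\n");
    else
        printf("buffer overruns after ~%.1f seconds\n",
               buf_bytes / (inflow - processing));
    return 0;
}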

> Fair enough... I can tell you from experience with 2.0.x that the
> spp_portscan2 and spp_conversation preprocessors are by FAR the
> heaviest resource users in all of snort 2.0.x... snort 2.1.x has
> replaced them with a flow-based system.
> 
> Going back to your original post, you asked about "biggest factors"
> in snort load:
> 
> >> 1. Many open sessions
> >> 2. Big packets
> >> 3. System load
> >> 4. Internals
> >> 5. Alert Count
> 
> None of the above is really the "biggest factor", although all are 
> contributors.
> 
Alright,
> 
> A much bigger factor than the number of sessions or the size of the
> packets is which RTNs (rule tree nodes) the packets need to be
> processed against.
> 
> Snort does a "first pass" elimination of rules to run against a
> packet based on the source IP, dest IP, protocol, source port and dest
> port specified in the rules. This "first pass" is very much faster
> than the content searching.
> 
> A packet that winds up matching a combination of RTNs that has a long
> list of rules with lots of content searches is going to be a heavy
> hit in processing time. On the other hand, a packet which winds up
> matching none of the combinations is going to have very little
> processing time.

Preselection is clever but:
[...]
> if every packet to an HTTP server has to be evaluated against 800
> content rules, that hurts. If HTTP_SERVERS is set to "any" and
> EXTERNAL_NET is set to "any", that hurts even more.

Misconfiguration, tell me about it... :|
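
The way I picture that first pass is something like this toy sketch (my
own illustration, nothing to do with Snort's real RTN/OTN structures):
a cheap header comparison throws a packet out of most rule groups
before any content search runs, and setting the variables to "any"
means the expensive second pass runs for everything.

/* Toy illustration of the two-pass idea only, not Snort's real code. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct rule {
    uint16_t    dst_port;   /* cheap header criterion, checked first */
    const char *content;    /* expensive payload search, done second */
};

static const struct rule rules[] = {
    { 80, "cmd.exe"     },
    { 80, "/etc/passwd" },
    { 21, "SITE EXEC"   },
};

static int match(uint16_t dst_port, const char *payload)
{
    int hits = 0;
    size_t i;

    for (i = 0; i < sizeof rules / sizeof rules[0]; i++) {
        if (rules[i].dst_port != dst_port)
            continue;                          /* eliminated cheaply     */
        if (strstr(payload, rules[i].content)) /* costly search only now */
            hits++;
    }
    return hits;
}

int main(void)
{
    printf("%d rule(s) matched\n",
           match(80, "GET /scripts/..%255c../cmd.exe HTTP/1.0"));
    return 0;
}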

>> I am simply trying to understand how everything is working together
>> as one complex system.
> 
> 
> Understood, and I'm just trying to explain the parts I understand.

That's good... :)

> Your post had the conception that when a new packet arrived, the one
> that was being processed got dropped in the middle of snort
> processing it... it doesn't. The actual drop mechanism winds up
> dropping the oldest packet in the queue that snort hasn't started
> processing yet.

Another useful piece of information. Snort will never drop a packet
itself; it is always the BPF or LSF layer, respectively, and libpcap
where packets are being dropped, simply due to the timeouts which the
BPF device has bound to its buffers (which in turn may be influenced by
the corresponding libpcap application).
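
Which also means the drop counters live down there, not in Snort. A
minimal sketch (not Snort code; "eth0", the snaplen and the timeout are
placeholders I picked) of where one would see them via pcap_stats() --
the fourth argument to pcap_open_live() is the read timeout that gets
handed down to the BPF device.

/* Minimal sketch, not Snort code.  ps_drop counts what the BPF/LSF
 * buffer threw away before the application ever saw it. */
#include <pcap.h>
#include <stdio.h>

static void handler(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *bytes)
{
    /* stand-in for the real per-packet work */
    (void)user; (void)h; (void)bytes;
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    struct pcap_stat st;
    pcap_t *p;

    /* 500 ms here is the read timeout handed down to the BPF device */
    p = pcap_open_live("eth0", 1514, 1, 500, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* process a batch; anything arriving while handler() is busy is
     * buffered (and possibly dropped) by the kernel, not by snort */
    if (pcap_dispatch(p, 1000, handler, NULL) == -1)
        fprintf(stderr, "pcap_dispatch: %s\n", pcap_geterr(p));

    if (pcap_stats(p, &st) == 0)
        printf("received: %u  dropped before the application saw them: %u\n",
               st.ps_recv, st.ps_drop);

    pcap_close(p);
    return 0;
}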

From my point of view, I think I am a step further now.

But: if a packet that is needed, e.g., for defragmentation or for
reassembling the TCP stream is dropped from the queue, I either have to
throw away the complete stream/packet or my reassembled content may
have some holes...
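
To make that worry concrete to myself, another toy sketch (my own,
nothing to do with stream4's internals): if one segment is missing from
the reassembled stream, a content match that spans the hole simply
never fires.

/* Toy sketch of a reassembly hole, not stream4's real logic:
 * segment 2 was dropped, so "cmd.exe" spanning the gap never matches. */
#include <stdio.h>
#include <string.h>

struct segment {
    int         present;    /* 0 = dropped before reassembly */
    const char *data;
};

int main(void)
{
    const struct segment stream[] = {
        { 1, "GET /scripts/cm" },
        { 0, "d.ex"            },   /* this packet was dropped */
        { 1, "e HTTP/1.0\r\n"  },
    };
    char reassembled[128] = "";
    size_t i;

    for (i = 0; i < sizeof stream / sizeof stream[0]; i++)
        if (stream[i].present)
            strcat(reassembled, stream[i].data);
        /* else: either give up on the whole stream or accept a hole */

    printf("match for \"cmd.exe\": %s\n",
           strstr(reassembled, "cmd.exe") ? "yes" : "no  (hole!)");
    return 0;
}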

I will probably come up with a few new questions later on. I have to
think about it a bit now... ;)

Thanks so far and best regards,
Edin


-- 
Edin Dizdarevic



