[Snort-users] Performance again
mkettler at ...4108...
Tue Dec 23 09:42:00 EST 2003
At 11:55 AM 12/23/2003, Edin Dizdarevic wrote:
>AFAIK there are two buffers: store and hold, at least according to Mr.
>Stevens. This may not apply to Linux. Anyway, if we use Phil Wood's
>libpcap it would be possible to virtually extend the buffer size. So
>with that countermeasure we give Snort more time to finish the tasks
>pending. Correct so far?
Correct... although the extended version still imposes a fixed limit on the
amount of data that can be buffered.. Phil's version is why I referred to it
as a "fixed" limit, instead of just saying 2 packets. You can reconfigure
the limit, but there's still a fixed cap on the amount of data you can store.
Increasing the buffer size (using Phil's libpcap) does get you increased
time to process an individual packet before drop occurs. The only thing to
bear in mind is that this only extends your worst-case. Your average rate
of processing still needs to be able to keep up with the inflow of data, or
you'll over-run eventually no matter how big the queue is.
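The point above can be sketched numerically. This is a toy simulation (the function name and rates are my own, not anything from Snort or libpcap): with a fixed-size buffer, if the arrival rate exceeds the average processing rate, a bigger buffer only postpones the first drop, it never prevents it.

```python
def time_to_first_drop(buffer_size, arrival_rate, service_rate):
    """Simulate a fixed-size capture buffer: packets arrive at
    arrival_rate per tick and are drained at service_rate per tick.
    Returns the tick of the first drop, or None if the buffer keeps up."""
    backlog = 0.0
    for tick in range(1, 100_000):
        backlog = max(backlog + arrival_rate - service_rate, 0.0)
        if backlog > buffer_size:
            return tick  # buffer overran: a packet would be dropped here
    return None

# Ten-fold larger buffer, same overload: the drop just happens later.
# Only service_rate >= arrival_rate avoids drops entirely.
```

With arrival 5/tick and service 4/tick, a buffer of 10 overruns at tick 11 and a buffer of 100 at tick 101; with service 5/tick and arrival 4/tick there is no drop at all.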
>But if we go a step further, there are also some Snort parameters which
>influence the amount of the time Snort has for the individual tasks
>themselves. If I give the preprocessors more of the machine's (endless)
>memory I may remove the bottleneck there. On the other side the libpcap
>"wants" some memory too and the system itself and so on. Sure, "Throw
>memory and/or money on it"-approach will almost always solve the
>problems one may have, but in this particular case I would prefer choosing
>another one ;) .
Fair enough.. I can tell you from experience with 2.0.x that the
spp_portscan2 and spp_conversation preprocessors are by FAR the heaviest
resource users in all of snort 2.0.x... snort 2.1.x has replaced them with a
flow-based system.
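For concreteness, the relevant snort.conf lines look roughly like the following. This is an illustrative sketch only; the exact option names and defaults vary between 2.0.x and 2.1.x, so check the README for your version before copying anything:

```
# snort 2.0.x style (heavy): conversation + portscan2 preprocessors
# preprocessor conversation: allowed_ip_protocols all, timeout 60
# preprocessor portscan2: scanners_max 3200, targets_max 5000

# snort 2.1.x style: the flow preprocessor replaces them
# preprocessor flow: stats_interval 0 hash 2
```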
Going back to your original post, you asked about the "biggest factors" in performance:
>1. Many open sessions
>2. Big packets
>3. System load
>5. Alert Count
None of the above is really the "biggest factor", although all are factors
to some degree. Much more of a factor than the number of sessions or the
size of packets is going to be what RTNs (rule tree nodes) the packets need
to be evaluated against. Snort does a "first pass" elimination of rules to
run against a packet based on the source IP, dest IP, protocol, source port
and dest port specified in the rules. This "first pass" is very much faster
than the content searches that follow it.
A packet that winds up matching a combination of RTNs that has a long
list of rules with lots of content searches is going to be a heavy hit in
processing time. On the other hand a packet which winds up matching none of
the combinations is going to have very little processing time.
Thus really the type of packets matters quite a lot, and which packet types
are 'bad' depends a lot on how your ruleset is constructed.. if every
packet to an HTTP server has to be evaluated against 800 content rules, that
hurts. If HTTP_SERVERS is set to "any" and EXTERNAL_NET is set to "any",
that hurts even more.
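The two-phase matching described above can be sketched like this. The dictionary keys and field names here are my own simplification (real RTNs key on the full IP/port/protocol tuple, and the content search uses optimized multi-pattern matchers, not substring tests), but the shape is the same: a cheap header lookup picks a candidate rule list, and only those rules' expensive content options run against the payload.

```python
def build_rtn_index(rules):
    """Group rules by a header tuple (the 'first pass' criteria).
    Simplified to (proto, dst_port) for illustration."""
    index = {}
    for rule in rules:
        index.setdefault((rule["proto"], rule["dst_port"]), []).append(rule)
    return index

def match_packet(index, packet):
    # First pass: cheap header lookup eliminates most rules outright.
    candidates = index.get((packet["proto"], packet["dst_port"]), [])
    # Second pass: expensive payload searches, only on the candidates.
    return [r["sid"] for r in candidates if r["content"] in packet["payload"]]
```

A packet whose header tuple matches no bucket skips the content phase entirely, which is why "matching none of the combinations" is so cheap, and why HTTP_SERVERS/EXTERNAL_NET set to "any" is so expensive: every packet lands in the big buckets.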
>I am simply trying to understand how everything is working together as one
Understood, and I'm just trying to explain the parts I understand. Your
post seemed to assume that when a new packet arrived, the one being
processed got dropped in the middle of some part of snort's processing..
it doesn't. The actual drop mechanism winds up dropping the oldest
packet in the queue that snort hasn't started processing yet.
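A toy model of that drop behavior (the class and method names are mine, purely for illustration): the packet currently being processed is never interrupted; when the buffer is full, the oldest packet still waiting in the queue is the one discarded to make room for the new arrival.

```python
from collections import deque

class CaptureBuffer:
    """Sketch of drop-oldest-unprocessed behavior in a bounded queue."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pending = deque()   # packets snort hasn't started on yet
        self.dropped = []

    def arrive(self, pkt):
        if len(self.pending) == self.capacity:
            # Buffer full: discard the oldest *queued* packet, not the
            # one currently mid-processing (that one already left pending).
            self.dropped.append(self.pending.popleft())
        self.pending.append(pkt)

    def process_one(self):
        return self.pending.popleft() if self.pending else None
```

So with capacity 2, after packets 1, 2, 3 arrive back-to-back, packet 1 is the casualty and processing resumes with packet 2.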
>The only information
>source I have at the moment is the performance monitor.