[Snort-users] Ways to optimize throughput

Austad, Jay austad at ...432...
Tue Nov 14 14:44:38 EST 2000


> So I was wondering if something like that could be done to Snort to
> allow it to process packets faster. If I have a rule that looks for
> "filename=\"hello.there\"" in an SMTP packet, how about getting Snort
> just to look for SMTP packets (which is easier), then look for the
> exact match afterwards? Would that actually speed things up?

I'm not exactly sure how this works, but I was reading something about
Cisco's CBAC last night: to speed up checking packets against ACLs, it
turns the ACLs into a hash table and compares packets against that.
Supposedly this drastically increases performance compared with checking
each packet against every ACL in turn.  Could something like this be
done with Snort?
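The idea, as I understand it, is just prefiltering via a hash lookup.  A
rough sketch (this is NOT Snort's actual internals - the rule format here
is made up for illustration):

```python
# Sketch of hash-based rule prefiltering: bucket rules by (protocol,
# destination port) so each packet is only checked against the few rules
# that could possibly match, instead of the whole ruleset.
# The rule/packet dicts here are hypothetical, not Snort's real structures.
from collections import defaultdict

rules = [
    {"proto": "tcp", "dport": 25, "content": 'filename="hello.there"'},
    {"proto": "tcp", "dport": 80, "content": "cmd.exe"},
    {"proto": "udp", "dport": 53, "content": "version.bind"},
]

buckets = defaultdict(list)
for rule in rules:
    buckets[(rule["proto"], rule["dport"])].append(rule)

def candidate_rules(packet):
    # One O(1) hash lookup replaces a linear scan over every rule.
    return buckets.get((packet["proto"], packet["dport"]), [])

pkt = {"proto": "tcp", "dport": 25,
       "payload": 'Content-Type: x; filename="hello.there"'}
matches = [r for r in candidate_rules(pkt) if r["content"] in pkt["payload"]]
```

The expensive payload search only ever runs on the bucket's handful of
candidates, which is presumably where the "drastic" speedup comes from.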

For the time being, you could also use two separate Snort boxes, with
half the rules on each one to cut the load down.  I have a lot of rules,
and I plan on putting a sniffer on a link that does about 40Mbit
sustained throughout the day.  I'll probably have to get a couple of
boxes to split the load up.  My PIII 733 gets its CPU pegged at around
15Mbit of traffic, but I guess it's also running the database that it
logs to.
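Splitting the ruleset could be as crude as dealing the rule lines out to
the two boxes.  A quick sketch (the file contents below are invented, and
a real split would need more care with preprocessor settings):

```python
# Deal detection rules alternately to two sensor configs, while copying
# non-rule lines (variables, comments, preprocessor directives) to both,
# since both sensors need them.
def split_rules(lines):
    out = ([], [])
    rule_count = 0
    for line in lines:
        stripped = line.strip()
        if stripped.startswith(("alert", "log", "pass")):
            out[rule_count % 2].append(line)
            rule_count += 1
        else:
            out[0].append(line)
            out[1].append(line)
    return out

lines = [
    "var HOME_NET 10.0.0.0/8",
    'alert tcp any any -> $HOME_NET 25 (msg:"rule 1";)',
    'alert tcp any any -> $HOME_NET 80 (msg:"rule 2";)',
]
box_a, box_b = split_rules(lines)
```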

Jay


> -----Original Message-----
> From: Jason Haar [mailto:Jason.Haar at ...294...]
> Sent: Monday, November 13, 2000 7:18 PM
> To: snort-users at lists.sourceforge.net
> Subject: [Snort-users] Ways to optimize throughput
> 
> 
> 
> [I may be trying to teach the developers to suck eggs here, but here
> are a few random neuron-firings I've had today]
> 
> I've seen several postings over the past few months from the
> developers where they refer to lost events due to having "too many"
> rules for Snort to process at the traffic level in question.
> Personally, with the sorts of traffic levels I'm looking at, this
> won't be a problem - but I'm wondering if there is anything else we
> could do to lower the "dropped event" count.
> 
> For instance, this throughput problem is an issue with other
> protocols, like Web proxying, too. I use "jesred"
> (http://www.linofee.org/~elkner/webtools/jesred/), a redirector for
> the Squid proxy server. This package manages to "optimize" its
> database of URLs by doing a double pass: it first matches grossly,
> then gets its final match from that subset - apparently this speeds
> it up no end.
> 
> So I was wondering if something like that could be done to Snort to
> allow it to process packets faster. If I have a rule that looks for
> "filename=\"hello.there\"" in an SMTP packet, how about getting Snort
> just to look for SMTP packets (which is easier), then look for the
> exact match afterwards? Would that actually speed things up?
> 
> Also, if the number of packets hitting the Snort ethernet card were
> reduced, I'm sure that would lower the risk of packet loss. So how
> about writing a script that parses the rules file and generates an IP
> ACL list from it (e.g. ipchains for Linux)? Then, by blocking/dropping
> all other packets (as they are of no interest to us) and only allowing
> through packets that Snort is looking for, that should leave more
> resources for Snort to play with.
> 
> Comments? [no flames, I've just had lunch]
> 
> 
> 
> -- 
> Cheers
> 
> Jason Haar
> 
> Unix/Special Projects, Trimble NZ
> Phone: +64 3 9635 377 Fax: +64 3 9635 417
> _______________________________________________
> Snort-users mailing list
> Snort-users at lists.sourceforge.net
> http://lists.sourceforge.net/mailman/listinfo/snort-users
> 
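On Jason's ipchains idea: a script like that might look roughly like
this.  The rule parsing is greatly simplified, the interface name is a
placeholder, and the exact ipchains flags would need checking against
the man page:

```python
# Rough sketch: read Snort-style rule headers and emit ipchains commands
# that accept only the traffic the rules care about, denying the rest.
# Parsing assumes the simple "alert <proto> <src> <sport> -> <dst> <dport>"
# header layout; anything else is skipped.
def rules_to_ipchains(rules, iface="eth0"):
    cmds = []
    for rule in rules:
        fields = rule.split()
        if len(fields) < 7 or fields[0] != "alert":
            continue  # not a rule line we understand
        proto, dport = fields[1], fields[6]
        cmds.append(
            f"ipchains -A input -i {iface} -p {proto} -d 0/0 {dport} -j ACCEPT"
        )
    # Everything not matched by a rule is of no interest - drop it.
    cmds.append(f"ipchains -A input -i {iface} -j DENY")
    return cmds

rules = [
    'alert tcp any any -> $HOME_NET 25 (msg:"smtp";)',
    'alert tcp any any -> $HOME_NET 80 (msg:"http";)',
]
for cmd in rules_to_ipchains(rules):
    print(cmd)
```

One caveat: a trailing DENY means Snort would never see probes against
ports you have no rules for, so you'd lose visibility of those scans.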


