[Snort-users] thoughts on load balancing snort boxen for high traffic links

Andrew R. Baker andrewb at ...1150...
Wed Mar 21 12:55:32 EST 2001


A number of people have mentioned using the TopLayer boxes.  If you are
on a budget, you can simulate some of the functionality.  Install a hub
to sniff from (as long as you have < 100 Mbps to sniff).  Connect the
span port to the hub and hang as many snort boxes as you need off of
the hub.  Set a BPF filter on each snort command line to split the
traffic across all of the sensors.
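
For example (a rough illustration only; the 10.1.1.0/24 network, the
interface name, and the three-way split are made-up placeholders), each
sensor gets a filter expression after its normal options, so that
between them the sensors cover the whole link:

  # sensor 1: traffic to/from the lower half of the monitored /24
  snort -c snort.conf -i eth0 net 10.1.1.0/25

  # sensor 2: traffic to/from the upper half
  snort -c snort.conf -i eth0 net 10.1.1.128/25

  # sensor 3: anything the first two filters do not match
  snort -c snort.conf -i eth0 not net 10.1.1.0/24

Splitting on the monitored addresses (rather than round-robin) keeps
both directions of a session on the same sensor, which helps stream and
fragment reassembly.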

-A

"Austad, Jay" wrote:
> 
> I originally sent this message to another list, but I think it's worth
> posting here as well:
> 
> Ok, so I was thinking more about load balancing snort boxes for high traffic
> links.  Here's one idea I had; let me know if it sounds like it might work:
> 
> Say I have one box that sits and runs the following command:
> tcpdump -i eth1 -w - -<some_options> | ./splitter -b 10M -h
> 10.1.1.1:9999,10.1.1.2:9999,10.1.1.3:9999 &
> 
> Where the program "splitter" takes the tcpdump output as stdin, fills a
> buffer of size specified by the -b option, and then flushes the buffer
> (UDP?) to the first host listed in the -h option, the next fill/flush will
> go to the second host, and so on.
> 
> Each snort box has its snort.conf set up to log to the same central
> database, has a named pipe (mkfifo /dev/snortpipe), and runs something like:
> 
> nc -l -p 9999 -u > /dev/snortpipe &
> snort -<some_options> -r /dev/snortpipe &
> 
> I couldn't get snort to take stdin, hence the creation of the named pipe.
> The splitter program will most likely need multiple threads, so that while
> one thread is flushing a buffer, another can be filling the next one and
> there is no interruption in the collection of data.  As my 3 snort boxes
> start running out of resources because of growing traffic, I can just add
> another.  Obviously, you're probably going to hose some of the fragment
> reassembly, but it shouldn't be too bad if the buffer size specified in the
> splitter program is large enough.
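> 
> A rough sketch of what splitter might look like (Python purely for
> illustration, and untested; it assumes stdin is the raw pcap stream from
> "tcpdump -w -", and a real version would need to respect pcap record
> boundaries and resend the pcap file header to each sensor rather than
> blindly chopping the stream):
> 
> #!/usr/bin/env python
> # splitter.py -- fill a buffer from stdin, flush it over UDP to one
> # sensor, then rotate to the next sensor in the list.
> import argparse
> import socket
> import sys
> 
> def parse_size(s):
>     # accept "10M", "512K", or a plain byte count
>     mult = {"K": 1024, "M": 1024 * 1024}
>     if s[-1].upper() in mult:
>         return int(s[:-1]) * mult[s[-1].upper()]
>     return int(s)
> 
> def parse_hosts(spec):
>     # "10.1.1.1:9999,10.1.1.2:9999" -> [("10.1.1.1", 9999), ...]
>     return [(h.rsplit(":", 1)[0], int(h.rsplit(":", 1)[1]))
>             for h in spec.split(",")]
> 
> def main():
>     ap = argparse.ArgumentParser(add_help=False)  # keep -h free for the host list
>     ap.add_argument("-b", default="10M")
>     ap.add_argument("-h", dest="hosts", required=True)
>     args = ap.parse_args()
> 
>     bufsize = parse_size(args.b)
>     sensors = parse_hosts(args.hosts)
>     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
> 
>     target = 0
>     while True:
>         chunk = sys.stdin.buffer.read(bufsize)    # fill one buffer
>         if not chunk:
>             break
>         # flush to the current sensor in datagram-sized pieces, then rotate
>         for off in range(0, len(chunk), 8192):
>             sock.sendto(chunk[off:off + 8192], sensors[target])
>         target = (target + 1) % len(sensors)
> 
> if __name__ == "__main__":
>     main()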
> 
> Unless snort gets more efficient or takes advantage of multiple processors,
> or until we have 4 GHz processors, I don't see how I'm going to sniff links
> that sustain any more than 20 Mbit/sec worth of traffic.  Thoughts??
> 
> ----------
> Jay Austad
> Network Administrator
> CBS Marketwatch
> 612.817.1271
> austad at ...432... <mailto:austad at ...432...>
> http://cbs.marketwatch.com
> http://www.bigcharts.com
> 
> _______________________________________________
> Snort-users mailing list
> Snort-users at lists.sourceforge.net
> Go to this URL to change user options or unsubscribe:
> http://lists.sourceforge.net/lists/listinfo/snort-users
> Snort-users list archive:
> http://www.geocrawler.com/redir-sf.php3?list=snort-users



