[Snort-users] Deployment Sizes? was: anyone trying kickfire to improve SQL performance?

moses at ...14297...
Mon May 5 10:09:34 EDT 2008

Although CPU is important, disk I/O may be more important. I would
probably say that even if you could find a good balance of processors
and memory, disk I/O will cause you to lose packets more than
processor or memory issues will. Here is my suggestion:

Break up your snort processing power into multiple devices, for
several reasons:

#1 Never underestimate the value of distributed processing. Having
multiple collectors on different pieces of hardware will allow you to
overcome the limitations of:

  - stack and OS issues with not being able to process so much traffic
  - issues with your PCI or PCI-X bus not being able to keep up
  - issues with your I/O not being able to keep up

#2 A hierarchical architecture, from collector (snort) to aggregator
(log) to database, will help you not only collect the data but also
parse it.
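
As a concrete sketch of that collector -> aggregator -> database
tiering (the file names, paths, and barnyard invocation below are
illustrative; check them against your versions):

    # snort.conf on each collector: log in binary unified format
    # instead of talking to the database directly
    output unified: filename snort.unified, limit 128

    # on the aggregator, barnyard reads the unified spool files and
    # feeds the central database asynchronously, e.g.:
    #   barnyard -c barnyard.conf -d /var/log/snort -f snort.unified

This way a slow database insert stalls barnyard, not the
packet-capture path.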

#3 Consider a network rearchitecture where you group devices by OS and
application so you can apply signatures and policies in an organized
fashion:

If you have several Linux Apache webservers, consider not placing
them on the same subnet as your MySQL servers, so that you can apply
Linux and Apache signatures (and HTTP preprocessors) to one
collector (collector "a"), and Linux and MySQL signatures to
another (collector "b").
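
For example, the two collectors' snort.conf files could then differ
only in which rule sets they include (the file names below are the
stock ones shipped in the snort rules distribution; adjust to taste):

    # collector "a": Linux + Apache web tier
    preprocessor http_inspect: global iis_unicode_map unicode.map 1252
    include $RULE_PATH/web-attacks.rules
    include $RULE_PATH/web-cgi.rules
    include $RULE_PATH/web-misc.rules

    # collector "b": Linux + MySQL tier
    include $RULE_PATH/mysql.rules
    include $RULE_PATH/sql.rules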

You may find that several cheap collectors yield better, more
accurate results, and possibly more resilience, than a single large box.

Moses Hernandez

On May 4, 2008, at 10:02 AM, Jason <security at ...5028...> wrote:

> A very reasonable approach. Packet loss can cause a number of issues  
> but
> low percentages shouldn't be too problematic.
> Using affinity shelters one instance from issues with the others; make
> sure you handle interrupts properly too.
> Things to look for with packet loss:
> - Packets that seem to have mixed protocol content, if you get into
> higher percentage loss you may have to use zero_flushed_packets
> otherwise you can get buffer remnants as artifacts because of  
> reassembly
> having gaps in the data stream
> - Cascading loss. Packet loss makes subsystems work harder, which
> creates further loss.
> - Errant alerts. Caused by content being left over as detailed above.
> - state mismatch, loss in the setup of a session could cause mid-state
> sessions to be incorrectly identified.
> Your goal should be 0 loss but it is practical to accept that some  
> loss
> may happen during bursts and peak hours, your security posture is
> generally minimally affected because an attacker cannot reliably  
> predict
> when the loss will occur and if it will be their packet.
> SQL having its own resources is good, but keep in mind that logging
> directly from the engine is a blocking operation. Any DB work that
> takes
> longer than the time between packets will result in loss because the
> engine cannot process the next packet. Decouple output from input,
> e.g. use unified output and barnyard.
> Memory is more important than processor in many cases, make sure you
> allocate enough memory to the snort processes to handle everything.  
> Many
> times packet loss can be resolved simply by allocating more memory.
> Stewart L wrote:
>> I figured we'd add until we start dropping too many packets.   The  
>> CPU load
>> on each core is only about 45% right now and we're dropping less  
>> than 1% of
>> packets through the box.  We're also doing some processor affinity  
>> stuff and
>> dedicating a couple of cores to SQL, and each instance of snort gets
>> its own
>> core as well.
>> I'd be interested in hearing from other folks doing large setups...
>> Stewart
>> On Sat, May 3, 2008 at 5:13 PM, Jason Haar  
>> <Jason.Haar at ...294...> wrote:
>>> Stewart L wrote:
>>>> Well, I wasn't in charge of the deployment. I handed it off to  
>>>> one of
>>>> the guys on my team to do the research and recommendations.
>>>> Part of the problem is that there is no SOLID advice out there on  
>>>> how
>>>> to set up and tweak a lot of this stuff.  We have the O'Reilly books
>>>> and have done some searches, but there is a lot of hand waving  
>>>> and not
>>>> a lot of solid answers.
>>> There are too many variables for there to be a "one size fits all"
>>> answer. That's why companies like SourceFire exist - they do all  
>>> that
>>> background 'thinking' for you and produce a product that 'just  
>>> works'.
>>> You should check that the solution you have actually works. 6-16
>>> 100Mb/s
>>> Ethernet monitors on one box is probably too many, unless you've
>>> cherry-picked the motherboard, Ethernet cards, etc. And I'm assuming
>>> they're 100Mb - if they are Gb, you almost certainly have a problem.
>>>> So, you're saying that if I were to have another machine do the  
>>>> actual
>>>> capture and a separate database machine, I'd be better off in the  
>>>> long
>>>> haul?  That should be pretty easy to set up.
>>> Yup - you won't get all the hard SQL work interfering with the hard
>>> packet sniffing work. And barnyard of course instead of native SQL
>>> support.
>>> --
>>> Cheers
>>> Jason Haar
>>> Information Security Manager, Trimble Navigation Ltd.
>>> Phone: +64 3 9635 377 Fax: +64 3 9635 417
>>> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1

More information about the Snort-users mailing list