[Snort-users] Snort + PF_RING + DAQ

beenph beenph at ...11827...
Tue Sep 4 18:14:18 EDT 2012


On Tue, Sep 4, 2012 at 5:30 PM, livio Ricciulli <livio at ...15149...> wrote:
>
> The Intel ixgbe (10Gb) driver comes with a script called
> set_irq_affinity which I use to set the card IRQs to the CPUs - in
> /proc/interrupts it looks like a descending staircase pattern.
>
> good.
>
> The most recent PF_RING DAQ has a parameter to specifically bind
> Snort/DAQ instances to CPU ids so I'm using that in a similar loop to
> the one used to start Snort on the Metaflows site.
>

I was personally referring to the network driver queues. You can set
those at the driver level.
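For example (a minimal sketch, assuming the in-tree ixgbe driver; the
out-of-tree Intel driver takes an RSS module parameter instead):

  ethtool -l eth3                # show current/maximum queue (channel) counts
  ethtool -L eth3 combined 16    # set 16 combined RX/TX queues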


> The site says:
>
> for i in `seq 0 1 23`; do
>   snort -c snort.serv.conf -N -A none -i eth3 --daq-dir /usr/local/lib/daq \
>     --daq pfring --daq-var clusterid=10 &
> done
>
> I do not think binding CPUs is a good idea... Notice that the IXGBE has
> 16 queues but we spawn 24 threads with no binding... That was the best
> performance on our hardware.
>

CPU binding is important. QUEUE-wise, if you bind a snort process to the
same CPU as its network QUEUE, then you can clearly start to benchmark.
If you spread the network queue load over multiple CPUs and do not bind
the process to the same CPU, then you're adding context switching into
the mix, which I think is bad at high throughput.
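Something like this (a sketch only, not the Metaflows recipe; it assumes
16 RX queues whose IRQs were pinned to CPUs 0-15 by set_irq_affinity, and
uses taskset rather than the DAQ's own binding parameter):

  for i in `seq 0 15`; do
    taskset -c $i snort -c snort.serv.conf -N -A none -i eth3 \
      --daq-dir /usr/local/lib/daq --daq pfring --daq-var clusterid=10 &
  done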


> IIRC you should have as many snort threads as your card has network
> QUEUEs, and you should balance your IRQs on CPUs and not CORES; thus if
> you have 16 dual-core CPUs, then you should bind 2 CPUs (4 cores) to
> each snort process.
>
> I do not know how good your network card driver is, but maybe you would
> like to compile it from source.
>
> Ref: http://www.intel.com/support/network/adapter/pro100/sb/cs-032530.htm
>
> Pfring uses its own ixgbe driver...

It still has the original driver features; mainly the DMA functions are
patched, IIRC, so it's still tunable.
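For instance (assuming the out-of-tree Intel ixgbe sources, which expose
their tunables as module parameters; RSS is the per-port queue count):

  modinfo ixgbe | grep -i parm   # list the parameters this build exposes
  modprobe ixgbe RSS=16,16       # e.g. 16 RSS queues on each of two ports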

Is the PF_RING driver up to date? It seems like it's a few versions
behind; could that have an impact? Maybe the ntop people know.
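You can at least compare versions (ethtool -i reports whatever ixgbe is
currently loaded):

  ethtool -i eth3                # driver name, version, firmware
  modinfo ixgbe | grep ^version  # version of the module on disk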

>
> Also there is a lot of tuning available depending on your setup, so you
> can tune your driver to your needs.
>
> -elz
>
> On our hardware, we had a slight gain from hyperthreading, using
> 24 snort processes on a dual X5670 (6 cores + hyperthreading) rather
> than 12 snort processes like you suggest. Also, as I said, in our tests,
> letting the CPUs roam wild was the best...
>
> But it is hard to generalize...
>


Having 6 physical cores (12 if they're dual-socket) and 16 queues, I
would set 2 network QUEUEs per CPU (not core) and spread the 4 remaining
queues over all cores.
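As a rough sketch of steering queues by hand (assuming the queue IRQs
show up in /proc/interrupts as eth3-TxRx-0 through eth3-TxRx-15;
set_irq_affinity does the same thing for you):

  for q in `seq 0 15`; do
    irq=$(awk -v n="eth3-TxRx-$q" '$NF == n {sub(":","",$1); print $1}' /proc/interrupts)
    printf "%x" $((1 << (q % 12))) > /proc/irq/$irq/smp_affinity
  done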

Now this will depend on the network activity, but I still strongly think
you shouldn't spread the workload across CPU threads; enabling
hyperthreading shouldn't do any good unless you follow the same logic
and still think in terms of CPUs, thus binding 4 CPU threads (2 threads
per core, 2 cores) to a snort instance and a snort queue.
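A hedged example of that layout (assuming CPUs 0-11 are the physical
cores and 12-23 their hyperthread siblings; the numbering is
machine-specific, check lscpu):

  taskset -c 0,1,12,13 snort -c snort.serv.conf -N -A none -i eth3 \
    --daq-dir /usr/local/lib/daq --daq pfring --daq-var clusterid=10 &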


-elz



