[Snort-users] Snort/Barnyard2 performance with remote DB
mikelococo at ...11827...
Thu Mar 1 11:44:31 EST 2012
On 02/29/2012 08:47 PM, beenph wrote:
> Batch insertion is something already thought of and will probably be
> available in the next release. For design/consistency reasons it will
> not be used with the new schema.
That's great news, and will likely completely clear up the latency
problem I described.
>> My solution was to move the database local to the barnyard2
>> instance and use a more latency tolerant protocol to push the
>> events back to a central system. In my case, an Arcsight connector
>> is doing that work and it employs both batching and multiple
>> transport threads to achieve latency tolerance.
> Well if you feel comfortable having a distributed db system and you
> think you have no issue synchronizing between all your deployments then
> it's fine, but personally that's only part of the problem. Also, having a
> DB on the system that runs snort is kind of a performance hog in
> itself, and personally I would rather deal with latency than with
> potentially dropped packets.
I'm aware of the general best-practice advice in sensor design, but I
monitor my snort stack extensively and use that data to drive my design
decisions. I'm confident that it's working as expected.
An overview of my monitoring setup is at:
> As I said before, the REAL issue with the "old" plugin was the
> incredible amount of time it spent querying the DB for 1 event. This
> has been dramatically reduced, which mostly fixes the problem of using
> it over a high-latency network. Unless you use barnyard2 in combination
> with a special snort ruleset that generates 2mb of data every second
> and you try to force that data around the world over a 128k/s link,
> in which case you might have other issues.
Below are theoretical maximum alert rates for a system that inserts 1
alert per TCP round trip. If I understand barnyard2's architecture, a
single instance of barnyard2 cannot exceed these maximums without the
addition of code to handle batching or parallel insert threads:
80-100 ms -> 10-12.5 alerts/sec - Typical latency from US east coast to
US west coast or to western Europe
200 ms    -> 5 alerts/sec         - Typical latency from US east coast to
Asia or Australia
300+ ms   -> 3 or fewer alerts/sec - Typical latencies to locations with
poor-quality internet infrastructure
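With one synchronous insert per round trip, these ceilings follow directly
from rate = 1/RTT, and batching N alerts per round trip raises the ceiling
to roughly N/RTT. A quick sketch of the arithmetic (the 50-alert batch size
below is purely illustrative, not a barnyard2 parameter):

```python
# Theoretical max alert rates when each TCP round trip carries one batch.
def max_alert_rate(rtt_seconds, batch_size=1):
    """One round trip moves batch_size alerts, so rate = batch_size / RTT."""
    return batch_size / rtt_seconds

for rtt_ms in (80, 100, 200, 300):
    rtt = rtt_ms / 1000.0
    print(f"{rtt_ms} ms RTT: {max_alert_rate(rtt):.1f} alerts/sec unbatched, "
          f"{max_alert_rate(rtt, batch_size=50):.0f} alerts/sec with 50-alert batches")
```

This reproduces the figures above (100 ms -> 10/sec, 200 ms -> 5/sec) and
shows why batching, rather than lower per-query overhead, is what ultimately
lifts the ceiling on a high-latency link.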
These theoretical numbers are 5-10 times larger than what I measured in
my last round of real-world tests, which were performed before the most
recent round of optimizations. I believe they already reflect the
best-case improvement from the optimizations that are about to be
released, but I haven't tested the newly optimized code. I'd love to
see someone else's numbers, but I won't have time to investigate myself
until batching lands, which is when I expect there might be enough
improvement to make WAN DBs viable for my use-case.