[Snort-users] Snort/Barnyard2 performance with remote DB
beenph at ...11827...
Thu Mar 1 20:54:01 EST 2012
On Thu, Mar 1, 2012 at 11:44 AM, Mike Lococo <mikelococo at ...11827...> wrote:
> On 02/29/2012 08:47 PM, beenph wrote:
>> Batch insertion is something already thought of and will probably be
>> available in the next release. For design/consistency reasons it will
>> not be used with the new schema.
> That's great news, and will likely completely clear up the latency issue.
>>> My solution was to move the database local to the barnyard2
>>> instance and use a more latency tolerant protocol to push the
>>> events back to a central system. In my case, an Arcsight connector
>>> is doing that work and it employs both batching and multiple
>>> transport threads to achieve latency tolerance.
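The approach quoted above (batching plus multiple transport threads) can be sketched generically. This is not Arcsight code, just a hypothetical illustration of why N concurrent senders hide per-event round-trip latency: each worker pays the RTT independently, so throughput scales roughly with the worker count.

```python
import queue, threading, time

def transport_worker(q, rtt_s, sent):
    # Each worker pays the round-trip latency on its own, so 4 workers
    # deliver roughly 4x the events per second of a single sender.
    while True:
        event = q.get()
        if event is None:
            break
        time.sleep(rtt_s)   # simulate one network round trip
        sent.append(event)  # list.append is thread-safe in CPython
        q.task_done()

q = queue.Queue()
sent = []
workers = [threading.Thread(target=transport_worker, args=(q, 0.01, sent))
           for _ in range(4)]
for w in workers:
    w.start()
for i in range(40):
    q.put(i)
q.join()            # block until every queued event has been delivered
for _ in workers:
    q.put(None)     # poison pill: tell each worker to exit
for w in workers:
    w.join()
```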
>> Well if you feel comfortable having a distributed db system and you
>> think you have no issue synchronizing between all your deployments then
>> it's fine, but personally that's only part of the problem. Also, having
>> a DB on the system that runs snort is kind of a performance hog in
>> itself, and personally I would rather deal with latency than with
>> potentially dropped packets.
> I'm aware of general best-practice advice in sensor design, but I
> monitor my snort stack extensively and use that data to drive my design
> decisions. I'm confident that it's working as expected.
Well, the point I was trying to address is that you will run into issues
if your final backend has the same schema as the one located on your
sensor. So you also have to transform the data midway between your
central reader and your "sensor"-based database.

And if you have a really busy sensor writing to disk, a heavy barnyard2
process writing to a database, and a database receiving and processing
its information while dispatching requests to the reader that feeds a
"central" database, there are issues other than snort and barnyard2 that
could affect your system, regardless of latency, and I wouldn't encourage
this approach to solve a barnyard2 < 2.1-9 "latency" issue.
> An overview of my monitoring setup is at:
> Below are theoretical maximum alert rates for a system that inserts 1 alert
> per tcp round trip. If I understand barnyard2's architecture, a single
> instance of barnyard2 cannot exceed these maximums without the addition of
> code to handle batching or parallel insert threads:
> 80-100 ms -> 10-12.5 alerts/sec    - Typical latency for US east-coast to
>                                      US west-coast or to western Europe
> 200 ms    -> 5 alerts/sec          - Typical latency for US east-coast to
>                                      Asia or Australia
> 300+ ms   -> 3 or fewer alerts/sec - Typical latencies to locations with
>                                      poor-quality internet infrastructure
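The maximums quoted above follow directly from the one-alert-per-round-trip model: the rate ceiling is simply 1000 / RTT(ms). A trivial sketch of the arithmetic:

```python
# Theoretical maximum alert insertion rate when each alert costs one
# full TCP round trip (synchronous, single-threaded inserts).
def max_alert_rate(rtt_ms):
    """Alerts per second for a given round-trip time in milliseconds."""
    return 1000.0 / rtt_ms

for rtt in (80, 100, 200, 300):
    print(f"{rtt} ms RTT -> at most {max_alert_rate(rtt):.1f} alerts/sec")
```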
> These theoretical numbers are 5-10 times larger than what I measured in my
> last round of real-world tests, which were performed before the most recent
> round of optimizations. I believe they already include the best-case
> improvement for the optimizations that are about to be released, but I
> haven't tested the newly optimized code. I'd love to see someone else's
> numbers, but won't have time to investigate myself until batching lands,
> which is when I expect there might be enough improvement to facilitate wan
> DB's for my use-case.
I wouldn't want people to be misled by numbers that were based on a
plugin that had been around for a while without any change to even reduce
the insane number of queries it was issuing to the database for a single
event.
Allowing a batch buffer to be sent is not a big thing to implement, and
it will be present with the new schema. The issue is to create such a
"buffering"/retention mechanism for all plugins while keeping consistency
synchronized.
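The batch-buffer idea itself is small, as noted above. Here is a minimal illustrative sketch using Python and SQLite — not barnyard2 code (barnyard2 is C and its schema differs, and the table/column names here are hypothetical) — showing events buffered in memory and flushed in one transaction instead of paying one round trip per event:

```python
import sqlite3

class BatchedWriter:
    """Hypothetical sketch: buffer events, flush them in one transaction.

    One INSERT round trip per batch instead of one per event is what
    makes a high-latency (e.g. WAN) database link tolerable.
    """
    def __init__(self, conn, batch_size=100):
        self.conn = conn
        self.batch_size = batch_size
        self.buffer = []

    def write(self, sid, cid, signature):
        self.buffer.append((sid, cid, signature))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        # executemany inside a single transaction: one commit round trip
        self.conn.executemany(
            "INSERT INTO event (sid, cid, signature) VALUES (?, ?, ?)",
            self.buffer)
        self.conn.commit()
        self.buffer.clear()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (sid INTEGER, cid INTEGER, signature INTEGER)")
w = BatchedWriter(conn, batch_size=50)
for cid in range(120):
    w.write(1, cid, 1000001)
w.flush()  # drain the partial final batch
```

The hard part, as the text says, is not this buffer but making every output plugin share such a retention mechanism without losing consistency between them.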
But I will still put emphasis on the fact that the re-written plugin
will boost the performance of remote (and, in your case, local)
insertion, and it shouldn't be "ignored" if you're serious about
"performance".