[Snort-users] Snort/Barnyard2 performance with remote DB
beenph at ...11827...
Wed Feb 29 20:47:44 EST 2012
If you guys want to talk about "barnyard2" it would be nice to
also include the barnyard2-users ML, or move the conversation there.
If you're not already on it, jump in.
Also, I have concatenated both of your e-mails (Mike and Jason) for consistency.
> A factor of 10 doesn't make a meaningful difference for me.
A factor of 10 is conservative; I have observed better numbers, but those
numbers depend on a lot of factors, and a factor-of-10 insertion performance
gain is far from negligible with the old schema, no matter how you put it.
Even if you refrain from testing it, it will be the de facto behavior in the next release anyway.
I was just mentioning it.
> For local
> DB's with lan latency, barnyard2 is already plenty fast for my use. For
> remote DB's with 200ms of latency to be feasible I'd need to see a
> factor of 100 improvement (remember we're starting from ~1 alert/sec for
> a link with over 200ms latency). I'm pretty sure this problem isn't
> solvable without batching multiple alerts per tcp round-trip, or
> employing dozens of insert threads to get parallelism.
Batch insertion is something that has already been thought about and will probably
be available in the next release. For design/consistency reasons it will not be used
with the new schema.
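To make the round-trip argument concrete, here is a rough throughput model (my own sketch, not anything from barnyard2's code; the RTT and batch sizes are illustrative assumptions): if each INSERT costs one TCP round trip, throughput is capped at 1/RTT, and batching N events per round trip multiplies that cap by N.

```python
# Rough model: one DB operation costs the link's round-trip time, so
# per-event inserts are capped near 1/RTT events per second, while
# batching N events into one round trip multiplies that by N.
# Illustrative numbers only; real throughput is lower due to server work.

def inserts_per_second(rtt_s: float, batch_size: int) -> float:
    """Upper bound on events/sec when batch_size events share one round trip."""
    return batch_size / rtt_s

rtt = 0.2  # ~200 ms WAN latency, as in the thread

print(inserts_per_second(rtt, 1))    # at best ~5 events/sec, one insert per trip
print(inserts_per_second(rtt, 100))  # ~500 events/sec with 100-event batches
```

This is why batching (or parallel insert threads) is the only way to beat latency on a remote DB: the per-round-trip cap does not care how fast either endpoint is.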
> My solution was to move the database local to the barnyard2 instance and
> use a more latency tolerant protocol to push the events back to a
> central system. In my case, an Arcsight connector is doing that work
> and it employs both batching and multiple transport threads to achieve
> latency tolerance.
Well, if you feel comfortable having a distributed DB system and you think you have
no issue synchronizing between all your deployments, then that's fine, but personally
that's only part of the problem. Also, having a DB on the same system that runs
snort is kind of a performance hog in itself, and personally I would rather deal
with latency than with potentially dropped packets.
> I really like barnyard2 as a tool. I use it extensively and in almost
> all of my deployments it introduces effectively zero overhead and isn't
> even close to being a bottleneck. It is highly sensitive to latency,
> though, and in a few deployments I've had to engineer around that
One of the issues was the way the plugin was coded: the number of times
it queried the DB for each event.
Now the signature lookup cost is almost zero unless the signature is not in cache,
so there is no overhead there,
and this avoids between 5 and 10 database operations in the worst cases.
Also, 1 event = 1 transaction (for all touched tables).
Put all those together, multiply them by the number of events you
insert, and you get the performance gain.
But this was all done while trying to keep consistency with the old schema.
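The signature-cache idea above can be sketched like this (hypothetical names, not barnyard2's actual internals): consult an in-memory map first, and only fall back to a DB query on a miss, so repeated events for the same signature cost zero extra round trips.

```python
# Hypothetical sketch of caching signature lookups so that each event
# no longer costs several DB round trips. Not barnyard2's real code.

class SignatureCache:
    def __init__(self, db_lookup):
        self._db_lookup = db_lookup  # callable that hits the database
        self._cache = {}             # (gen_id, sig_id) -> signature row id
        self.db_calls = 0            # counter, just to show the savings

    def get(self, gen_id, sig_id):
        key = (gen_id, sig_id)
        if key not in self._cache:   # miss: one DB query, then cached forever
            self.db_calls += 1
            self._cache[key] = self._db_lookup(gen_id, sig_id)
        return self._cache[key]

# Usage: 1000 events for the same signature hit the DB only once.
cache = SignatureCache(lambda g, s: 42)
for _ in range(1000):
    cache.get(1, 2000001)
print(cache.db_calls)  # 1
```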
> This is a great topic, as lately I've been thinking about centralizing
> our world-wide SQL databases and this issue with latency will kill us.
> How about this as a feature request? Get snort to rotate unified output
> files after either a time or size threshold (like daemonlogger does),
....snort does rotate files with a size threshold. Daemonlogger actually uses
the "snort way".
> and then use rsync to move those closed files to a central server, where
> barnyard can then move them into the DB? Certainly not realtime anymore
> - but if you are talking about centralizing high-latency separated
> sensors into a single DB, I think we can safely say realtime isn't a
> primary motivator anymore... Tricks with dnotify/etc could minimize the
> delay too.
Using this method you would then make the delay depend on file size. Unless you
rsync every 200k or something, and even then, what happens when you have a low-noise
sensor? (Your delay becomes the rsync delay plus the file-size delay.) ....
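The low-noise-sensor concern can be put in numbers (my own sketch; the alert sizes and rates are assumed, illustrative values): the shipping delay is bounded by whichever comes first, the unified2 file filling to its rotation threshold or the rsync timer, and on a quiet sensor the fill time dominates.

```python
# Delay before events ship = min(time for the file to reach the size
# threshold, the rsync interval). On a quiet sensor only the rsync
# timer keeps the delay bounded. Illustrative numbers throughout.

def shipping_delay_s(threshold_bytes, bytes_per_alert,
                     alerts_per_s, rsync_interval_s):
    fill_time = threshold_bytes / (bytes_per_alert * alerts_per_s)
    return min(fill_time, rsync_interval_s)

# Busy sensor: a 200 kB threshold fills in seconds, delay is small.
print(shipping_delay_s(200_000, 500, 100, 60))    # 4.0 seconds
# Quiet sensor at 1 alert/min: the same file takes hours to fill,
# so the 60 s rsync timer is what bounds the delay.
print(shipping_delay_s(200_000, 500, 1 / 60, 60)) # 60 seconds
```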
> Actually, this could all be treated as a barnyard feature request, i.e. a
> new output option for barnyard - unique filenames that another process
> (rsync loop) manages. This would have the advantage that the local
> barnyard could still do the realtime syslog alerting - it would just be
> the DB entries that would lag...
Whether you ship the raw data or the transformed data, you would still have
to deal with a high-latency network, then reprocess the information.
As I said before, the REAL issue with the "old" plugin was the incredible number
of times it was querying the DB for a single event; dramatically reducing
that pretty much fixes the problem of
using it over a high-latency network. Unless you use barnyard2 in
combination with a special
snort ruleset that generates 2 MB of data every second and you try
to force that data
around the world over a 128 kB/s link, in which case you might have other issues.
> Hmmm, barnyard2 already has a tcpdump output option - could all this be
> done with existing code? i.e. the "leaf node" barnyard2 does the syslog
> and tcpdump output, we rsync the tcpdump files to the central server,
> *somehow* turn them back into unified2 format and the central barnyard2
> pushes them in (with the original sensor names of course).
The tcpdump output only dumps packets from events that have packets, so
it's quite useless
if you want both "event" information and packets, but if you're only
interested in "packets"
then it can be an option.