[Snort-users] Re: Announce: FLoP-1.0 --- Fast Logging Project for snort (fwd)

Dirk Geschke Dirk_Geschke at ...1344...
Tue Dec 2 05:33:03 EST 2003

Hi Bamm,

> I've learned to expect the unexpected. For me, many of my sensors 
> are remote, and I don't control any of the networks. So, if we take
> our 'central' network down for maintenance, my remote sensors wouldn't 
> be able to connect to the central servsock until it's back up. Usually
> this is no more than two to four hours, but every once in a while it 
> can go down for longer. Yes, we write alerts to disk after they use X 
> amount of mem (set in a conf file). They are written one alert per file
> and then read back in order once the backend becomes available again. 
> There are numerous ways to accomplish this and I like the extra reliability
> it provides. I don't think it's right to compare it to barnyard since we 
> are only writing to disk when we can't send the alerts up. I'd rather take
> a small hit in my snort performance rather than losing alerts altogether.

the idea behind FLoP is fast logging. Therefore I assumed a
dedicated network for the communication between the remote sensors
and the central server. Consequently there is neither encryption of
the data nor any validation of the sensor (aka sockserv)
connecting to the central server. (On stealth sensors this just
requires a second network interface. But indeed I think this network
should be reserved for the NIDS.)

Furthermore, if you don't get too many alerts then sockserv should
still be able to cope (except for a power outage). The problem
arises if you get too many alerts. But then I see the same problem
with re-inserting the stored files: when will you do this if the
current attack rate is high? Do you then want to store the current
alerts on disk and insert the old ones first?
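The one-alert-per-file spooling that Bamm describes could be sketched roughly as follows. This is a hypothetical illustration, not sguil's or FLoP's actual on-disk format; the `AlertSpool` class and its filename scheme are my own invention. Zero-padded sequence numbers make lexical filename order match arrival order, so replay stays in order:

```python
import os
import tempfile

class AlertSpool:
    """Spool alerts to disk, one file per alert, and replay them in order."""

    def __init__(self, directory):
        self.directory = directory
        self.seq = 0

    def spool(self, alert: bytes):
        # Zero-padded sequence number keeps lexical order == arrival order.
        path = os.path.join(self.directory, "alert-%012d" % self.seq)
        self.seq += 1
        with open(path, "wb") as f:
            f.write(alert)

    def replay(self):
        # Read back the oldest alerts first, deleting each file once yielded.
        for name in sorted(os.listdir(self.directory)):
            path = os.path.join(self.directory, name)
            with open(path, "rb") as f:
                yield f.read()
            os.remove(path)

# Simulate an outage: buffer two alerts, then replay once the backend is back.
spooldir = tempfile.mkdtemp()
spool = AlertSpool(spooldir)
spool.spool(b"alert one")
spool.spool(b"alert two")
replayed = list(spool.replay())
print(replayed)  # [b'alert one', b'alert two']
```

Deleting each file only after it has been handed back keeps the spool crash-safe in the sense that an alert is never lost between replay and delivery.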

> > Hmm, this is a little bit more complicated. I don't know how I can
> > identify problems with the database connection within servsock. But
> > the good thing is: Good databases will never die ;-)
> We can dream can't we ;)

Oh yes, at least sometimes... But indeed I have never had problems
with databases dying abnormally. (A full disk is an abnormal situation.)

> > In fact databases will rarely die and my project assumes that the
> > database is running on the same host as servsock. Otherwise we can't
> > feed the database via an unix socket... So a reboot is no issue. DB
> > cleaning could be: but this can be done online. Then maybe servsock 
> > will work a little bit slower...
> Depends on the database.  We use postgres and while a 'normal' vacuum 
> is good for routine maintenance, once your postgres DB gets huge (tens 
> of millions of rows) you probably need to run 'VACUUM FULL', which will
> totally lock the DB until it's done (we're talking hours here). I'd be 
> interested to see how servsock responds to a locked DB. Generally, I 
> restart the DB and only allow local connections when I need to do a 
> 'VACUUM FULL'; this causes the remote agents to start buffering their
> events locally (our version of servsock isn't run on the DB machine).

Hmm, I fear servsock will run into trouble if the database is off.
Before tearing down the database you can stop servsock so no new
alerts are accepted; the old ones are stored, and then you can wait
for the database to come online again. Then restart servsock. If
the parameters for sockserv are chosen appropriately, everything
should go on as before. That's at least one possibility.

Stopping the database while servsock is running will simply drop
the alerts into nirvana. (Ok, you will get an error message from
the client library, which is printed to stdout or syslog and
contains the failed statement.) But it is not a good idea at all.
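One way a server process could avoid dropping alerts into nirvana when an INSERT fails is to fall back to a local buffer and replay it once the database answers again. This is only a sketch of the idea, not servsock's actual behavior; `insert_alert`, `store_alert`, and the in-memory backlog are all hypothetical:

```python
from collections import deque

def store_alert(alert, insert_alert, backlog: deque):
    """Try the database first; on failure keep the alert instead of dropping it."""
    try:
        insert_alert(alert)          # e.g. wraps the client library's INSERT
    except Exception:
        backlog.append(alert)        # database is down or locked: buffer locally

def flush_backlog(insert_alert, backlog: deque):
    """Re-insert buffered alerts once the database is reachable again."""
    while backlog:
        insert_alert(backlog[0])     # raises if the DB is still down
        backlog.popleft()            # drop the alert only after a clean insert

# Simulated outage: the first two inserts fail, then the DB comes back.
stored = []
state = {"up": False}

def fake_insert(alert):
    if not state["up"]:
        raise ConnectionError("database is down")
    stored.append(alert)

backlog = deque()
for a in ["a1", "a2"]:
    store_alert(a, fake_insert, backlog)

state["up"] = True
flush_backlog(fake_insert, backlog)   # replay the outage backlog in order
store_alert("a3", fake_insert, backlog)
print(stored)  # ['a1', 'a2', 'a3']
```

Flushing the backlog before accepting new inserts preserves the original alert order, which matters if the analyst console sorts by insertion rather than by timestamp.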

But how many sensors are you running to get such an amount of
data, even if the connection to the sensors goes down for a few
hours in between?

I think a database should be cleaned by some automatic process
that spools off old or less important data. Who really cares about
alerts from last month? Yes, you should think about making a
backup, but I think it is not really required to be part of a vital 
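Such an automatic clean-up could be as simple as a scheduled DELETE of rows older than some cutoff. A minimal sketch using Python's bundled sqlite3 module; the `alerts` table and `ts` column here are purely illustrative and do not match the real FLoP or sguil schema:

```python
import sqlite3

def purge_old_alerts(conn, days=30):
    """Delete alerts older than `days` days; meant to run from cron, not by hand."""
    cur = conn.execute(
        "DELETE FROM alerts WHERE ts < datetime('now', ?)",
        ("-%d days" % days,),
    )
    conn.commit()
    return cur.rowcount  # number of rows removed

# Demo on an in-memory database with one stale and one fresh alert.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (id INTEGER PRIMARY KEY, ts TEXT, msg TEXT)")
conn.execute("INSERT INTO alerts (ts, msg) "
             "VALUES (datetime('now', '-60 days'), 'old')")
conn.execute("INSERT INTO alerts (ts, msg) VALUES (datetime('now'), 'new')")

deleted = purge_old_alerts(conn, days=30)
remaining = [row[0] for row in conn.execute("SELECT msg FROM alerts")]
print(deleted, remaining)  # 1 ['new']
```

On a production postgres instance the same idea would be a DELETE plus routine vacuuming, ideally combined with archiving the purged rows to the backup first.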


> > Which database design are you using? I think it is the same one as
> > of snort/ACID? Did you ever think about the possibility to store the
> > whole pcap data in the database? This should make it unnecessary to
> > store them on disk on a separate way.
> No, sguil does not use the standard snort/acid schema. There have
> been many discussions about this, but basically the standard schema
> isn't as scalable as I and others need/want it to be. I can send you 
> a diagram (once I create it) otherwise you can just take a peek at 
> the create_sguildb.sql script in the source. Pcap data for each alert 
> is stored in the DB. The pcap that is stored on the sensors is for
> entire streams (think 'log ip any any -> any any'). It's hard enough 
> to store that data locally, I can't imagine trying to push that data 
> up to a DB (let alone dealing with it once it's there). Sguil uses 
> barnyard and the op_sguil plugin for receiving RT events and INSERTing
> them into the DB right now. Are you interested in discussing 
> (off this list) an option for FLoP to use the sguil schema and to send 
> alerts directly to sguild?

I just took a short look at create_sguildb.sql. At first glance it
does not look very different from the ACID database, but it
contains some fields which are not included in the FLoP "design".

Is it possible that the tables contain a lot of redundancy? That
does not seem to be very fast...

But I have to look at it in a quiet moment and read the documentation
of sguil too. But this will take some time which I don't have in the

Best regards

Dirk

