[Snort-devel] Re: [Snort-users] Announce: FLoP-1.0 --- Fast Logging Project for snort

Jeff Nathan jeff at ...835...
Mon Dec 1 01:57:03 EST 2003

Please understand (because I haven't been very clear up until now), I'm 
not speaking specifically about FLoP in these messages.  I'm speaking 
conceptually.  I think your work is very compelling and hope to 
thoroughly investigate it this week.

-Jeff

On Dec 1, 2003, at 4:48 AM, Dirk Geschke wrote:

> Hi Jeff,
>> Yes, you're right.  If the process reading off the domain socket is
>> terminated or crashes (read: bites the dust), then you've lost your
>> logging information.  Log files are more permanent in that sense... of
>> course there are other solutions for this as well (possibly persistent
>> IPC?  I don't know if this is portable though).
> as outlined in my answer to Bamm's mail, it is very unlikely that
> the process will die. There are some situations in which this may
> happen, but even snort may die, which is arguably worse. There are,
> however, some possible solutions that could be applied.
> At the moment there is only one situation in which the sockserv
> process (the process running on the sensor and listening on
> the socket) will die: if the process on the central server
> is not running, sockserv tries up to 10 times (adjustable),
> with a delay of 10 seconds (also adjustable), to reach
> the server process. If this fails, the process simply stops.
> The default values are quite small, but if you want to test
> them it is better to have small values...
> If this happens you have a serious problem on your central
> server, so we stop working to give the admin a hint.
> These values can also be set very high. Maybe an
> option for an infinite loop should be added?
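The bounded-retry behaviour Dirk describes can be sketched roughly like this (a minimal illustration; the function name and defaults are my assumptions, not FLoP's actual code):

```python
import time

def connect_with_retry(connect, max_tries=10, delay=10.0):
    """Try to reach the central server up to max_tries times,
    sleeping `delay` seconds between attempts (both adjustable,
    as in sockserv); give up if all attempts fail."""
    for attempt in range(1, max_tries + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_tries:
                # Serious problem on the central server: stop so
                # the admin notices, rather than spin forever.
                raise
            time.sleep(delay)
```

An "infinite loop" option would simply mean treating a special value of max_tries (say, 0) as "no limit".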
> Yes, log files are more permanent, until the disk/partition is filled
> up. And what happens then?
> The big advantage of the FLoP solution is that while you may indeed
> lose alerts, no process fails because the chain is broken.
> You can restart the dead process and everything works again without
> problems.
> With a full disk you may run into more problems before everything is
> working again...
>> Files were chosen because they're persistent, but there is of course a
>> cost associated with them.  The argument is sometimes made that a
>> memory filesystem can be used if you need more speed out of spool
>> files, but a memory filesystem is no more permanent than missing
>> messages sent to the domain socket.  The best sense of permanency in
>> this situation is to move your spooled file out of the memory
>> filesystem (if you need it for permanency).  But, now we're getting
>> into the minutiae of all this stuff.
> With FLoP I thought of a distributed system with several sensors
> reporting to one central server. If you store the log files locally,
> you have to check all sensors for these files.
> Yes, you can temporarily spool the data on the filesystem and then
> insert it into the database later on. But this has side effects,
> such as alerts getting out of the order in which they were generated.
>> I don't know what the best option is right now.  The only point at
>> which you need persistence is generating the initial spool file.  So,
>> perhaps barnyard ought to provide a domain socket to allow for the
>> decoupling of output rather than require people to integrate their code
>> within it (register their output plugin in the same way you would with
>> Snort, etc..).
> If we set aside the problems with the filesystem, then the main
> problem with barnyard is, in my eyes, the output plugin for the
> database. This is the old behaviour of snort.
> The advantage of barnyard is, as with FLoP, that the output is
> decoupled from the snort process. But if you want to store the data
> in a database (and where else should it be kept?), then it is written
> directly over the network via TCP/IP to the central database.
> (Note: I am still thinking of distributed sensors and one central
> database. But I think the database should never run on a sensor in a
> critical environment.)
> You have to feed several tables of the database, which results in
> a high amount of traffic for each alerting packet. (You probably have
> to fill at least the tables event, sensor, iphdr, tcphdr and data,
> and all values have to be sent via TCP/IP.) With my approach you send
> all the alert information to the central machine in two TCP packets.
> There the servsock process feeds it to the database via a unix
> socket. This is much faster than going over the (real) network.
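The traffic difference comes from shipping each alert as one self-contained record instead of one INSERT per table. A sketch of the idea (the field layout below is hypothetical, for illustration only, and is not FLoP's actual wire format):

```python
import struct

# Hypothetical record layout: sensor id, timestamp, signature id,
# payload length, then the raw payload.  NOT FLoP's real format.
HDR = struct.Struct("!IIIH")

def pack_alert(sensor_id, timestamp, sig_id, payload):
    """Serialize one alert into a single buffer, so the whole record
    crosses the network in one send instead of separate inserts into
    event, sensor, iphdr, tcphdr and data."""
    return HDR.pack(sensor_id, timestamp, sig_id, len(payload)) + payload

def unpack_alert(buf):
    """Inverse of pack_alert: recover the fields and the payload."""
    sensor_id, timestamp, sig_id, plen = HDR.unpack_from(buf)
    return sensor_id, timestamp, sig_id, buf[HDR.size:HDR.size + plen]
```

On the server side a receiver would unpack such a record once and perform the per-table inserts locally, over the unix socket, where they are cheap.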
> And again: servsock works like sockserv. One thread simply receives
> the alerts and stores them in memory (so it is unlikely that this
> blocks), and a second thread feeds this data to the database. If the
> database is hanging (due to indexing, for example) or too many
> sensors are alerting at the same time, the alerts are buffered.
> And if even this leads to problems, there is still the possibility
> of dropping alerts when too many are stored in memory. These dropped
> alerts can be sent (only the short form, no payload) to a list of
> recipients via e-mail.
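The two-thread buffering with an overflow drop policy can be sketched with a bounded queue (a rough Python analogy under my own naming; servsock itself is not written this way):

```python
import queue
import threading

class AlertBuffer:
    """Receiver thread enqueues alerts; a writer thread drains them to
    the database.  When the buffer is full (database hanging, or too
    many sensors alerting at once), new alerts are dropped and kept
    aside so a short-form summary could be e-mailed later."""

    def __init__(self, maxsize=10000):
        self.q = queue.Queue(maxsize=maxsize)
        self.dropped = []

    def receive(self, alert):
        try:
            self.q.put_nowait(alert)    # never blocks the receiver thread
        except queue.Full:
            self.dropped.append(alert)  # candidate for the e-mail report

    def drain(self, write_to_db):
        """Run in the writer thread; a None sentinel stops it."""
        while True:
            alert = self.q.get()
            if alert is None:
                break
            write_to_db(alert)
```

The key property is that receive() never blocks, so a slow database stalls only the writer thread while the receiver keeps accepting alerts from the sensors.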
> You see, there are already quite a few aspects I have taken care of...
> Best regards
> Dirk

--
http://cerberus.sourcefire.com/~jeff       (gpg/pgp key id 6923D3FD)
"Common sense is the collection of prejudices acquired by age
eighteen."   - Albert Einstein
