[Snort-users] Pushing raw tcpdump data into database is extremely slow

Andrew R. Baker andrewb at ...950...
Wed Nov 21 10:14:03 EST 2001


Thomas Novin wrote:
> 
> Hi all.
> 
> At first I tried to log our network traffic directly into a MySQL database
> but found that Snort dropped ~75% of the packets. Instead I used tcpdump
> to log to a file, pushed the file over to the MySQL server, and then,
> using snort -r, inserted the data into the database.
> 
> The problem is, over a ~5 minute period the tcpdump logfile had grown to
> approx. 50 MB in size and 770k lines. I gave up on the snort -r run
> after letting it go for 25 minutes; Snort had by then inserted 330k rows
> into the database. I think you can all see the problem here: there is no
> way the database will keep up with my traffic.
> 
> The database server is quite a powerful machine: dual PIII 933 MHz, 1 GB
> RAM, Seagate U160 SCSI. I see, however, that the CPU load is no more than
> ~20% (it varies between 0 and 50%) and there was still 350 MB of memory
> free. When I logged directly to the database, the machine used CPU 1 at
> 100%, CPU 2 at ~15%, and all of the memory.

AFAIK, no SQL database will be fast enough to keep up with real-time
insertion of network traffic; even Oracle will fall behind.  You could
try using an embedded database for the inserts.  The real question here
is *why* you are trying to store all of this information in a database.
If you want to be able to search for packets based on certain fields, I
would suggest a hybrid approach: create tables in the database that
contain only the searchable fields and have them reference the
appropriate pcap file, then use tcpdump to extract the full packets as
you need them.  To make this work better, you should segment the pcap
files either by time or by size.
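A minimal sketch of that hybrid index, using Python's built-in sqlite3
as the embedded database (the schema, field names, filenames, and sample
records below are all illustrative, not from the original post):

```python
import sqlite3

# Embedded index: only the searchable header fields plus a pointer
# to the (time- or size-segmented) pcap file holding the full packet.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pkt_index (
        ts        REAL,     -- packet timestamp
        src       TEXT,     -- source IP
        dst       TEXT,     -- destination IP
        sport     INTEGER,  -- source port
        dport     INTEGER,  -- destination port
        pcap_file TEXT      -- segmented capture file containing the packet
    )""")

# Hypothetical records; in practice these fields would be parsed from
# the capture as it is written.
rows = [
    (1006355643.1, "10.0.0.5", "10.0.0.9", 1025, 80,
     "cap-20011121-1013.pcap"),
    (1006355644.7, "10.0.0.9", "10.0.0.5", 80, 1025,
     "cap-20011121-1013.pcap"),
    (1006355950.2, "10.0.0.7", "10.0.0.9", 1026, 22,
     "cap-20011121-1018.pcap"),
]
conn.executemany("INSERT INTO pkt_index VALUES (?, ?, ?, ?, ?, ?)", rows)

# Search the index, not the packets: find which capture files hold
# traffic to port 22, then hand the actual extraction to tcpdump, e.g.
#   tcpdump -r cap-20011121-1018.pcap 'port 22'
files = [r[0] for r in conn.execute(
    "SELECT DISTINCT pcap_file FROM pkt_index WHERE dport = ?", (22,))]
print(files)  # → ['cap-20011121-1018.pcap']
```

The inserts touch only a few small columns per packet instead of the
full payload, which is what keeps the database side cheap; the bulky
data never leaves the pcap files.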

-A



