[Snort-users] large numbers of ICMP messages and log analysis

James Hoagland hoagland at ...47...
Mon Aug 21 12:18:39 EDT 2000


At 11:46 AM +0100 8/21/00, Tom Whipp wrote:
>Hi all,
>
>	over the weekend I managed to crash my snort host during automated log
>processing (using snort-snarf).  It turns out that somebody had been sending
>a steady stream of ICMP echoes to my primary nameserver (logs indicate around
>350,000) over a 24-hour period.
>
>When snort-snarf came to run there was over 200MB of data to process, which
>exhausted the physical and virtual memory of the host.  I suspect that
>snort-snarf uses a seriously non-linear amount of memory (I haven't checked
>yet but the correlations in the output would seem to suggest this), which
>doesn't matter for most routine logs, but that's another story.

snortsnarf.pl does use a lot of memory when running.  Not superlinear 
though, AFAIK.  As currently designed, it reads all the alerts in all 
the input files into memory, parsing them as it goes.  It groups 
these into different sets (e.g., all the alerts coming from a certain 
IP).  Pages are then generated from these.
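
To give a rough idea of why memory scales with the number of alerts,
here is a little sketch of the approach.  This is not SnortSnarf's
actual code; the one-alert-per-line format and the dotted-quad parse
are just placeholders:

#!/usr/bin/perl
# sketch only: group every alert line by the first dotted quad seen,
# keeping all of them resident in memory until report time
my %by_src;
while (my $line = <>) {
        next unless $line =~ /(\d+\.\d+\.\d+\.\d+)/;
        push @{ $by_src{$1} }, $line;
}
# report pages would then be generated from these in-memory groups
printf "%s: %d alerts\n", $_, scalar(@{$by_src{$_}}) for sort keys %by_src;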

We've looked into lowering its memory consumption, but nothing seemed 
promising and easy.  So we just decided to let virtual memory handle 
it.  We welcome suggestions though.

In any case, I'm not sure why it would crash your host, unless the 
host normally crashes when memory is exhausted.


>In the meantime all I can think of to do is to ignore all ICMP traffic -
>which doesn't make me very happy.

Another option is to filter your SnortSnarf input logs and look 
through the ICMP echo traffic manually (but it sounds like you already 
know what is there).  At the bottom of this message I have inlined a 
script I use to do filtering of this sort.
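
For example, I'd run it along these lines (the path and the pattern
are just placeholders; use whatever matches the echo alerts in your
own alert file):

   ./grep_para.pl -v 'ICMP' /var/log/snort/alert > alert.no-icmp

and then point SnortSnarf at the filtered file.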

>Would it make sense for snort to adopt a syslog approach to log entries (and
>print messages such as "previous packet repeated 10,000 times") for instances
>such as this?  It could save quite a lot of hassle (and provided that the
>payloads were compared, no data would be lost).  I haven't looked at the
>architecture yet, but could this be done by a pre-processor (I suspect not, as
>it would need to know the last alert generated)?  Or could this effectively be
>an addition to the logging modules (perhaps one that supports loading of
>another logging module)?

It is difficult to do this effectively without losing information. 
At a minimum, the times of the alerts are likely different.  So, if 
implemented, this behavior should be optional.  Aside from that, I 
have no objections as long as it is easy to parse.  Just my 2c.
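
Just to make the tradeoff concrete, here is a rough sketch of the
kind of collapsing being discussed (this is not part of Snort, and it
assumes one alert per line with the timestamp as the first
whitespace-separated field, which won't hold for every output format):

#!/usr/bin/perl
# collapse consecutive alerts that are identical once the leading
# timestamp is stripped; the individual timestamps are what gets lost
my ($last, $count) = ('', 0);
while (my $line = <>) {
        (my $key = $line) =~ s/^\S+\s+//;
        if ($last ne '' && $key eq $last) {
                $count++;
                next;
        }
        print "    last message repeated $count times\n" if $count;
        print $line;
        ($last, $count) = ($key, 0);
}
print "    last message repeated $count times\n" if $count;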

Regards,

   Jim

#!/usr/bin/perl

# grep_para.pl, by Jim Hoagland (hoagland at ...47...)
# use at your own risk

# print out only those paragraphs in the input that match given perl
#  patterns (each is put directly into an m// which is run on the
#  paragraph).
# paragraphs are assumed to be separated by one or more blank lines
# if -v is given at the start of the command line, the tense of the
#  search is reversed and all the patterns are required to not be
#  present in the paragraph for the paragraph to print
# the input file is the last argument on the command line; output is
#  to STDOUT
# the perl patterns are the remaining arguments

$rev=0;
@pats= @ARGV[0..$#ARGV-1];
while ($pats[0] =~ /^\-/) {
         $opt= shift(@pats);
         if ($opt =~ /^-v/) {
                 $rev=1;
         }
}
@ARGV= ($ARGV[$#ARGV]);
my $text= '';
LINE: while (<>) {
        $text.= $_;
        # a blank line ends the current paragraph; decide whether to print it
        if (/^\s*$/) {
                $t= $text;
                $text= '';
                foreach $pat (@pats) {
                        unless ($rev) {
                                next LINE unless $t =~ /$pat/;
                        } else {
                                next LINE if $t =~ /$pat/;
                        }
                }
                print $t;
        }
}
# don't forget a final paragraph that isn't followed by a blank line
if ($text =~ /\S/) {
        $print_it= 1;
        foreach $pat (@pats) {
                if ($rev) {
                        $print_it= 0 if $text =~ /$pat/;
                } else {
                        $print_it= 0 unless $text =~ /$pat/;
                }
        }
        print $text if $print_it;
}
-- 
|*   Jim Hoagland, Associate Researcher, Silicon Defense    *|
|*               hoagland at ...47...                *|
|*  Voice: (707) 445-4355 x13          Fax: (707) 826-7571  *|



