[Snort-devel] Segfault, massive memory usage in 1.8.1beta3 build 47

Seth Leger soleger at ...511...
Fri Jul 20 14:21:05 EDT 2001

I got this segfault after running for about 2 hours under a constant 
Nessus load:


#0  0x80744fb in Frag2CompareFunc (ItemPtr=0xbfffebf0, NodePtr=0x13873950) at spp_frag2.c:154
#1  0x80700c6 in qFind (cmp=0x80744f0 <Frag2CompareFunc>, FindMe=0xbfffebf0, p=0x13577b30) at ubi_BinTree.c:231
#2  0x80705ba in ubi_btFind (RootPtr=0x80998e4, FindMe=0xbfffebf0) at 
#3  0x807099c in ubi_sptFind (RootPtr=0x80998e4, FindMe=0xbfffebf0) at 
#4  0x8074bcd in GetFragTracker (p=0xbfffecb0) at spp_frag2.c:474
#5  0x8074a78 in Frag2Defrag (p=0xbfffecb0) at spp_frag2.c:410
#6  0x8054f32 in Preprocess (p=0xbfffecb0) at rules.c:3427
#7  0x804a80b in ProcessPacket (user=0x0, pkthdr=0x80d6378, pkt=0x80d6478 "") at snort.c:512
#8  0x8074ee3 in RebuildFrag (ft=0x13577b30, p=0xbffff1d0) at 
#9  0x8074b08 in Frag2Defrag (p=0xbffff1d0) at spp_frag2.c:433
#10 0x8054f32 in Preprocess (p=0xbffff1d0) at rules.c:3427
#11 0x804a80b in ProcessPacket (user=0x0, pkthdr=0xbffff690, pkt=0x8312c62 "") at snort.c:512
#12 0x807762c in pcap_read ()
#13 0x8077c1b in pcap_loop ()
#14 0x804bbb0 in InterfaceThread (arg=0x0) at snort.c:1441
#15 0x804a6db in main (argc=1, argv=0xbffff834) at snort.c:445
#16 0x40077f31 in __libc_start_main (main=0x804a07c <main>, argc=1, init=0x80497b8 <_init>, fini=0x807ea1c <_fini>, rtld_fini=0x4000e274 <_dl_fini>, stack_end=0xbffff82c) at ../sysdeps/generic/libc-start.c:129
(gdb) list spp_frag2.c:154
149         FragTracker *iFt;
151         nFt = (FragTracker *) NodePtr;
152         iFt = (FragTracker *) ItemPtr;
154         DebugMessage(DEBUG_FLOW,"NodePtr: sip: 0x%X  dip: 0x%X  ip: 0x%X  "
155                      "proto: 0x%X\n", nFt->sip, nFt->dip, nFt->id, 
156         DebugMessage(DEBUG_FLOW,"ItemPtr: sip: 0x%X  dip: 0x%X  ip: 0x%X  "
157                      "proto: 0x%X\n", iFt->sip, iFt->dip, iFt->id, 
(gdb) print nFt
$1 = (FragTracker *) 0x13873950
(gdb) print nFt->sip
Cannot access memory at address 0x13873960
(gdb) print nFt->dip
Cannot access memory at address 0x13873964
(gdb) print nFt->id
Cannot access memory at address 0x13873968
(gdb) print nFt->protocol
Cannot access memory at address 0x1387396a


On another note: I've seen very heavy memory usage from Snort over the 
last couple of runs. During the run that eventually ended in the 
segfault above, after approximately 2 hours Snort was using upwards of 
145 MB of RAM.

Historically, I've seen it level off at about 45 MB with my 
configuration (all current preprocessors on and XML logging enabled). In 
an overnight run last night with the beta2 code, Snort grew to 445 MB of 
RAM, exhausting the machine's resources. Here's the info from that scan:

Snort analyzed 27638371 out of 27638371 packets, dropping 0(0.000%) packets

Breakdown by protocol:                Action Stats:
   TCP: 26398288   (95.513%)         ALERTS: 82490
   UDP: 753654     (2.727%)          LOGGED: 80793
  ICMP: 56867      (0.206%)          PASSED: 0
   ARP: 49962      (0.181%)
  IPv6: 0          (0.000%)
   IPX: 0          (0.000%)
  OTHER: 10333      (0.037%)
DISCARD: 0          (0.000%)
Fragmentation Stats:
Fragmented IP Packets: 1007496    (3.645%)
  Rebuilt IP Packets: 134481
  Frag elements used: 0
Discarded(incomplete): 0
  Discarded(timeout): 119106
TCP Stream Reassembly Stats:
  TCP Packets Used:      26398247   (95.513%)
  Reconstructed Packets: 18073      (0.065%)
  Streams Reconstructed: 13416940
Snort received signal 2, exiting

These alerts were generated by intermittent Nessus scans (while I was at 
work), followed by a constant stream of scripted nmap scans of the 
sensor overnight, along with nearly constant output from the portscan 
plugin (since I had an NFS server outside of the exclude list). 
Incidentally, the portscan.log produced by the portscan plugin reached 
800 MB overnight.

I have run similar tests against this system before and have never seen 
the RAM climb as high as it did yesterday. I'm not sure which part of 
the process is introducing this load (stream4, frag2, or the XML 
plugin), but if anyone has advice on how to reduce it by changing the 
plugin settings, I'd really appreciate it.
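For what it's worth, if the growth turns out to be in frag2 or stream4, both take tuning arguments that cap their memory and expire trackers sooner. Something along these lines -- option names and values from memory, so please check the README for your build before relying on them:

```
# hypothetical tuning, values are only examples
preprocessor frag2: timeout 60, memcap 4194304
preprocessor stream4: memcap 8388608, timeout 30
preprocessor stream4_reassemble
```

Given the 119106 fragments discarded on timeout in the stats above, a lower frag2 timeout alone might keep the tracker table from ballooning.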

Another note: the RAM usage seems to be proportional to the output of 
the XML plugin, not to the size of the portscan log. I therefore assume 
that the problem lies in a method that is executed for each alert. 
Here's a copy of my config for interested onlookers:

var HOME_NET any

#preprocessor defrag
preprocessor frag2
#preprocessor stream2: timeout 10, ports 21 23 80 110 143, maxbytes 16384
preprocessor stream4
preprocessor stream4_reassemble
#preprocessor http_decode: 80 -unicode -cginull
preprocessor unidecode: 80
preprocessor rpc_decode: 111
preprocessor bo: -nobrute
preprocessor telnet_decode
preprocessor portscan: $HOME_NET 4 3 portscan.log
#preprocessor portscan-ignorehosts: $DNS_SERVERS

# Flat file logging:
output xml: alert, file=/var/log/snort-xml

include classification.config

include master.rules

Thanks for the updates; it's definitely looking more stable than 1.8 
was. Great work,

soleger at ...511...
