[Snort-users] libpcap vs. ethernet drivers

Alex Stephens acs at ...1936...
Sun Apr 29 21:12:00 EDT 2001


Hi,

I've been running some tests to understand the failure modes of the
libpcap routines that Snort uses to grab copies of network traffic.
I've encountered peculiar results, so I thought I'd tap into the
group's collective experience.

Does anyone have information regarding the limitations of
libpcap-based utilities vs. ethernet drivers?  I.e., which is more
likely to fail when dealing with (1) fast ethernet line speeds (100
Mb/s) and (2) small packet sizes (say, less than 200 bytes) -- which
necessarily implies a large number of packets per second?
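To put a number on that implication, here's a back-of-the-envelope
sketch (mine, not from any benchmark) of the maximum packet rate a 100
Mb/s link can carry for a given UDP payload size.  I'm assuming the
usual per-packet overhead of 14 B Ethernet header + 4 B FCS + 20 B IP
+ 8 B UDP, and ignoring the preamble and inter-frame gap, which
slightly overstates the rate:

```python
# Rough upper bound on packets/second at 100 Mb/s for small UDP packets.
# Overheads assumed: Ethernet header (14) + FCS (4) + IP (20) + UDP (8);
# preamble/IFG ignored, so real rates are a bit lower.
LINK_BPS = 100_000_000           # 100 Mb/s fast ethernet
OVERHEAD = 14 + 4 + 20 + 8       # bytes of framing per packet

def max_pps(udp_payload_bytes):
    """Upper bound on packets per second for a given UDP payload size."""
    frame_bits = (udp_payload_bytes + OVERHEAD) * 8
    return LINK_BPS / frame_bits

for payload in (1472, 500, 200):
    print(f"{payload:5d} B payload -> ~{max_pps(payload):,.0f} pps")
```

At a 200-byte payload this works out to roughly 50,000 pps, so small
packets really do translate into packet rates in that range.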

What I'd really be interested in seeing is a documented experiment
that demonstrates the limitations of libpcap or ethX drivers -- then
maybe I could believe/interpret my own experimentation (described
below, for the curious).

I'd really appreciate it if anyone has solid empirical data on the
realistic limitations of libpcap-based packet grabbers or the
network devices they read from.

Thanks much,
-Alex


For the curious, my current setup is as follows:

I'm using 2 HP LPrs (both running Linux -- initially with Red Hat's
stock 2.2.16 kernel and then with a compiled 2.4.3 kernel) as a
network source and sink.  Each machine has an eepro100 attached to a
Cisco 2924XL.  ttcp is used to generate traffic.

A third machine (another HP LPr) serves as a network monitor and its
interface is attached to a port in SPAN mode on the 2924.  The port of
the traffic "sink" is copied to this SPAN port.

A Perl script run on the monitor does the following:
	1) SNMP poll the 2924XL for initial port statistics
	2) Launch tcpdump/Snort on the monitor
	3) Launch ttcp -- in UDP mode -- on the "source" to the 
	   discard port on the "sink"
	4) close out the tcpdump/Snort session
	5) SNMP poll the 2924XL for final port statistics

Then the number of packets seen by the switch is compared with the
number of packets observed by tcpdump/Snort.
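The comparison itself is simple; a minimal sketch of that step (the
counter values below are purely illustrative, not measured data) looks
like:

```python
# Capture ratio: fraction of switch-reported packets the sniffer saw.
def capture_ratio(snmp_before, snmp_after, captured):
    """Compare the SPAN port's SNMP counter delta to the sniffer's count."""
    sent = snmp_after - snmp_before   # packets forwarded during the trial
    return captured / sent if sent else 0.0

# Illustrative numbers only:
print(f"{capture_ratio(1_000, 101_000, 87_500):.1%}")
```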

When using the 2.2.16 kernel, the percentage observed by tcpdump/Snort
begins to fall below 90% at the ~500-byte payload level.  With the
2.4.3 kernel, this happens for packets with payloads less than 200
bytes (or ~50,000 pps).

What's even stranger is that the runtime of the ttcp process suddenly
drops to a bizarrely small number at 300-byte payloads and smaller.
This would imply, at least to me, that the ethernet driver on the
network "source" has failed somehow.  The fact that the byte level at
which this discontinuity occurs differs between the 2.2.16 and 2.4.3
kernels seems to support this assertion.  But I don't understand why
this might happen -- any ideas?
