[Snort-users] Lowmem issue

James Lay jlay at ...13475...
Tue Feb 14 13:28:30 EST 2017


On 2017-02-14 09:48, Michael Altizer wrote:
> It looks a lot like your remaining free RAM is too fragmented to
> allocate the contiguous memory required for the AFPacket ring buffer.
> This can happen when a system has been up and active for a while doing
> other things, and it is exacerbated by having relatively little free
> RAM (you're over a gig into swap and have around 6% "free", including
> disk caching). You could try shutting down other processes and hoping
> that they release convenient memory; otherwise you may have to reboot.
> The default request for AFPacket is 128MB unless you've changed it via
> the DAQ variable, and it will try to back down its block size (the
> minimal contiguous block requirement) all the way to the page size, so
> it's doing everything it can. You can watch the back-off process at
> work by passing Snort --daq-var debug.
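For reference, the back-off Michael describes can be observed with an invocation along these lines. This is only a sketch: the config path and interface are placeholders for your own setup, and buffer_size_mb is the afpacket DAQ variable that controls the total ring request (128 MB by default).

```shell
# Hypothetical invocation -- substitute your own config path and interface.
# --daq-var debug makes the afpacket DAQ log each block-size back-off step;
# --daq-var buffer_size_mb=64 shrinks the total ring request from the
# 128 MB default, which makes the allocation easier on a fragmented box.
snort -c /etc/snort/snort.conf -i eth0 \
      --daq afpacket \
      --daq-var debug \
      --daq-var buffer_size_mb=64
```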

Very helpful...thank you Michael.

James

> 
> On 02/13/2017 10:09 AM, James Lay wrote:
>> More information...anything?  Cisco?
>> 
>> 15:05:49 box kernel: [1632941.016354] snort: page allocation failure: order:4, mode:0x10c0d0
>> 15:05:49 box kernel: [1632941.016362] CPU: 3 PID: 6187 Comm: snort Tainted: G           OX 3.13.0-107-generic #154-Ubuntu
>> 15:05:49 box kernel: [1632941.016364] Hardware name:
>> 15:05:49 box kernel: [1632941.016366]  0000000000000000 ffff8800017c3b50 ffffffff8172d229 000000000010c0d0
>> 15:05:49 box kernel: [1632941.016371]  0000000000000000 ffff8800017c3bd8 ffffffff81158fbb ffff88032fff2e38
>> 15:05:49 box kernel: [1632941.016374]  ffff8800017c3b78 ffffffff8115ba66 ffff8800017c3ba8 0000000000000286
>> 15:05:49 box kernel: [1632941.016377] Call Trace:
>> 15:05:49 box kernel: [1632941.016387]  [<ffffffff8172d229>] dump_stack+0x64/0x82
>> 15:05:49 box kernel: [1632941.016391]  [<ffffffff81158fbb>] warn_alloc_failed+0xeb/0x140
>> 15:05:49 box kernel: [1632941.016395]  [<ffffffff8115ba66>] ? drain_local_pages+0x16/0x20
>> 15:05:49 box kernel: [1632941.016398]  [<ffffffff8115d740>] __alloc_pages_nodemask+0x980/0xb90
>> 15:05:49 box kernel: [1632941.016403]  [<ffffffff8119bf93>] alloc_pages_current+0xa3/0x160
>> 15:05:49 box kernel: [1632941.016405]  [<ffffffff81157f8e>] __get_free_pages+0xe/0x50
>> 15:05:49 box kernel: [1632941.016409]  [<ffffffff8117514e>] kmalloc_order_trace+0x2e/0xc0
>> 15:05:49 box kernel: [1632941.016412]  [<ffffffff811a7197>] __kmalloc+0x237/0x250
>> 15:05:49 box kernel: [1632941.016421]  [<ffffffff81735272>] ? _raw_spin_lock_bh+0x12/0x50
>> 15:05:49 box kernel: [1632941.016425]  [<ffffffff8170905b>] packet_set_ring+0x19b/0x7d0
>> 15:05:49 box kernel: [1632941.016428]  [<ffffffff81739444>] ? __do_page_fault+0x204/0x560
>> 15:05:49 box kernel: [1632941.016431]  [<ffffffff817351eb>] ? _raw_spin_unlock_bh+0x1b/0x40
>> 15:05:49 box kernel: [1632941.016434]  [<ffffffff81709c30>] packet_setsockopt+0x2b0/0x970
>> 15:05:49 box kernel: [1632941.016439]  [<ffffffff81617391>] SyS_setsockopt+0x71/0xd0
>> 15:05:49 box kernel: [1632941.016442]  [<ffffffff8173dddd>] system_call_fastpath+0x1a/0x1f
>> 15:05:49 box kernel: [1632941.016443] Mem-Info:
>> 15:05:49 box kernel: [1632941.016445] Node 0 DMA per-cpu:
>> 15:05:49 box kernel: [1632941.016448] CPU    0: hi:    0, btch:   1 usd:   0
>> 15:05:49 box kernel: [1632941.016449] CPU    1: hi:    0, btch:   1 usd:   0
>> 15:05:49 box kernel: [1632941.016451] CPU    2: hi:    0, btch:   1 usd:   0
>> 15:05:49 box kernel: [1632941.016452] CPU    3: hi:    0, btch:   1 usd:   0
>> 15:05:49 box kernel: [1632941.016454] CPU    4: hi:    0, btch:   1 usd:   0
>> 15:05:49 box kernel: [1632941.016455] CPU    5: hi:    0, btch:   1 usd:   0
>> 15:05:49 box kernel: [1632941.016457] CPU    6: hi:    0, btch:   1 usd:   0
>> 15:05:49 box kernel: [1632941.016458] CPU    7: hi:    0, btch:   1 usd:   0
>> 15:05:49 box kernel: [1632941.016460] Node 0 DMA32 per-cpu:
>> 15:05:49 box kernel: [1632941.016462] CPU    0: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016463] CPU    1: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016465] CPU    2: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016467] CPU    3: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016468] CPU    4: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016470] CPU    5: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016471] CPU    6: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016473] CPU    7: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016474] Node 0 Normal per-cpu:
>> 15:05:49 box kernel: [1632941.016476] CPU    0: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016478] CPU    1: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016480] CPU    2: hi:  186, btch:  31 usd:  30
>> 15:05:49 box kernel: [1632941.016481] CPU    3: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016483] CPU    4: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016485] CPU    5: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016486] CPU    6: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016488] CPU    7: hi:  186, btch:  31 usd:   0
>> 15:05:49 box kernel: [1632941.016492] active_anon:1122485 inactive_anon:252220 isolated_anon:0
>> 15:05:49 box kernel: [1632941.016492]  active_file:698257 inactive_file:589130 isolated_file:0
>> 15:05:49 box kernel: [1632941.016492]  unevictable:44 dirty:1497 writeback:0 unstable:0
>> 15:05:49 box kernel: [1632941.016492]  free:232359 slab_reclaimable:63721 slab_unreclaimable:8157
>> 15:05:49 box kernel: [1632941.016492]  mapped:362183 shmem:309087 pagetables:11762 bounce:0
>> 15:05:49 box kernel: [1632941.016492]  free_cma:0
>> 15:05:49 box kernel: [1632941.016496] Node 0 DMA free:15876kB min:84kB low:104kB high:124kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15996kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:32kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
>> 15:05:49 box kernel: [1632941.016502] lowmem_reserve[]: 0 3227 11993 11993
>> 15:05:49 box kernel: [1632941.016505] Node 0 DMA32 free:775064kB min:18160kB low:22700kB high:27240kB active_anon:863156kB inactive_anon:311392kB active_file:846104kB inactive_file:410068kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3386688kB managed:3307620kB mlocked:0kB dirty:28kB writeback:0kB mapped:339084kB shmem:330544kB slab_reclaimable:71520kB slab_unreclaimable:5748kB kernel_stack:376kB pagetables:8264kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> 15:05:49 box kernel: [1632941.016510] lowmem_reserve[]: 0 0 8766 8766
>> 15:05:49 box kernel: [1632941.016513] Node 0 Normal free:138496kB min:49332kB low:61664kB high:73996kB active_anon:3626784kB inactive_anon:697488kB active_file:1946924kB inactive_file:1946452kB unevictable:176kB isolated(anon):0kB isolated(file):0kB present:9175040kB managed:8976776kB mlocked:176kB dirty:5960kB writeback:0kB mapped:1109648kB shmem:905804kB slab_reclaimable:183364kB slab_unreclaimable:26848kB kernel_stack:2112kB pagetables:38784kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> 15:05:49 box kernel: [1632941.016518] lowmem_reserve[]: 0 0 0 0
>> 15:05:49 box kernel: [1632941.016520] Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (R) 3*4096kB (M) = 15876kB
>> 15:05:49 box kernel: [1632941.016532] Node 0 DMA32: 58082*4kB (UEM) 50505*8kB (UEM) 8524*16kB (UEM) 52*32kB (UEMR) 5*64kB (MR) 2*128kB (R) 1*256kB (R) 1*512kB (R) 0*1024kB 0*2048kB 0*4096kB = 775760kB
>> 15:05:49 box kernel: [1632941.016543] Node 0 Normal: 27116*4kB (UEM) 2766*8kB (UEM) 227*16kB (UEM) 10*32kB (UEM) 16*64kB (UM) 1*128kB (R) 1*256kB (R) 1*512kB (R) 0*1024kB 1*2048kB (R) 0*4096kB = 138512kB
>> 15:05:49 box kernel: [1632941.016560] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
>> 15:05:49 box kernel: [1632941.016567] 1617719 total pagecache pages
>> 15:05:49 box kernel: [1632941.016569] 21317 pages in swap cache
>> 15:05:49 box kernel: [1632941.016571] Swap cache stats: add 7177607, delete 7156290, find 2135020/2460367
>> 15:05:49 box kernel: [1632941.016572] Free swap  = 4014016kB
>> 15:05:49 box kernel: [1632941.016573] Total swap = 5361660kB
>> 15:05:49 box kernel: [1632941.016575] 3144431 pages RAM
>> 15:05:49 box kernel: [1632941.016576] 0 pages HighMem/MovableOnly
>> 15:05:49 box kernel: [1632941.016577] 49566 pages reserved
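A side note on reading the trace above: "order:4" means the kernel could not find 2^4 physically contiguous pages, which on a 4 KB-page x86-64 box is only 64 KB. That even such a small contiguous run failed is what points at fragmentation rather than a plain shortage. A quick sanity check of the arithmetic:

```shell
# An order:N allocation needs 2^N physically contiguous pages
# (4096 bytes each on x86-64).
order=4
page_size=4096
bytes=$(( (1 << order) * page_size ))
echo "order:${order} = ${bytes} bytes ($(( bytes / 1024 )) KB contiguous)"
```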
>> 
>> 
>> 
>> On 2017-02-06 08:51, James Lay wrote:
>>> Been seeing these as of late:
>>> 
>>> Feb  6 15:05:46 snort[21636]: FATAL ERROR: Can't start DAQ (-1) - eth0: Couldn't allocate enough memory for the kernel packet ring!!
>>> 
>>> free -lm:
>>> 
>>>                total       used       free     shared    buffers     cached
>>> Mem:         12012      11281        730       1207         38       5599
>>> Low:         12012      11281        730
>>> High:            0          0          0
>>> -/+ buffers/cache:       5642       6369
>>> Swap:         5235       1192       4043
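The figures in that free output can be cross-checked by hand (values are in MB, so expect a couple of MB of rounding drift against free's own numbers):

```shell
# Values copied from the free -lm output above (MB).
total=12012; free=730; buffers=38; cached=5599
# The "free" column of "-/+ buffers/cache" is free + buffers + cached.
echo "available incl. caches: $(( free + buffers + cached )) MB"
# Truly free RAM as a share of total -- the ~6% figure in this thread.
echo "free: $(( free * 100 / total ))%"
```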
>>> 
>>> 
>>> Not sure where to check...memory-wise, I'm running with:
>>> 
>>> config disable_decode_alerts
>>> config disable_tcpopt_experimental_alerts
>>> config disable_tcpopt_obsolete_alerts
>>> config disable_tcpopt_ttcp_alerts
>>> config disable_tcpopt_alerts
>>> config disable_ipopt_alerts
>>> config checksum_mode: all
>>> config pcre_match_limit: 3500
>>> config pcre_match_limit_recursion: 1500
>>> config detection: search-method ac-split search-optimize max-pattern-len 20
>>> config event_queue: max_queue 8 log 3 order_events content_length
>>> config paf_max: 16000
>>> 
>>> Any thoughts would be awesome...thank you.
>>> 
>>> James
>>> 
>>> ------------------------------------------------------------------------------
>>> Check out the vibrant tech community on one of the world's most
>>> engaging tech sites, SlashDot.org! http://sdm.link/slashdot
>>> _______________________________________________
>>> Snort-users mailing list
>>> Snort-users at lists.sourceforge.net
>>> Go to this URL to change user options or unsubscribe:
>>> https://lists.sourceforge.net/lists/listinfo/snort-users
>>> Snort-users list archive:
>>> http://sourceforge.net/mailarchive/forum.php?forum_name=snort-users
>>> 
>>> Please visit http://blog.snort.org to stay current on all the latest
>>> Snort news!
> 
> 
> 



