Hi all,
When we run VPP on a single CPU core, the command "show trace max 5000" works fine.
But with four CPU cores it crashes with "out of memory".
Below is some information; any guidance?
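
Some context on why the core count might matter: as far as I can tell, each
thread keeps its own trace buffer, and "show trace max 5000" formats up to
5000 entries from every thread into a single output vector, so the output
grows roughly linearly with the number of cores. A back-of-the-envelope
estimate (the ~900 bytes per formatted entry is my guess, not measured):

#include <stdio.h>

/* Rough estimate of "show trace max 5000" output size.
 * Assumption (not from the crash data): ~900 bytes of
 * formatted text per trace entry. */
int main (void)
{
  unsigned long entries_per_thread = 5000;
  unsigned long bytes_per_entry = 900;   /* guessed average */
  for (int threads = 1; threads <= 4; threads++)
    {
      unsigned long total = threads * entries_per_thread * bytes_per_entry;
      printf ("%d thread(s): ~%lu MB of formatted trace text\n",
              threads, total >> 20);
    }
  return 0;
}

With four cores that lands around 18 MB, which happens to be close to the
18768606-byte allocation that fails in the backtrace at the end of this mail.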


root@vBRAS:~# cat /proc/meminfo 
MemTotal:        4028788 kB
MemFree:          585636 kB
MemAvailable:     949116 kB
Buffers:           22696 kB
Cached:           592600 kB
SwapCached:            0 kB
Active:          1773520 kB
Inactive:         118616 kB
Active(anon):    1295912 kB
Inactive(anon):    45640 kB
Active(file):     477608 kB
Inactive(file):    72976 kB
Unevictable:        3656 kB
Mlocked:            3656 kB
SwapTotal:        976380 kB
SwapFree:         976380 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:       1280520 kB
Mapped:           112324 kB
Shmem:             62296 kB
Slab:              84456 kB
SReclaimable:      35976 kB
SUnreclaim:        48480 kB
KernelStack:        5968 kB
PageTables:       267268 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     2466484 kB
Committed_AS:   5368769328 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:    348160 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:     512
HugePages_Free:      384
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       96064 kB
DirectMap2M:     3049472 kB
DirectMap1G:     3145728 kB
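
One sanity check on the hugepage numbers against the DPDK segment layout
printed in the startup log below: 512 - 384 = 128 pages of 2 MB are in use,
which matches the two DPDK segments exactly, so the hugepage pool itself
looks healthy and the failure should be in vpp's own heap:

#include <stdio.h>

/* Cross-check: hugepages in use per /proc/meminfo above vs. the two
   DPDK segment lengths printed in the startup log below. */
int main (void)
{
  unsigned long pages_used = 512 - 384;                    /* HugePages_Total - HugePages_Free */
  unsigned long bytes_used = pages_used * 2048UL * 1024UL; /* Hugepagesize = 2048 kB */
  unsigned long dpdk_bytes = 2097152UL + 266338304UL;      /* Segment 0 + Segment 1 */
  printf ("hugepages in use: %lu bytes\n", bytes_used);    /* 268435456 */
  printf ("DPDK segments:    %lu bytes\n", dpdk_bytes);    /* 268435456 */
  return 0;
}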




0: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
0: vlib_pci_bind_to_uio: Skipping PCI device 0000:02:0e.0 as host interface 
ens46 is up
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
[New Thread 0x7b0019efa700 (LWP 5207)]
[New Thread 0x7b00196f9700 (LWP 5208)]
[New Thread 0x7b0018ef8700 (LWP 5209)]
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:07.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:08.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:09.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:0a.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:0b.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:0c.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:0d.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:0e.0 on NUMA socket -1
EAL:   Device is blacklisted, not initializing
DPDK physical memory layout:
Segment 0: phys:0x7d400000, len:2097152, virt:0x7b0015000000, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x7d800000, len:266338304, virt:0x7affe4600000, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0
[New Thread 0x7b00186f7700 (LWP 5210)]
/usr/bin/vpp[5202]: dpdk_ipsec_process:241: DPDK Cryptodev support is disabled, 
default to OpenSSL IPsec
/usr/bin/vpp[5202]: dpdk_lib_init:1084: 16384 mbufs allocated but total rx/tx 
ring size is 18432
/usr/bin/vpp[5202]: svm_client_scan_this_region_nolock:1139: /vpe-api: cleanup 
ghost pid 4719
/usr/bin/vpp[5202]: svm_client_scan_this_region_nolock:1139: /global_vm: 
cleanup ghost pid 4719
Thread 1 "vpp_main" received signal SIGABRT, Aborted.
0x00007fffef655428 in raise () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) p errno     /* errno 9 = EBADF, though only 81 fds are open in the vpp process */
$1 = 9
(gdb) bt
#0  0x00007fffef655428 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007fffef65702a in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x000000000040724e in os_panic () at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vpp/vnet/main.c:290
#3  0x00007fffefe6b49b in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1, align_offset=<optimized out>, align=4, 
size=18768606)               /*mmap*/
    at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/mem.h:102
#4  vec_resize_allocate_memory (v=<optimized out>, 
length_increment=length_increment@entry=1, data_bytes=<optimized out>, 
header_bytes=<optimized out>, header_bytes@entry=0, 
data_align=data_align@entry=4)
    at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/vec.c:84
#5  0x0000000000420f04 in _vec_resize (data_align=<optimized out>, 
header_bytes=<optimized out>, data_bytes=<optimized out>, 
length_increment=<optimized out>, v=<optimized out>)
    at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/vec.h:142
#6  vl_api_cli_request_t_handler (mp=<optimized out>) at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vpp/api/api.c:1132
#7  0x00007ffff7bce2e3 in vl_msg_api_handler_with_vm_node (am=0x7ffff7dd6160 
<api_main>, the_msg=0x30521e4c, vm=0x7ffff79aa2a0 <vlib_global_main>, 
node=<optimized out>)
    at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlibapi/api_shared.c:502
#8  0x00007ffff79b619f in memclnt_process (vm=<optimized out>, 
node=0x7fffaebbe000, f=<optimized out>) at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlibmemory/memory_vlib.c:543
#9  0x00007ffff7755fa6 in vlib_process_bootstrap (_a=<optimized out>) at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib
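
My reading of the backtrace (an assumption, I have not traced the 17.04
source line by line): vl_api_cli_request_t_handler() accumulates the whole
formatted trace into one vppinfra vector, and when vec_resize_allocate_memory()
asks clib_mem_alloc_aligned_at_offset() for a ~18.7 MB chunk (size=18768606),
the main heap cannot satisfy it, so os_panic() aborts. A stand-alone sketch of
that growable-vector pattern in plain C (not vppinfra code; the doubling
policy here is illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Minimal imitation of a growable byte vector: capacity doubles on
 * overflow, so appending ~18 MB of trace text can momentarily demand
 * a much larger contiguous block from the heap. */
typedef struct { char *data; size_t len, cap; } vec_t;

static void vec_append (vec_t *v, const char *s, size_t n)
{
  if (v->len + n > v->cap)
    {
      size_t new_cap = v->cap ? v->cap : 64;
      while (new_cap < v->len + n)
        new_cap *= 2;                      /* doubling growth */
      char *p = realloc (v->data, new_cap);
      if (!p)
        {
          /* vppinfra instead calls os_panic() here, which is the
             abort seen in frame #2 of the backtrace. */
          fprintf (stderr, "out of memory at %zu bytes\n", new_cap);
          exit (1);
        }
      v->data = p;
      v->cap = new_cap;
    }
  memcpy (v->data + v->len, s, n);
  v->len += n;
}

int main (void)
{
  vec_t v = { 0 };
  char line[80];
  memset (line, 'x', sizeof line);
  /* append ~18.7 MB of text, matching the failing size= above */
  for (size_t i = 0; i < 18768606 / sizeof line; i++)
    vec_append (&v, line, sizeof line);
  printf ("len=%zu cap=%zu\n", v.len, v.cap);
  free (v.data);
  return 0;
}

If that reading is right, two things might help: a smaller "max" value, or a
larger main heap via the heapsize stanza in startup.conf (e.g. "heapsize 2G"),
which I believe 17.04 already honors.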