That worked!! Thanks Florin!
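For the archives: the startup.conf stanza that does the trick looks along these lines (the 4G value is just illustrative; size it to your route count):

```
# startup.conf - heap-size value illustrative
ip {
  heap-size 4G
}
```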

-Matt
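
P.S. In case anyone wants to reproduce this, the file passed to "vppctl exec" below can be generated with a small script along these lines (output path illustrative; next-hop and interface match the commands quoted below):

```shell
# Sketch: generate 65,536 route-add lines (10.0.0.0/24 .. 10.255.255.0/24)
# suitable for "vppctl exec". Only the destination prefix varies.
out=/tmp/routes.txt
: > "$out"
for a in $(seq 0 255); do
  for b in $(seq 0 255); do
    echo "ip route add 10.$a.$b.0/24 table 0 via 198.51.100.2 TenGigabitEthernet2/0/0"
  done
done >> "$out"
```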


> On Jan 29, 2018, at 3:34 PM, Florin Coras <fcoras.li...@gmail.com> wrote:
> 
> Hi Matt, 
> 
> I’ll let Neale provide more details, but I think what you’re looking for 
> is to start vpp with ip { heap-size <value> } in startup.conf.
> 
> Cheers,
> Florin
> 
>> On Jan 29, 2018, at 1:24 PM, Matthew Smith <mgsm...@netgate.com> wrote:
>> 
>> 
>> Hi,
>> 
>> I’ve been trying to insert a large set of routes into VPP’s FIB. VPP keeps 
>> exiting after adding a very specific number of them (64278). If I attach to 
>> the vpp process with gdb and start adding the routes, a SIGABRT is received 
>> after a few seconds. The backtrace (see below) shows that os_out_of_memory() 
>> was called as a result of trying to resize a vector containing FIB data 
>> structures.
>> 
>> I was originally loading the routes using a BGP daemon that has been 
>> modified to communicate with the VPP API. When I ran into problems with 
>> that, I stopped using the BGP daemon and tried adding the same routes with 
>> both vppctl and vpp_api_test. All three methods ended up at the same result.
>> 
>> The procedure I have been using to load routes is:
>> 
>> systemctl start vpp
>> vppctl set int ip addr TenGigabitEthernet2/0/0 198.51.100.1/24
>> vppctl set int state TenGigabitEthernet2/0/0 up
>> vppctl exec <file_with_routes>
>> 
>> An example of the commands I used to add routes with vppctl is:
>> 
>> ip route add 10.20.30.0/24 table 0 via 198.51.100.2 TenGigabitEthernet2/0/0
>> 
>> All routes are added to the same table, and all have the same next-hop 
>> address and interface; only the destination prefix varies. I noticed that if 
>> I omit the command to set an IP address on the interface, a larger number of 
>> routes - about 103k - can be processed before VPP dies. The system I am 
>> testing on is an Atom C2758 with 16 GB of RAM. Just to make sure it wasn’t 
>> an issue specific to that system, I repeated the test on a Xeon D-1541 with 
>> 32 GB of RAM and saw exactly the same behavior (it died after adding 64278 
>> routes). Neither system has anything else running on it.
>> 
>> Is there any memory tuning required in order to add a large number of routes 
>> to a FIB, or is this a bug? I tried increasing the hugepages memory (to 4 GB, 
>> plus increasing vm.max_map_count and kernel.shmmax) and the DPDK socket-mem 
>> (to 1 GB), because those were the only memory parameters I knew of that 
>> could be tuned. Does anyone know what I’m doing wrong, or have any ideas 
>> for where I should look?
>> 
>> Thanks!
>> -Matt
>> 
>> 
>> Backtrace:
>> 
>> (gdb) bt
>> #0  0x00007feece2761f7 in __GI_raise (sig=sig@entry=6) at raise.c:56
>> #1  0x00007feece2778e8 in __GI_abort () at abort.c:90
>> #2  0x0000000000405fa3 in os_panic ()
>> #3  0x00007feecef8c7b2 in clib_mem_alloc_aligned_at_offset 
>> (os_out_of_memory_on_failure=1, align_offset=<optimized out>, align=64, 
>> size=13676394) at mem.h:105
>> #4  vec_resize_allocate_memory (v=v@entry=0x7fee50be3480, 
>> length_increment=length_increment@entry=1, data_bytes=<optimized out>, 
>> header_bytes=<optimized out>, header_bytes@entry=48, 
>> data_align=data_align@entry=64) at vec.c:84
>> #5  0x00007feecfbfa0cf in _vec_resize (data_align=64, header_bytes=48, 
>> data_bytes=<optimized out>, length_increment=1, v=<optimized out>) at 
>> vec.h:142
>> #6  ply_create (init_leaf=init_leaf@entry=3, leaf_prefix_len=16, 
>> ply_base_len=ply_base_len@entry=16, m=<optimized out>) at ip4_mtrie.c:183
>> #7  0x00007feecfbfa6f7 in set_root_leaf (a=0x7fee8f7fa880, m=0x7fee4fa70600) 
>> at ip4_mtrie.c:495
>> #8  ip4_fib_mtrie_route_add (m=0x7fee4fa70600, 
>> dst_address=dst_address@entry=0x7fee8e640668, dst_address_length=<optimized 
>> out>, adj_index=<optimized out>) at ip4_mtrie.c:646
>> #9  0x00007feecff356a8 in ip4_fib_table_fwding_dpo_update (fib=<optimized 
>> out>, addr=addr@entry=0x7fee8e640668, len=<optimized out>, dpo=<optimized 
>> out>) at ip4_fib.c:386
>> #10 0x00007feecff3a554 in fib_table_fwding_dpo_update (fib_index=<optimized 
>> out>, prefix=prefix@entry=0x7fee8e640658, dpo=dpo@entry=0x7fee8e640674) at 
>> fib_table.c:252
>> #11 0x00007feecff45f98 in fib_entry_src_action_install 
>> (fib_entry=0x7fee8e64064c, source=<optimized out>) at fib_entry_src.c:606
>> #12 0x00007feecff43a60 in fib_entry_create (fib_index=fib_index@entry=0, 
>> prefix=prefix@entry=0x7fee8f7faa80, source=source@entry=FIB_SOURCE_CLI, 
>> flags=flags@entry=FIB_ENTRY_FLAG_NONE, paths=paths@entry=0x7fee8f6e0e48) at 
>> fib_entry.c:704
>> #13 0x00007feecff3ac40 in fib_table_entry_path_add2 
>> (fib_index=fib_index@entry=0, prefix=prefix@entry=0x7fee8f7faa80, 
>> source=source@entry=FIB_SOURCE_CLI, flags=flags@entry=FIB_ENTRY_FLAG_NONE, 
>> rpath=rpath@entry=0x7fee8f6e0e48) at fib_table.c:568
>> #14 0x00007feecfc56f5a in vnet_ip_route_cmd (vm=0x7feed05522c0 
>> <vlib_global_main>, main_input=<optimized out>, cmd=<optimized out>) at 
>> lookup.c:523
>> #15 0x00007feed02f0ba1 in vlib_cli_dispatch_sub_commands 
>> (vm=vm@entry=0x7feed05522c0 <vlib_global_main>, cm=cm@entry=0x7feed05524a0 
>> <vlib_global_main+480>, input=input@entry=0x7fee8f7facb0, 
>> parent_command_index=<optimized out>) at cli.c:588
>> #16 0x00007feed02f0f57 in vlib_cli_dispatch_sub_commands 
>> (vm=vm@entry=0x7feed05522c0 <vlib_global_main>, cm=cm@entry=0x7feed05524a0 
>> <vlib_global_main+480>, input=input@entry=0x7fee8f7facb0, 
>> parent_command_index=parent_command_index@entry=0) at cli.c:566
>> #17 0x00007feed02f1190 in vlib_cli_input (vm=vm@entry=0x7feed05522c0 
>> <vlib_global_main>, input=input@entry=0x7fee8f7facb0, 
>> function=function@entry=0x0, function_arg=function_arg@entry=0) at cli.c:662
>> #18 0x00007feed032bc79 in unix_cli_exec (vm=0x7feed05522c0 
>> <vlib_global_main>, input=<optimized out>, cmd=<optimized out>) at cli.c:3002
>> #19 0x00007feed02f0ba1 in vlib_cli_dispatch_sub_commands 
>> (vm=vm@entry=0x7feed05522c0 <vlib_global_main>, cm=cm@entry=0x7feed05524a0 
>> <vlib_global_main+480>, input=input@entry=0x7fee8f7faed0, 
>> parent_command_index=parent_command_index@entry=0) at cli.c:588
>> #20 0x00007feed02f1190 in vlib_cli_input (vm=0x7feed05522c0 
>> <vlib_global_main>, input=input@entry=0x7fee8f7faed0, 
>> function=function@entry=0x7feed0332030 <unix_vlib_cli_output>, 
>> function_arg=function_arg@entry=0) at cli.c:662
>> #21 0x00007feed033319b in unix_cli_process_input (cli_file_index=0, 
>> cm=0x7feed0552160 <unix_cli_main>) at cli.c:2308
>> #22 0x00007feed0336c46 in unix_cli_process (vm=0x7feed05522c0 
>> <vlib_global_main>, rt=0x7fee8f7ea000, f=<optimized out>) at cli.c:2420
>> #23 0x00007feed02fe856 in vlib_process_bootstrap (_a=<optimized out>) at 
>> main.c:1231
>> #24 0x00007feecef57918 in clib_calljmp () at longjmp.S:110
>> #25 0x00007fee9022bc20 in ?? ()
>> #26 0x00007feed02ffb99 in vlib_process_startup (f=0x0, p=0x7fee8f7ea000, 
>> vm=0x7feed05522c0 <vlib_global_main>) at main.c:1253
>> #27 dispatch_process (vm=0x7feed05522c0 <vlib_global_main>, 
>> p=0x7fee8f7ea000, last_time_stamp=0, f=0x0) at main.c:1296
>> #28 0x0000000000000001 in ?? ()
>> #29 0x0000000000000000 in ?? ()
>> (gdb) 
>> 
>> _______________________________________________
>> vpp-dev mailing list
>> vpp-dev@lists.fd.io
>> https://lists.fd.io/mailman/listinfo/vpp-dev
> 
