Hi Andreas,

From your description, I don't think you are doing anything wrong. You are 
probably breaking new ground on the number of interfaces and routes through 
those interfaces.

There is a per-interface adjacency DB; this is what is being allocated in the 
backtrace below. We size it for a multi-access interface that potentially has 
many peers. I assume, though, that your interface is P2P; you could therefore 
check this before allocating the adj DB and size it much smaller (since there 
will be only one peer). Let me know how that goes.

Thanks,
neale


-----Original Message-----
From: "Dave Barach (dbarach)" <dbar...@cisco.com>
Date: Friday, May 10, 2019 at 14:14
To: Andreas Schultz <andreas.schu...@travelping.com>, "vpp-dev@lists.fd.io" 
<vpp-dev@lists.fd.io>, "Neale Ranns (nranns)" <nra...@cisco.com>
Subject: RE: [vpp-dev] finding a virtual memory leak in VPP

    Copying Neale. He may not respond immediately since he's on PTO until May 
13th.
    
    HTH... Dave
    
    -----Original Message-----
    From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Andreas Schultz
    Sent: Friday, May 10, 2019 5:54 AM
    To: vpp-dev@lists.fd.io
    Subject: Re: [vpp-dev] finding a virtual memory leak in VPP
    
    Am Do., 9. Mai 2019 um 19:59 Uhr schrieb Dave Barach (dbarach)
    <dbar...@cisco.com>:
    >
    > $ cat /proc/<pid>/maps
    >
    >
    >
    > You could also start vpp under gdb and set a breakpoint in mmap...
    
    Hmm, that led to something unexpected:
    
    Breakpoint 1, __GI___mmap64 (addr=0x0, len=33554432, prot=3, flags=34, 
fd=-1, offset=0) at ../sysdeps/unix/sysv/linux/mmap64.c:44
    44 in ../sysdeps/unix/sysv/linux/mmap64.c
    #0  __GI___mmap64 (addr=0x0, len=33554432, prot=3, flags=34, fd=-1,
    offset=0) at ../sysdeps/unix/sysv/linux/mmap64.c:44
    #1  0x00007ffff74c8cee in clib_mem_vm_alloc (size=33554432) at
    /usr/src/vpp/src/vppinfra/mem.h:317
    #2  0x00007ffff74ca431 in clib_bihash_init_24_8 (h=0x7fffb5ff4a80,
    name=0x7ffff7759638 "Adjacency Neighbour table", nbuckets=4096,
    memory_size=33554432) at
    /usr/src/vpp/src/vppinfra/bihash_template.c:55
    #3  0x00007ffff752c97a in adj_nbr_insert (nh_proto=FIB_PROTOCOL_IP4, 
link_type=VNET_LINK_IP4, nh_addr=0x7ffff76d29e0 <zero_addr>, sw_if_index=7, 
adj_index=7) at /usr/src/vpp/src/vnet/adj/adj_nbr.c:68
    #4  0x00007ffff752ce6f in adj_nbr_alloc (nh_proto=FIB_PROTOCOL_IP4, 
link_type=VNET_LINK_IP4, nh_addr=0x7ffff76d29e0 <zero_addr>,
    sw_if_index=7) at /usr/src/vpp/src/vnet/adj/adj_nbr.c:187
    #5  0x00007ffff752cf74 in adj_nbr_add_or_lock (nh_proto=FIB_PROTOCOL_IP4, 
link_type=VNET_LINK_IP4,
    nh_addr=0x7ffff76d29e0 <zero_addr>, sw_if_index=7) at
    /usr/src/vpp/src/vnet/adj/adj_nbr.c:233
    #6  0x00007ffff7508d3e in fib_path_attached_next_hop_get_adj
    (path=0x7fffb5f15f34, link=VNET_LINK_IP4) at
    /usr/src/vpp/src/vnet/fib/fib_path.c:653
    #7  0x00007ffff7508dd6 in fib_path_attached_next_hop_set
    (path=0x7fffb5f15f34) at /usr/src/vpp/src/vnet/fib/fib_path.c:674
    #8  0x00007ffff750b6de in fib_path_resolve (path_index=51) at
    /usr/src/vpp/src/vnet/fib/fib_path.c:1883
    #9  0x00007ffff7504c4b in fib_path_list_resolve
    (path_list=0x7fffb5f14634) at
    /usr/src/vpp/src/vnet/fib/fib_path_list.c:578
    #10 0x00007ffff750515a in fib_path_list_create 
(flags=FIB_PATH_LIST_FLAG_SHARED, rpaths=0x7fffb5ff495c) at
    /usr/src/vpp/src/vnet/fib/fib_path_list.c:736
    #11 0x00007ffff74f94bc in fib_entry_src_api_path_swap (src=0x7fffb5ff49bc, 
entry=0x7fffb5ff0cb4, pl_flags=FIB_PATH_LIST_FLAG_NONE, rpaths=0x7fffb5ff495c) 
at
    /usr/src/vpp/src/vnet/fib/fib_entry_src_api.c:47
    #12 0x00007ffff74f5fd8 in fib_entry_src_action_path_swap 
(fib_entry=0x7fffb5ff0cb4, source=FIB_SOURCE_PLUGIN_HI, 
flags=FIB_ENTRY_FLAG_ATTACHED, rpaths=0x7fffb5ff495c) at
    /usr/src/vpp/src/vnet/fib/fib_entry_src.c:1658
    #13 0x00007ffff74ea527 in fib_entry_create (fib_index=2, 
prefix=0x7fffb4bd9870, source=FIB_SOURCE_PLUGIN_HI, 
flags=FIB_ENTRY_FLAG_ATTACHED, paths=0x7fffb5ff495c) at
    /usr/src/vpp/src/vnet/fib/fib_entry.c:736
    #14 0x00007ffff74d4bfb in fib_table_entry_path_add2 (fib_index=2, 
prefix=0x7fffb4bd9870, source=FIB_SOURCE_PLUGIN_HI, 
flags=FIB_ENTRY_FLAG_ATTACHED, rpath=0x7fffb5ff495c) at
    /usr/src/vpp/src/vnet/fib/fib_table.c:576
    #15 0x00007ffff74d4aeb in fib_table_entry_path_add (fib_index=2, 
prefix=0x7fffb4bd9870, source=FIB_SOURCE_PLUGIN_HI, 
flags=FIB_ENTRY_FLAG_ATTACHED, next_hop_proto=DPO_PROTO_IP4, next_hop=0x0, 
next_hop_sw_if_index=7, next_hop_fib_index=4294967295, next_hop_weight=1, 
next_hop_labels=0x0,
    path_flags=FIB_ROUTE_PATH_FLAG_NONE) at
    /usr/src/vpp/src/vnet/fib/fib_table.c:548
    
    My plugin uses an interface per session (like other tunnel implementations 
do) and then adds a route per session pointing into that interface (the 
fib_table_entry_path_add).
    In the end this results in a new pool instance for the adjacency, and a 
pool allocates some space by default. So every session ends up instantiating a 
new pool through the fib entry.
    
    It is no surprise that this doesn't scale beyond a few thousand sessions.
    
    I know I can rewrite my tunnel code to use a midchain adjacency.
    But is that the only choice, or am I doing something wrong with the 
interface and fib entry?
    
    How do the other tunnel implementations that use fib entries scale?
    
    Many thanks
    Andreas
    
    >
    > D.
    >
    > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Andreas
    > Schultz
    > Sent: Thursday, May 9, 2019 12:18 PM
    > To: vpp-dev@lists.fd.io
    > Subject: [vpp-dev] finding a virtual memory leak in VPP
    >
    > Hi,
    >
    > Something in VPP (most likely my UPF/GTP plugin) is causing abnormally 
high virtual memory consumption. `show memory`, with and without memory-trace 
enabled, cannot explain it.
    >
    > With some 10k sessions I see as much as 1 TB of virtual memory usage, but 
only about 200 MB or so of resident memory. `show memory` also only reports 
about 200 MB.
    >
    > My best guess is that something requests virtual memory pages from the OS 
and does not return them.
    >
    > Any hint on how to track this down? What in VPP could request that much 
virtual memory without actually using it?
    >
    > Many thanks
    >
    > Andreas
    >
    > --
    > Dipl.-Inform. Andreas Schultz
    >
    > ----------------------- enabling your networks ----------------------
    > Travelping GmbH                     Phone:  +49-391-81 90 99 0
    > Roentgenstr. 13                     Fax:    +49-391-81 90 99 299
    > 39108 Magdeburg                     Email:  i...@travelping.com
    > GERMANY                             Web:    http://www.travelping.com
    >
    > Company Registration: Amtsgericht Stendal        Reg No.:   HRB 10578
    >
    > Geschaeftsfuehrer: Holger Winkelmann          VAT ID No.: DE236673780
    > ---------------------------------------------------------------------
    
    
    
    
