Hi Lollita

adj_nbr_tables is the database that stores the adjacencies representing the 
peers attached on a given link. It is sized (perhaps overly so) to accommodate 
a large segment on a multi-access link. For your p2p GTPU interfaces you could 
scale it down, since there is only ever one peer on a p2p link. A well-placed 
call to vnet_sw_interface_is_p2p() would be your friend.


From: <vpp-dev@lists.fd.io> on behalf of lollita <lollita....@ericsson.com>
Date: Wednesday, 7 March 2018 at 11:09
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Cc: Kingwel Xie <kingwel....@ericsson.com>, David Yu Z 
<david.z...@ericsson.com>, Terry Zhang Z <terry.z.zh...@ericsson.com>, Brant 
Lin <brant....@ericsson.com>, Jordy You <jordy....@ericsson.com>
Subject: [vpp-dev] route creating performance issue because of bucket and 
memory of adj_nbr_tables


                We have encountered a performance issue when batch-adding 10000 
GTPU tunnels and 10000 routes, each route taking one GTPU tunnel interface as 
its nexthop.
The effect is like executing the following commands:

create gtpu tunnel src dst teid 1 encap-vrf-id 0 decap-next ip4
create gtpu tunnel src dst teid 2 encap-vrf-id 0 decap-next
ip route add table 2 via gtpu_tunnel0
ip route add table 2 via gtpu_tunnel1

                After debugging, we found the time is mainly spent initializing 
adj_nbr_tables[nh_proto][sw_if_index] for "ip route add", in the following call:

BV(clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                      "Adjacency Neighbour table",
                      ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS,
                      ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE);

We changed the third parameter from ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS (64*64) 
to 64, and the fourth parameter from ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE (32<<20, 
i.e. 32 MiB) to 32<<10 (32 KiB). The time cost dropped to about one ninth of 
the original.

The question is: what is adj_nbr_tables used for? Why does it need so many 
buckets and so much memory?

BR/Lollita Liu
