Hi Lollita,

adj_nbr_tables is the database that stores the adjacencies representing the peers attached on a given link. It is sized (perhaps overly so) to accommodate a large segment on a multi-access link. For your p2p GTPU interfaces you could scale it down, since there is only ever one peer on a p2p link. A well-placed call to vnet_sw_interface_is_p2p() would be your friend.

Regards,
neale
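A minimal sketch of that suggestion, assuming the table is sized in a helper where it is first initialized (the wrapper function name and integration point here are assumptions for illustration, not the actual adj_nbr.c code):

    /* Sketch: size the per-interface adjacency hash according to the
     * interface type. On a p2p link there is only ever one peer, so a
     * single bucket and a few KB of heap are enough. */
    static void
    adj_nbr_table_init (fib_protocol_t nh_proto, u32 sw_if_index)
    {
      u32 buckets = ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS;
      uword memory = ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE;

      if (vnet_sw_interface_is_p2p (vnet_get_main (), sw_if_index))
        {
          /* hypothetical reduced sizing for p2p links */
          buckets = 1;
          memory = 32 << 10;
        }

      BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                             "Adjacency Neighbour table", buckets, memory);
    }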
From: <email@example.com> on behalf of lollita <lollita....@ericsson.com>
Date: Wednesday, 7 March 2018 at 11:09
To: "firstname.lastname@example.org" <email@example.com>
Cc: Kingwel Xie <kingwel....@ericsson.com>, David Yu Z <david.z...@ericsson.com>, Terry Zhang Z <terry.z.zh...@ericsson.com>, Brant Lin <brant....@ericsson.com>, Jordy You <jordy....@ericsson.com>
Subject: [vpp-dev] route creating performance issue because of bucket and memory of adj_nbr_tables

Hi,

We have encountered a performance issue when batch-adding 10000 GTPU tunnels and 10000 routes via the API, each route taking one GTPU tunnel interface as its nexthop. The effect is like executing the following commands:

    create gtpu tunnel src 184.108.40.206 dst 220.127.116.11 teid 1 encap-vrf-id 0 decap-next ip4
    create gtpu tunnel src 18.104.22.168 dst 22.214.171.124 teid 2 encap-vrf-id 0 decap-next ip4
    ip route add 126.96.36.199/32 table 2 via gtpu_tunnel0
    ip route add 188.8.131.52/32 table 2 via gtpu_tunnel1

After debugging, we found the time is mainly spent initializing adj_nbr_tables[nh_proto][sw_if_index] for each "ip route add", in the following function call:

    BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                           "Adjacency Neighbour table",
                           ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS,
                           ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE);

We changed the third parameter from ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS (64*64) to 64, and the fourth parameter from ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE (32<<20) to 32<<10, and the time cost dropped to about one ninth of the original.

The question is: what is adj_nbr_tables used for? Why does it need so many buckets and so much memory?

BR/Lollita Liu
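For reference, the change described in the mail amounts to the following (a sketch against the call quoted above, using the reduced values the mail reports testing):

    /* Reduced sizing as tested above: 64 buckets instead of
     * ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS (64*64), and 32<<10 bytes of heap
     * instead of ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE (32<<20). */
    BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                           "Adjacency Neighbour table",
                           64 /* buckets */,
                           32 << 10 /* memory, bytes */);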