Re: [vpp-dev] route creating performance issue because of bucket and memory of adj_nbr_tables

2018-03-07 Thread Neale Ranns
Hi Lollita

adj_nbr_tables is the database that stores the adjacencies representing the 
peers attached on a given link. It is sized (perhaps overly so) to accommodate 
a large segment on a multi-access link. For your p2p GTPU interfaces you could 
scale it down, since there is only ever one peer on a p2p link. A well placed 
call to vnet_sw_interface_is_p2p() would be your friend.
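
For illustration, a rough, untested sketch of how such a check could be wired 
into the per-interface table initialisation in vnet/adj/adj_nbr.c. The 
adj_nbr_table_init name and the ADJ_NBR_P2P_* constants are invented here for 
the example and are not existing VPP symbols; the sketch assumes the context 
of adj_nbr.c, where adj_nbr_tables and the ADJ_NBR_DEFAULT_* constants are 
defined:

/* Illustrative only: pick a much smaller bihash for point-to-point links. */
#define ADJ_NBR_P2P_HASH_NUM_BUCKETS 64          /* illustrative value */
#define ADJ_NBR_P2P_HASH_MEMORY_SIZE (32 << 10)  /* illustrative value */

static void
adj_nbr_table_init (fib_protocol_t nh_proto, u32 sw_if_index)
{
    u32 num_buckets = ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS;
    uword memory_size = ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE;

    /* a p2p link only ever has one peer, so a tiny table is enough */
    if (vnet_sw_interface_is_p2p (vnet_get_main (), sw_if_index))
    {
        num_buckets = ADJ_NBR_P2P_HASH_NUM_BUCKETS;
        memory_size = ADJ_NBR_P2P_HASH_MEMORY_SIZE;
    }

    BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                           "Adjacency Neighbour table",
                           num_buckets, memory_size);
}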

Regards,
neale


From: on behalf of lollita
Date: Wednesday, 7 March 2018 at 11:09
To: "vpp-dev@lists.fd.io"
Cc: Kingwel Xie, David Yu Z, Terry Zhang Z, Brant Lin, Jordy You
Subject: [vpp-dev] route creating performance issue because of bucket and memory of adj_nbr_tables

Hi,

We have encountered a performance issue when batch-adding GTPU tunnels and 
routes via the API, with each route taking one GTPU tunnel interface as its 
next hop.

The effect is equivalent to executing the following commands:

create gtpu tunnel src 18.1.0.41 dst 18.1.0.31 teid 1 encap-vrf-id 0 decap-next ip4
create gtpu tunnel src 18.1.0.41 dst 18.1.0.31 teid 2 encap-vrf-id 0 decap-next ip4
ip route add 1.1.1.1/32 table 2 via gtpu_tunnel0
ip route add 1.1.1.2/32 table 2 via gtpu_tunnel1

After debugging, we found that the time is mainly spent initializing 
adj_nbr_tables[nh_proto][sw_if_index] for "ip route add", in the following 
function call:

BV(clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                      "Adjacency Neighbour table",
                      ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS,
                      ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE);

We changed the third parameter from ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS (64*64) 
to 64, and the fourth parameter from ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE (32<<20) 
to 32<<10. The time cost was reduced to about one ninth of the original.
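
Concretely, the experiment amounted to hard-coding the smaller values in the 
call quoted above, roughly like this:

BV(clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                      "Adjacency Neighbour table",
                      64,        /* was ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS = 64*64 */
                      32 << 10); /* was ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE = 32<<20 */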

The question is: what is adj_nbr_tables used for, and why does it need so many 
buckets and so much memory?

BR/Lollita Liu



