Hi everyone,
I am measuring the performance of an L2 xconnect between two sub-interfaces. The performance-related questions and answers already posted in this community were very helpful in getting this far. However, I am still seeing rx and tx queue drops ("rx-miss" and "tx-error").
Here is the config:
set interface l2 xconnect TenGigabitEthernet3/0/0.1 TenGigabitEthernet3/0/1.1
set interface l2 xconnect TenGigabitEthernet3/0/1.1 TenGigabitEthernet3/0/0.1
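In case it matters, the sub-interfaces themselves are plain dot1q sub-interfaces, created roughly like this (the tag value 1 below is a placeholder for the actual VLAN):

comment { dot1q tag 1 is a placeholder for the actual VLAN }
create sub-interfaces TenGigabitEthernet3/0/0 1
create sub-interfaces TenGigabitEthernet3/0/1 1
set interface state TenGigabitEthernet3/0/0 up
set interface state TenGigabitEthernet3/0/1 up
set interface state TenGigabitEthernet3/0/0.1 up
set interface state TenGigabitEthernet3/0/1.1 up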
I am sending traffic at 70% of line rate (10G) in both directions and see no drops; I ran the traffic for 30 minutes without a single drop. Below are the runtime stats. CPU affinity is in place and "vpp_wk_0" is on dedicated logical core 9. However, the average "vectors/node" is 26.03, where I was expecting to see 255.99. Is 255.99 something that only shows up under high bursts of traffic? I may be missing something here and would like to understand what that might be.
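If I am reading the counters right, the 26-33 average just reflects the offered load rather than a ceiling. My rough arithmetic from the stats below (please correct me if this is wrong):

\[
\frac{613730191\ \text{vectors}}{18857661\ \text{calls}} \approx 32.5\ \frac{\text{pkts}}{\text{poll}},
\qquad
\frac{18857661\ \text{calls}}{531.8\ \text{s}} \approx 3.55 \times 10^{4}\ \frac{\text{polls}}{\text{s}},
\]

and \(32.5 \times 3.55 \times 10^{4} \approx 1.15 \times 10^{6}\) pps, which matches the reported vector rate. On that reading, vectors/node would only approach 256 once the loop can no longer drain the rx rings between polls.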
Thread 1 vpp_wk_0 (lcore 9)
Time 531.8, average vectors/node 26.03, last 128 main loops 1.31 per node 21.00
vector rates in 1.1539e6, out 1.1539e6, drop 0.0000e0, punt 0.0000e0
Name                             State     Calls       Vectors     Suspends  Clocks   Vectors/Call
TenGigabitEthernet3/0/0-output   active    18857661    306865019   0         1.54e2   16.27
TenGigabitEthernet3/0/0-tx       active    18857661    306865019   0         2.62e2   16.27
TenGigabitEthernet3/0/1-output   active    18857661    306865172   0         1.63e2   16.27
TenGigabitEthernet3/0/1-tx       active    18857661    306865172   0         2.67e2   16.27
dpdk-input                       polling   18857661    613730191   0         4.48e2   32.55
ethernet-input                   active    18864470    613730191   0         6.92e2   32.53
l2-input                         active    18864470    613730191   0         1.15e2   32.53
l2-output                        active    18864470    613730191   0         1.31e2   32.53
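(For what it is worth, the stats above were sampled after clearing the counters, so the averages cover only the measurement window:)

clear runtime
show runtime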
There are two rx queues and two tx queues assigned to each of the 10G ports, with a queue depth of 1024 (the relevant startup.conf snippet is sketched after the placement listing below). The queue placement is as follows:
Thread 1 (vpp_wk_0):
node dpdk-input:
TenGigabitEthernet3/0/0 queue 0 (polling)
TenGigabitEthernet3/0/0 queue 1 (polling)
TenGigabitEthernet3/0/1 queue 0 (polling)
TenGigabitEthernet3/0/1 queue 1 (polling)
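The queue counts and depths above come from the dpdk section of startup.conf, roughly like this (the PCI addresses are placeholders for my two ports):

dpdk {
  # placeholder PCI addresses for the two 10G ports
  dev 0000:03:00.0 {
    num-rx-queues 2
    num-tx-queues 2
    num-rx-desc 1024
    num-tx-desc 1024
  }
  dev 0000:03:00.1 {
    num-rx-queues 2
    num-tx-queues 2
    num-rx-desc 1024
    num-tx-desc 1024
  }
}

One thing I have been wondering is whether simply raising num-rx-desc (e.g. to 2048) would absorb the bursts that cause rx-miss.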
Now, when I increase the rate to 75% of 10G, I see drops counted as "rx-miss":
DBGvpp# sho int
Name                       Idx  State   MTU (L3/IP4/IP6/MPLS)   Counter      Count
TenGigabitEthernet3/0/0     1    up         9000/0/0/0          rx packets      26235935
                                                                rx bytes     39248958760
                                                                tx packets      26236104
                                                                tx bytes     39249211584
                                                                rx-miss              697
TenGigabitEthernet3/0/0.1   3    up         0/0/0/0             rx packets      26235935
                                                                rx bytes     39248958760
                                                                tx packets      26236104
                                                                tx bytes     39249211584
TenGigabitEthernet3/0/1     2    up         9000/0/0/0          rx packets      26236104
                                                                rx bytes     39249211584
                                                                tx packets      26235935
                                                                tx bytes     39248958760
                                                                rx-miss              711
TenGigabitEthernet3/0/1.1   4    up         0/0/0/0             rx packets      26236104
                                                                rx bytes     39249211584
                                                                tx packets      26235935
                                                                tx bytes     39248958760
local0                      0   down        0/0/0/0
Here are the runtime stats when that happens:
Thread 1 vpp_wk_0 (lcore 9)
Time 59.0, average vectors/node 34.58, last 128 main loops 1.69 per node 27.00
vector rates in 1.2365e6, out 1.2365e6, drop 0.0000e0, punt 0.0000e0
Name                             State     Calls     Vectors    Suspends  Clocks   Vectors/Call
TenGigabitEthernet3/0/0-output   active    1682608   36482575   0         1.33e2   21.68
TenGigabitEthernet3/0/0-tx       active    1682608   36482575   0         2.48e2   21.68
TenGigabitEthernet3/0/1-output   active    1682608   36482560   0         1.42e2   21.68
TenGigabitEthernet3/0/1-tx       active    1682608   36482560   0         2.53e2   21.68
dpdk-input                       polling   1682608   72965135   0         4.11e2   43.36
ethernet-input                   active    1691495   72965135   0         6.77e2   43.14
l2-input                         active    1691495   72965135   0         1.08e2   43.14
l2-output                        active    1691495   72965135   0         1.07e2   43.14
Would increasing the number of cores/worker threads be of any help? Or, given that vectors/node is only 34.58, does that mean the worker still has headroom to process more frames?
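If adding a worker is the way to go, I assume it would look something like this in startup.conf (core numbers are placeholders based on my current layout, with core 10 assumed free):

cpu {
  # placeholder core numbers; core 9 is the current worker, core 10 assumed free
  main-core 8
  corelist-workers 9-10
}

and then, after restart, moving one rx queue per port onto the new worker:

comment { assumes worker 1 is the new worker on core 10 }
set interface rx-placement TenGigabitEthernet3/0/0 queue 1 worker 1
set interface rx-placement TenGigabitEthernet3/0/1 queue 1 worker 1

Does that sound like the right approach?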
Also, there are two rx queues configured. Is there a command to check whether they are being serviced equally? I would like to understand how the load is distributed across the two rx queues and two tx queues; what I have found so far is below.
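From what I can tell, "show hardware-interfaces" dumps the DPDK extended stats, which on NICs/drivers that expose per-queue counters should show whether both rx queues are actually being hit, and "show interface rx-placement" shows the queue-to-worker mapping:

show interface rx-placement
show hardware-interfaces TenGigabitEthernet3/0/0 detail

Also, if I understand RSS correctly, the rx-queue split depends on hashing of the test flows, so a single-flow stream would land entirely on one queue; is that a plausible cause of rx-miss at only 75% of line rate?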
Any help in determining why these drops are happening would be greatly appreciated.
Thanks,
Vijay