FWIW, not VPP-specific, but based on previous experience: when drops appear at anomalously low PPS levels this early in packet processing, I would check the interframe gap configured on the traffic generator for that test, to make sure you are not generating dense bursts of traffic.
As an illustration: 1000 packets sent within the first millisecond of a second still average out to "only" a thousand PPS when viewed on a per-second basis, yet the instantaneous packet rate imposed on the UUT during that microburst is 1 Mpps. Not sure if this is what is happening here, of course; there might be other issues with the same symptoms...
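To make the arithmetic concrete, a minimal sketch (illustrative numbers
only, nothing measured from this test):

    /* Same per-second average, very different instantaneous rate,
     * depending on how the packets are spaced within the second. */
    #include <stdio.h>

    int main(void)
    {
        const double pkts     = 1000.0; /* packets sent in one second    */
        const double window_s = 1.0;    /* averaging window              */
        const double burst_s  = 0.001;  /* all packets land in first 1ms */

        printf("average rate      : %.0f pps\n", pkts / window_s); /* 1000      */
        printf("instantaneous rate: %.0f pps\n", pkts / burst_s);  /* 1 million */
        return 0;
    }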
--a

> On 17 Jul 2017, at 18:43, Andrew Li (zhaoxili) <zhaox...@cisco.com> wrote:
>
> Sorry, I take back my words on “most likely it’s not related to VPP itself”.
> Since VPP fetches vectors from the NIC’s RX ring, it is possible that the
> NIC runs out of space before VPP can poll enough vectors out.
> But it is still strange in this case, as 8 vectors/call means that VPP is
> not working hard at all, so it should be unlikely that the packets are
> dropped. Please follow the other thread with the VPP experts… Sorry for the
> possible confusion I brought to you.
>
> Thanks,
> Andrew
>
>
> On 17/07/2017, 3:56 PM, "vpp-dev-boun...@lists.fd.io on behalf of Andrew Li
> (zhaoxili)" <vpp-dev-boun...@lists.fd.io on behalf of zhaox...@cisco.com>
> wrote:
>
> Hi Xiang,
>
> My best guess is that the rx-miss field comes from DPDK; quoting their
> datasheet:
>
>     Packets are missed when the receive FIFO has insufficient space to
>     store the incoming packet. This might be caused by insufficient
>     buffers allocated, or because there is insufficient bandwidth on
>     the IO bus.
>
> And most likely it’s not related to VPP itself.
>
> Thanks,
> Andrew
>
>
> On 17/07/2017, 9:59 AM, "vpp-dev-boun...@lists.fd.io on behalf of 陈祥"
> <vpp-dev-boun...@lists.fd.io on behalf of chenxn...@163.com> wrote:
>
> Hi, guys
> Last week I ran some performance tests on VPP with a Spirent TestCenter.
> Below are two groups of running data from two different situations,
> captured with the 'show run' command.
> In both situations VPP drops packets (per the 'rx-miss' counter in the
> 'show int' output). In my opinion, when VPP begins to drop packets, the
> value of Vectors/Call should be high, so situation 1 is easy to
> understand. But the running data of situation 2 confuses me, because
> Vectors/Call is still very low.
> Can anyone explain why, and what is the relationship between VPP
> performance and average vectors per node?
>
> ------------------------------- situation 1 -------------------------------
> Thread 1 vpp_wk_0 (lcore 1)
> Time 225.3, average vectors/node 255.99, last 128 main loops 0.00 per node 0.00
>   vector rates in 2.2244e6, out 2.2242e6, drop 2.0019e2, punt 0.0000e0
>
> Name                            State    Calls       Vectors     Suspends  Clocks  Vectors/Call
> TenGigabitEthernet6/0/1-output  active   1957460     501107294   0         8.67e0  255.99
> TenGigabitEthernet6/0/1-tx      active   1957460     501107294   0         1.08e2  255.99
> dpdk-input                      polling  1227897513  501152389   0         4.65e2  .41
> error-drop                      active   182         45103       0         4.15e1  247.82
> ip4-arp                         active   138         34616       0         3.43e1  250.84
> ip4-drop                        active   44          10487       0         8.91e0  238.34
> ip4-input-no-checksum           active   1957630     501152389   0         3.77e1  255.99
> ip4-lookup                      active   1957630     501152389   0         2.98e1  255.99
> ip4-mimic-classify              active   1957630     501152389   0         2.13e2  255.99
> ip4-rewrite                     active   1957452     501107286   0         2.96e1  255.99
> lookup-ip4-src                  active   1957630     501152389   0         5.07e1  255.99
> snat-in2out-fast                active   1957630     501152389   0         2.56e2  255.99
>
> ------------------------------- situation 2 -------------------------------
> Thread 2 vpp_wk_1 (lcore 2)
> Time 1024.5, average vectors/node 8.57, last 128 main loops 0.00 per node 0.00
>   vector rates in 1.2209e6, out 1.2209e6, drop 0.0000e0, punt 0.0000e0
>
> Name                            State    Calls       Vectors     Suspends  Clocks  Vectors/Call
> TenGigabitEthernet6/0/0-output  active   145982828   1250882588  0         2.93e1  8.57
> TenGigabitEthernet6/0/0-tx      active   145982828   1250882588  0         1.37e2  8.57
> dpdk-input                      polling  9106470948  1250882588  0         1.31e3  .14
> ip4-input-no-checksum           active   145982828   1250882588  0         7.39e1  8.57
> ip4-lookup                      active   145982828   1250882588  0         7.56e1  8.57
> ip4-mimic-classify              active   145982828   1250882588  0         1.31e2  8.57
> ip4-rewrite                     active   145982828   1250882588  0         6.33e1  8.57
> snat-out2in-fast                active   145982828   1250882588  0         4.50e1  8.57
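For reference, the Vectors/Call column in the quoted output is simply
Vectors divided by Calls, and the per-worker packet rate follows from the
Time line; a minimal sketch re-deriving situation 2's numbers:

    /* Re-derive the headline figures from the quoted "show run"
     * output (situation 2, node ip4-lookup). */
    #include <stdio.h>

    int main(void)
    {
        const double calls   = 145982828.0;  /* Calls column          */
        const double vectors = 1250882588.0; /* Vectors column        */
        const double time_s  = 1024.5;       /* "Time" line, seconds  */

        printf("vectors/call: %.2f\n", vectors / calls);      /* ~8.57 */
        printf("input rate  : %.4e pps\n", vectors / time_s); /* ~1.22e6,
                                           cf. "in 1.2209e6" above    */
        return 0;
    }

With dpdk-input averaging only .14 vectors/call, most polls find an empty
RX ring, which fits Andrew's observation that the worker is far from
saturated even while rx-miss increments; that is one more hint that the
drops may come from short bursts rather than sustained load.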