Hi Shankar, 

What results are you getting, and how big is the difference? Also, I might've 
missed it, but what VPP version was used in these tests? 

Regarding optimizations:
- show hardware: will tell you the NUMA node for your NIC (if you have 
multiple NUMA nodes) and the rx/tx descriptor ring sizes. For TCP it's 
typically preferable to use a smaller number of descriptors, say 256. These 
are configurable per NIC under the dpdk stanza (see the startup.conf sketch 
after this list).
- show run: check loops/s and the vector rates per node. If loops/s < 10k, or 
the vector rate for any node is > 100, the results should be inspected further 
(see the inspection commands after this list). 
- show error: if TCP reports lots of out-of-order enqueues or lots of dupacks, 
that's a sign that something is dropping packets. It might be the NIC if you 
also see interface tx errors. 
- cpu pinning: make sure your VPP worker runs on the same NUMA node as your 
NIC (if need be). Pin iperf to the same NUMA node as VPP's worker, but make 
sure they don't share a CPU (see the pinning sketch after this list).
- fifos: you're currently using 800 kB fifos. Consider raising those to 4 MB, 
although for low latency that shouldn't matter much (see the vcl.conf sketch 
after this list). 
- ena specific: check whether write combining is enabled (see the module-load 
sketch after this list): 
https://github.com/amzn/amzn-drivers/tree/master/userspace/dpdk#622-vfio-pci 
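
To illustrate the descriptor settings, a minimal startup.conf sketch (the PCI 
address below is a placeholder for your NIC):

    dpdk {
      dev 0000:00:06.0 {
        num-rx-desc 256
        num-tx-desc 256
      }
    }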
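
For the show run / show error checks, something along these lines, clearing 
the counters first so they reflect only the iperf run:

    vppctl clear run
    vppctl clear errors
    # ... run the iperf test ...
    vppctl show run      # check loops/s and per-node vector rates
    vppctl show errors   # look for out-of-order enqueues, dupacks, tx errors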
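
For the pinning, a sketch assuming your NIC is on NUMA 0 and cores 1-3 sit on 
that node (adjust to your actual topology):

    # startup.conf: keep vpp's main and worker threads on the nic's numa node
    cpu {
      main-core 1
      corelist-workers 2
    }

    # run iperf on the same numa node but not on the worker's core
    taskset -c 3 iperf3 -c <server-ip>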
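
For the fifos, assuming you're running iperf through VCL, raising the sizes to 
~4 MB would look like this in vcl.conf:

    vcl {
      rx-fifo-size 4000000
      tx-fifo-size 4000000
    }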
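
For write combining, if I remember right the patched vfio-pci from the 
amzn-drivers repo takes an enable_wc parameter when loaded; the exact build 
steps for your kernel are in the README linked above:

    # after building the patched vfio-pci from the amzn-drivers repo
    sudo insmod vfio-pci.ko enable_wc=1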

Regards,
Florin

> On Mar 23, 2022, at 9:47 AM, Shankar Raju <shankar.r...@nasdaq.com> wrote:
> 
> Hi Florin,
> 
> Disabling checksums worked. Now iperf is able to send and receive traffic, 
> but the transfer rate and bitrate seem to be lower when using VPP. Could 
> you please let me know the right tuning params for getting better performance 
> with VPP?
> 
> Thanks  
> 
> 
