Hi,

I am experiencing very poor throughput on traffic flowing out of my VPP VM. I 
initially thought it was due to the tap-cli interfaces that relay traffic from 
a kernel interface into VPP, which performs CG-NAT on the way to the outside 
network. However, after running a simpler test, it looks like an inherent 
issue with VPP performance in a virtual machine.

To experiment with a basic topology, I followed Intel's guide to VPP from this 
article 
(https://software.intel.com/en-us/articles/build-a-fast-network-stack-with-vpp-on-an-intel-architecture-server), 
which is set up as:
Server 1 [iperf3 client] <-10G-> Server 2 [VPP] <-10G-> Server 3 [iperf3 server]

The article uses 40G links between the servers, but my hardware is limited to 
10G NICs. My three servers are Dell R620s with dual-socket 8-core Xeon E5-2650 
processors, 64 GB RAM, and Intel 82599ES 10-Gigabit SFI/SFP+ NICs, all running 
Ubuntu 16.04 Server.

I followed the guide. With kernel routing/forwarding I achieved 9.2 Gbps with 
iperf3 (TCP), and after binding the interfaces on Server 2 to VPP I obtained 
9.2-9.3 Gbps, in line with the article.
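
For reference, the bare-metal baseline boiled down to the following commands 
(the address is a placeholder, not the one I actually used):

  # Server 2: kernel forwarding baseline
  sudo sysctl -w net.ipv4.ip_forward=1
  # Server 3: iperf3 server
  iperf3 -s
  # Server 1: iperf3 client (10.10.2.1 stands in for Server 3's address)
  iperf3 -c 10.10.2.1 -t 30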

The issue appears when I replicate the same setup as KVM VMs using 
virt-manager/libvirt. Here is my VM topology on Server 2 (Ubuntu 16.04 Server 
on both the host and the guests):
VM 1 [iperf3 client] <-virtio interface-> VM 2 [VPP] <-virtio interface-> VM 3 
[iperf3 server]
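
The guest NICs are ordinary virtio-net devices defined through libvirt; the 
relevant part of the domain XML looks roughly like this (I am assuming plain 
Linux bridges between the VMs, and the bridge name is a placeholder):

  <interface type='bridge'>
    <!-- 'br-vm12' is a placeholder for the bridge connecting VM 1 and VM 2 -->
    <source bridge='br-vm12'/>
    <model type='virtio'/>
  </interface>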

Replicating the same test with kernel routing in VM 2, I get 16 Gbps with 
iperf3. After binding the interfaces to VPP, however, I get just 1.3 Gbps. 
After applying the optimizations from the Intel article above (hugepages in 
80-vpp.conf, main-core and corelist-workers in startup.conf), throughput was 
still only 1.3-1.4 Gbps.
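
Concretely, the tuning inside the VPP guest looked roughly like this (the core 
numbers are examples only, sized to the guest's vCPUs):

  /etc/sysctl.d/80-vpp.conf:
    # hugepages reserved for VPP
    vm.nr_hugepages=1024
    vm.max_map_count=3096
    vm.hugetlb_shm_group=0
    kernel.shmmax=2147483648

  /etc/vpp/startup.conf:
    cpu {
      # main-core / corelist-workers values here are examples only
      main-core 1
      corelist-workers 2-3
    }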
I later also applied the KVM optimizations from the VPP wiki 
(https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)), but 
I am still seeing no improvement at all.
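
By KVM optimizations I mean things like pinning the guest vCPUs and backing 
guest memory with hugepages in the libvirt domain XML, along these lines (the 
vCPU count and cpuset values are placeholders):

  <vcpu placement='static'>4</vcpu>
  <cputune>
    <!-- cpuset values are placeholders for dedicated host cores -->
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
  </cputune>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>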

In summary,
Without VPP: 16 Gbps throughput
With VPP: 1.3 Gbps throughput

Has anyone else encountered this kind of performance drop, and are there any 
workarounds to fix it?

Regards,
Hamid
