Re: [ovs-discuss] OVS-DPDK giving lower throughput than Native OVS

2019-05-07 Thread Harsh Gondaliya
So is there any way to get TSO working with OVS-DPDK? Are there any patches
that can be applied? I followed this Intel page, and its author was able to get
about 2.5x higher throughput with OVS-DPDK compared to native OVS:
https://software.intel.com/en-us/articles/set-up-open-vswitch-with-dpdk-on-ubuntu-server
In fact, this topic has been discussed quite a lot in the past and many patches
have been posted. Are those patches already included in OVS 2.11, or do we need
to apply them separately?

Being a student and a beginner with Linux itself, I do not know how these
patches work or how to apply them.
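
From what I have gathered so far, the general workflow seems to be to apply the
patch to an OVS source checkout and rebuild against DPDK, roughly like this
(the patch file name below is only a placeholder); please correct me if this is
wrong:

  cd ovs                                # OVS 2.11 source tree
  git apply 0001-userspace-tso.patch    # placeholder name; use the patch posted to the list
  ./boot.sh
  ./configure --with-dpdk=$DPDK_BUILD   # $DPDK_BUILD points at the DPDK 18.11 build directory
  make -j"$(nproc)"
  sudo make install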

>> I think the reason for the lower throughput in the OVS-DPDK scenario is that
>> TSO (GSO) and GRO are not supported in OVS-DPDK, so the packets between the VMs
>> are limited to the MTU of the vhost-user ports.
>>
>
> And the kernel-based OVS supports TSO (GSO), so the TCP packets can be up
> to 64 KB and the iperf throughput between two VMs is much higher.
>
>
> 徐斌斌 Xu Binbin
> E: xu.binb...@zte.com.cn
>
> Original mail
> From: HarshGondaliya
> To: ovs-discuss;
> Date: 2019-04-12 15:34
> Subject: [ovs-discuss] OVS-DPDK giving lower throughput than Native OVS
>
> I had connected two VMs to a native OVS bridge and got an iperf result of
> around *35-37 Gbps*.
> Now, when I perform similar tests with two VMs connected to an OVS-DPDK
> bridge using vhost-user ports, I get iperf results of around *6-6.5 Gbps*.
> I am unable to understand the reason for such low throughput in the case of
> OVS-DPDK. I am using OVS version 2.11.0.
>
> I have 4 physical cores on my CPU (i.e., 8 logical cores) and 16 GB of system
> memory. I have allocated 6 GB for the hugepage pool: 2 GB of it was given to
> the OVS socket-mem option and the remaining 4 GB to the virtual machines for
> memory backing (2 GB per VM). These are some of the configurations of my
> OVS-DPDK bridge:
>
> root@dpdk-OptiPlex-5040:/home/dpdk# ovs-vswitchd
> unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
> 2019-04-12T07:01:00Z|1|ovs_numa|INFO|Discovered 8 CPU cores on NUMA
> node 0
> 2019-04-12T07:01:00Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes and 8 CPU
> cores
> 2019-04-12T07:01:00Z|3|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
> connecting...
> 2019-04-12T07:01:00Z|4|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
> connected
> 2019-04-12T07:01:00Z|5|dpdk|INFO|Using DPDK 18.11.0
> 2019-04-12T07:01:00Z|6|dpdk|INFO|DPDK Enabled - initializing...
> 2019-04-12T07:01:00Z|7|dpdk|INFO|No vhost-sock-dir provided -
> defaulting to /usr/local/var/run/openvswitch
> 2019-04-12T07:01:00Z|8|dpdk|INFO|IOMMU support for vhost-user-client
> disabled.
> 2019-04-12T07:01:00Z|9|dpdk|INFO|Per port memory for DPDK devices
> disabled.
> 2019-04-12T07:01:00Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0xA
> --socket-mem 2048 --socket-limit 2048.
> 2019-04-12T07:01:00Z|00011|dpdk|INFO|EAL: Detected 8 lcore(s)
> 2019-04-12T07:01:00Z|00012|dpdk|INFO|EAL: Detected 1 NUMA nodes
> 2019-04-12T07:01:00Z|00013|dpdk|INFO|EAL: Multi-process socket
> /var/run/dpdk/rte/mp_socket
> 2019-04-12T07:01:00Z|00014|dpdk|INFO|EAL: Probing VFIO support...
> 2019-04-12T07:01:00Z|00015|dpdk|INFO|EAL: PCI device :00:1f.6 on NUMA
> socket -1
> 2019-04-12T07:01:00Z|00016|dpdk|WARN|EAL:   Invalid NUMA socket, default
> to 0
> 2019-04-12T07:01:00Z|00017|dpdk|INFO|EAL:   probe driver: 8086:15b8
> net_e1000_em
> 2019-04-12T07:01:00Z|00018|dpdk|INFO|DPDK Enabled - initialized
> 2019-04-12T07:01:00Z|00019|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports recirculation
> 2019-04-12T07:01:00Z|00020|ofproto_dpif|INFO|netdev@ovs-netdev: VLAN
> header stack length probed as 1
> 2019-04-12T07:01:00Z|00021|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS
> label stack length probed as 3
> 2019-04-12T07:01:00Z|00022|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports truncate action
> 2019-04-12T07:01:00Z|00023|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports unique flow ids
> 2019-04-12T07:01:00Z|00024|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports clone action
> 2019-04-12T07:01:00Z|00025|ofproto_dpif|INFO|netdev@ovs-netdev: Max
> sample nesting level probed as 10
> 2019-04-12T07:01:00Z|00026|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports eventmask in conntrack action
> 2019-04-12T07:01:00Z|00027|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports ct_clear action
> 2019-04-12T07:01:00Z|00028|ofproto_dpif|INFO|netdev@ovs-netdev: Max
> dp_hash 
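
For reference, an allocation like the one described in the quoted message is
typically expressed through hugepage and ovs-vsctl settings of roughly this
form (a sketch only; 2 MB hugepages are assumed and the PMD core mask below is
just an example, not taken from the log):

  # 6 GB of hugepages, assuming 2 MB pages (3072 x 2 MB)
  echo 3072 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

  # OVS-DPDK settings; dpdk-lcore-mask matches the "-c 0xA" visible in the EAL args
  ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
  ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048"
  ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xA
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6   # example mask only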

Re: [ovs-discuss] OVS-DPDK giving lower throughput than Native OVS

2019-04-14 Thread xu.binbin1
I think the reason for the lower throughput in the OVS-DPDK scenario is that
TSO (GSO) and GRO are not supported in OVS-DPDK, so the packets between the VMs
are limited to the MTU of the vhost-user ports.

And the kernel-based OVS supports TSO (GSO), so the TCP packets can be up to
64 KB and the iperf throughput between two VMs is much higher.
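
One way to see this from inside the guests is to check the offload flags on the
virtio NICs, and a partial workaround until TSO support is available is to
raise the MTU of the vhost-user ports (and of the guest interfaces) so that
each packet carries more payload. A rough sketch, with placeholder interface
and port names:

  # inside a guest: TSO/GRO state of the virtio NIC (eth0 is a placeholder)
  ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-receive-offload'

  # on the host: request a larger MTU on the vhost-user ports (names are placeholders)
  ovs-vsctl set Interface vhost-user1 mtu_request=9000
  ovs-vsctl set Interface vhost-user2 mtu_request=9000

  # inside each guest: match the MTU
  ip link set dev eth0 mtu 9000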

徐斌斌 Xu Binbin
Software Development Engineer
NIV Nanjing Dept. IV / Wireless Product R&D Institute / Wireless Product Operation
ZTE Corporation, 4/F, R Building, No. 6 Huashen Road, Yuhuatai District, Nanjing, P.R. China
M: +86 13851437610
E: xu.binb...@zte.com.cn
www.zte.com.cn

Original mail
From: HarshGondaliya
To: ovs-discuss;
Date: 2019-04-12 15:34
Subject: [ovs-discuss] OVS-DPDK giving lower throughput than Native OVS


I had connected two VMs to a native OVS bridge and got an iperf result of
around 35-37 Gbps. Now, when I perform similar tests with two VMs connected to
an OVS-DPDK bridge using vhost-user ports, I get iperf results of around
6-6.5 Gbps.
I am unable to understand the reason for such low throughput in the case of
OVS-DPDK. I am using OVS version 2.11.0.
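
A minimal example of such a VM-to-VM run (iperf3 here; the server address and
duration are placeholders):

  # on VM1, the server
  iperf3 -s
  # on VM2, the client; 192.168.1.10 and the 30-second duration are placeholders
  iperf3 -c 192.168.1.10 -t 30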


I have 4 physical cores on my CPU (i.e., 8 logical cores) and 16 GB of system
memory. I have allocated 6 GB for the hugepage pool: 2 GB of it was given to
the OVS socket-mem option and the remaining 4 GB to the virtual machines for
memory backing (2 GB per VM). These are some of the configurations of my
OVS-DPDK bridge:


root@dpdk-OptiPlex-5040:/home/dpdk# ovs-vswitchd 
unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
2019-04-12T07:01:00Z|1|ovs_numa|INFO|Discovered 8 CPU cores on NUMA node 0
2019-04-12T07:01:00Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes and 8 CPU cores
2019-04-12T07:01:00Z|3|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
 connecting...
2019-04-12T07:01:00Z|4|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
 connected
2019-04-12T07:01:00Z|5|dpdk|INFO|Using DPDK 18.11.0
2019-04-12T07:01:00Z|6|dpdk|INFO|DPDK Enabled - initializing...
2019-04-12T07:01:00Z|7|dpdk|INFO|No vhost-sock-dir provided - defaulting to 
/usr/local/var/run/openvswitch
2019-04-12T07:01:00Z|8|dpdk|INFO|IOMMU support for vhost-user-client 
disabled.
2019-04-12T07:01:00Z|9|dpdk|INFO|Per port memory for DPDK devices disabled.
2019-04-12T07:01:00Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0xA --socket-mem 
2048 --socket-limit 2048.
2019-04-12T07:01:00Z|00011|dpdk|INFO|EAL: Detected 8 lcore(s)
2019-04-12T07:01:00Z|00012|dpdk|INFO|EAL: Detected 1 NUMA nodes
2019-04-12T07:01:00Z|00013|dpdk|INFO|EAL: Multi-process socket 
/var/run/dpdk/rte/mp_socket
2019-04-12T07:01:00Z|00014|dpdk|INFO|EAL: Probing VFIO support...
2019-04-12T07:01:00Z|00015|dpdk|INFO|EAL: PCI device :00:1f.6 on NUMA 
socket -1
2019-04-12T07:01:00Z|00016|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
2019-04-12T07:01:00Z|00017|dpdk|INFO|EAL:   probe driver: 8086:15b8 net_e1000_em
2019-04-12T07:01:00Z|00018|dpdk|INFO|DPDK Enabled - initialized
2019-04-12T07:01:00Z|00019|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports recirculation
2019-04-12T07:01:00Z|00020|ofproto_dpif|INFO|netdev@ovs-netdev: VLAN header 
stack length probed as 1
2019-04-12T07:01:00Z|00021|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label 
stack length probed as 3
2019-04-12T07:01:00Z|00022|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports truncate action
2019-04-12T07:01:00Z|00023|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports unique flow ids
2019-04-12T07:01:00Z|00024|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports clone action
2019-04-12T07:01:00Z|00025|ofproto_dpif|INFO|netdev@ovs-netdev: Max sample 
nesting level probed as 10
2019-04-12T07:01:00Z|00026|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports eventmask in conntrack action
2019-04-12T07:01:00Z|00027|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_clear action
2019-04-12T07:01:00Z|00028|ofproto_dpif|INFO|netdev@ovs-netdev: Max dp_hash 
algorithm probed to be 1
2019-04-12T07:01:00Z|00029|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_state
2019-04-12T07:01:00Z|00030|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_zone
2019-04-12T07:01:00Z|00031|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_mark
2019-04-12T07:01:00Z|00032|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_label
2019-04-12T07:01:00Z|00033|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_state_nat
2019-04-12T07:01:00Z|00034|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_orig_tuple
2019-04-12T07:01:00Z|00035|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_orig_tuple6
2019-04-12T07:01:00Z|00036|dpdk|INFO|VHOST_CONFIG: vhost-user