I too observed similarly lower performance with DPDK, and what I realised was that it comes down to TSO.

When a VM is launched with the virtio-net driver, most segmentation offload settings are turned on, notably TSO. Because of this, the TCP stack can push segments much larger than what an MTU of 1500 would otherwise allow (i.e. 1448 bytes of data per segment). Whereas with vhost-net offloaded through vhost-user, TSO seems to be fixed to "off" and cannot be turned on, which limits the TCP segment length to the MTU. This also means DPDK would gain at lower packet sizes (as shown below).

I disabled TSO in the Linux bridge (LINUX BR) and OVS (2.3.1) scenarios, and then we see ovs-dpdk's winning figures. But that need not be the preferable way of launching guests on vhost-net normally.
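For reference, this is roughly how I check and turn off the offloads inside the guest; only a sketch, assuming the guest interface is eth0 (interface names will differ):

    # show the current offload settings
    ethtool -k eth0

    # turn off TSO (and the related GSO/GRO offloads, if desired)
    ethtool -K eth0 tso off gso off gro off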

Here are my observations:

         LINUX BR    OVS (v2.3.1)    1xPMD    4xPMD    8xPMD
    BW   6599        6367            2409     2270     1642
    RR   1674        1830            13296    14680    12541

<AFTER DISABLING TSO IN VHOST-NET>

         LINUX BR    OVS (v2.3.1)    1xPMD    4xPMD    8xPMD
    BW   1326        1507            2409     2270     1642
    RR   1597        1830            13296    14680    12541

I had taken care of isolating CPUs to run the PMD threads in the runs above. Earlier, there were some suggestions on other alternatives to do with the MTU:

https://www.mail-archive.com/discuss@openvswitch.org/msg13912.html

It may help. I then thought of making use of the upstream multiqueue support in DPDK, but it does not seem to be supported in OVS yet. I am also curious to see ovs-dpdk tuned for more performance than standard OVS.
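For completeness, this is roughly how I isolate the CPUs and pin the PMD threads; only a sketch, and the core numbers and mask below are illustrative (the width of the mask is also what gives the 1x/4x/8x PMD counts above):

    # host kernel command line: keep the PMD cores away from the general scheduler
    isolcpus=2-5

    # pin the OVS PMD threads to those cores (0x3c = CPUs 2-5, i.e. 4 PMD threads here)
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3c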

Regards,
Gowrishankar

On Tuesday 14 July 2015 05:55 PM, Traynor, Kevin wrote:

From: discuss [mailto:discuss-boun...@openvswitch.org] On Behalf Of Na Zhu
Sent: Monday, July 13, 2015 3:15 AM
To: b...@openvswitch.org
Subject: [ovs-discuss] ovs-dpdk performance is not good

Dear all,

I want to use ovs-dpdk to improve my NFV performance. But when I compare the throughput between standard OVS and ovs-dpdk, standard OVS is better; does anyone know why?

I use netperf to test the throughput: vhost-net to test standard OVS, and vhost-user to test ovs-dpdk.
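A typical run looks something like this (only a sketch; the server address is a placeholder and TCP_STREAM is just the usual bulk-throughput test):

    # on the receiving VM
    netserver

    # on the sending VM: bulk TCP throughput, reported in Mbits/s
    netperf -H <server-ip> -t TCP_STREAM -l 60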

My topology is as follows:

[Inline image 1: topology diagram]

The result is that standard OVS performance is better. The throughput unit is Mbps.

[Inline images 2 and 3: throughput results]

[kt] I would check your core affinitization to ensure that the vswitchd pmd is on a separate core to the vCPUs (set with other_config:pmd-cpu-mask).

Also, this test is not using the DPDK virtio PMD in the guest, which provides performance gains.

What packet sizes are you using? You should see a greater gain from DPDK at lower packet sizes (i.e. more PPS).
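For illustration, using the DPDK virtio PMD in the guest would look roughly like this (a sketch only; the bind script path, PCI address and core mask are assumptions and vary by DPDK release and setup):

    # inside the guest: bind the virtio NIC to a userspace I/O driver
    modprobe uio_pci_generic
    ./tools/dpdk_nic_bind.py --bind=uio_pci_generic 0000:00:03.0

    # run testpmd on top of the virtio PMD and start forwarding
    ./testpmd -c 0x3 -n 4 -- -i
    testpmd> set fwd mac
    testpmd> start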



_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
