On 10/19/2016 05:37 AM, Zhang Qiang wrote:
Hi all,

I'm using OVS 2.5.90 built with DPDK 16.04-1 on CentOS 7.2 (kernel 3.10.0-327). The network bandwidth seems to drop severely with DPDK enabled, especially with a DPDK bond.

With the following setup, the bandwidth is only around 30Mbits/s:
> ovs-vsctl show
Bridge "ovsbr0"
    Port dpdkbond
        Interface "dpdk1"
            type: dpdk
        Interface "dpdk0"
            type: dpdk
    Port "ovsbr0"
        tag: 112
        Interface "ovsbr0"
            type: internal
ovs_version: "2.5.90"
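
For reference, a setup like the one shown above is typically created along these lines (a sketch; the add-br step with datapath_type=netdev is an assumption based on standard DPDK usage, since DPDK ports require the userspace datapath):

```shell
# The userspace (netdev) datapath is required for DPDK ports (assumed step):
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev

# Bond the two DPDK ports, matching the ovs-vsctl show output above:
ovs-vsctl add-bond ovsbr0 dpdkbond dpdk0 dpdk1 \
    -- set Interface dpdk0 type=dpdk \
    -- set Interface dpdk1 type=dpdk
```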

With the bond removed and only dpdk0 in use, the bandwidth is around 850 Mbit/s, which is still much lower than plain (non-DPDK) OVS, which nearly reaches the hardware limit of 1000 Mbit/s.

There are lines in /var/log/openvswitch/ovs-vswitchd.log showing OVS using 100% CPU:

2016-10-19T11:21:19.304Z|00480|poll_loop|INFO|wakeup due to [POLLIN] on fd 64 (character device /dev/net/tun) at lib/netdev-linux.c:1132 (100% CPU usage)

I understand that DPDK PMD threads busy-poll on their assigned cores, but is it normal for the ovs-vswitchd process itself to use 100% CPU? Is this relevant?

I've also tried pinning the PMD threads to cores other than ovs-vswitchd's to rule out interference; it didn't help.

What am I doing wrong? Thanks.

discuss mailing list


A number of questions:

- At what packet size do you see these bandwidth values?

- What endpoints do you use for traffic generation? To benefit from DPDK, the ports of your VM must be set up as dpdkvhostuser ports (and backed by hugepages). Otherwise the traffic undergoes additional userspace<->kernel copying.
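
For example, attaching a VM through vhost-user would look roughly like this (a sketch; the port name is made up, and the socket path is the typical OVS 2.5 default and may differ on your install):

```shell
# Add a vhost-user port for the VM on the existing bridge:
ovs-vsctl add-port ovsbr0 vhost-user-1 \
    -- set Interface vhost-user-1 type=dpdkvhostuser

# Back the guest with hugepages and attach it to the socket in QEMU:
# qemu-system-x86_64 ... \
#   -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
#   -numa node,memdev=mem -mem-prealloc \
#   -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1 \
#   -netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
#   -device virtio-net-pci,netdev=net1
```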

Using 100% CPU for the poll-mode (PMD) threads is the expected behavior. Also, to achieve the best performance, please make sure that no other processes are scheduled on the cores allocated to DPDK.
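
PMD placement is controlled via other_config:pmd-cpu-mask, where bit N of the mask selects logical core N. A small sketch (the core numbers 1 and 2 are assumptions; pick cores on the NIC's NUMA node, and keep them free of other work, e.g. via isolcpus):

```shell
# Compute a mask that pins PMD threads to cores 1 and 2
# (bit N in the mask selects logical core N):
mask=$(printf '0x%x' $(( (1 << 1) | (1 << 2) )))
echo "$mask"   # 0x6

# That value is then applied with (hypothetical core choice; adjust to your host):
#   ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
```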


