Thanks for your insight.
- What is the packet size at which you see these bandwidth values?
A: I've tried various packet sizes with iperf, no significant differences.
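For the record, the sweep looked roughly like this (a sketch using iperf3 syntax; the address 10.0.0.2 is just a placeholder for the peer host running `iperf3 -s`):

```shell
# Sweep a few UDP datagram sizes against the other host (address is a placeholder).
# -u: UDP, -b: target rate, -l: payload length per datagram, -t: test duration.
for size in 64 512 1024 1400; do
    iperf3 -c 10.0.0.2 -u -b 1G -l "$size" -t 10
done
```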
- What endpoints do you use for traffic generation?
A: The bandwidth in question was measured from host to host, no VMs involved.
Your second question got me thinking: maybe it's normal for the host's
network performance to drop when DPDK is deployed, because DPDK runs in
userspace, which is a gain for userspace virtual machines but not for the
host's own kernel networking path.
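If the VM path turns out to matter later, attaching a guest through the userspace datapath would look roughly like this (a sketch; the port name vhost-user0 is a placeholder):

```shell
# Sketch: add a vhost-user port to the existing bridge so a VM can reach the
# DPDK datapath directly; the guest memory must be backed by hugepages.
ovs-vsctl add-port ovsbr0 vhost-user0 \
    -- set Interface vhost-user0 type=dpdkvhostuser
```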
What about the bond problem? I've tried active-backup and balance-slb
modes (balance-tcp is not supported by the physical switch), and neither
changes the situation.
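For reference, I switched the modes with commands along these lines (a sketch against the dpdkbond port shown in the quoted output):

```shell
# Sketch: change the bond mode on the dpdkbond port, one mode at a time.
ovs-vsctl set port dpdkbond bond_mode=active-backup
ovs-vsctl set port dpdkbond bond_mode=balance-slb
# Inspect which member link is currently active:
ovs-appctl bond/show dpdkbond
```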
On 10/19/2016 06:04 AM, Geza Gemes <geza.ge...@gmail.com> wrote:
> On 10/19/2016 05:37 AM, Zhang Qiang wrote:
>> Hi all,
>> I'm using OVS 2.5.90 built with DPDK 16.04-1 on CentOS
>> 7.2 (kernel 3.10.0-327). The network bandwidth seems to drop severely
>> with DPDK enabled, especially with dpdkbond.
>> With the following setup, the bandwidth is only around 30Mbits/s:
>> > ovs-vsctl show
>>     Bridge "ovsbr0"
>>         Port dpdkbond
>>             Interface "dpdk1"
>>                 type: dpdk
>>             Interface "dpdk0"
>>                 type: dpdk
>>         Port "ovsbr0"
>>             tag: 112
>>             Interface "ovsbr0"
>>                 type: internal
>>     ovs_version: "2.5.90"
>> With the bond removed and by only using dpdk0, the bandwidth is around
>> 850Mbits/s, still much lower than the performance of bare ovs which
>> nearly reaches the hardware limit of 1000Mbps.
>> There are lines in /var/log/openvswitch/ovs-vswitchd.log showing
>> ovs-vswitchd using 100% CPU:
>> 2016-10-19T11:21:19.304Z|00480|poll_loop|INFO|wakeup due to [POLLIN]
>> on fd 64 (character device /dev/net/tun) at lib/netdev-linux.c:1132
>> (100% CPU usage)
>> I understand that dpdk PMD threads use cores to poll, but is it normal
>> for the ovs-vswitchd process to use 100% of CPU? Is this relevant?
>> I've also tried to pin PMD threads to cores other than
>> ovs-vswitchd's to eliminate possible interference, but it didn't help.
>> What am I doing wrong? Thanks.
>A number of questions:
>- What is the packet size at which you see these bandwidth values?
>- What endpoints do you use for traffic generation? In order to benefit
>from DPDK you have to have the ports of your VM set up as dpdkvhostuser
>ports (and have them backed by hugepages). Otherwise the traffic will
>undergo additional userspace<->kernel copying.
>Using 100% CPU for the poll mode threads is the expected behavior. Also,
>in order to achieve best performance, please make sure that no other
>processes are scheduled on the cores allocated for DPDK.
discuss mailing list