Re: [ovs-discuss] DPDK bandwidth issue

2016-10-21 Thread Chandran, Sugesh


Regards
_Sugesh

From: Tashi Lu [mailto:dotslash...@gmail.com]
Sent: Thursday, October 20, 2016 5:11 AM
To: Chandran, Sugesh 
Cc: geza.ge...@gmail.com; discuss 
Subject: Re: [ovs-discuss] DPDK bandwidth issue

Thanks Sugesh, but could you help me further with why the DPDK bond affects
bandwidth? With the bond, the bandwidth is only around 30 Mbps; the
configuration is shown in my previous post.
[Sugesh] I will test this out and get back to you. I don’t have any performance 
numbers for bond ports at the moment.


On 19 Oct 2016, at 3:46 PM, Chandran, Sugesh <sugesh.chand...@intel.com> wrote:


Regards
_Sugesh

From: discuss [mailto:discuss-boun...@openvswitch.org] On Behalf Of Zhang Qiang
Sent: Wednesday, October 19, 2016 7:55 AM
To: geza.ge...@gmail.com
Cc: discuss <discuss@openvswitch.org>
Subject: Re: [ovs-discuss] DPDK bandwidth issue

Geza,
Thanks for your insight.

- At what packet size do you see these bandwidth values?
A: I've tried various packet sizes with iperf; no significant differences.

- What endpoints do you use for traffic generation?
A: The bandwidth in question was measured from host to host; no VMs were involved.

Your second question got me thinking: maybe it's normal for the host's network
performance to drop when DPDK is deployed, because DPDK runs in userspace,
which is a gain for userspace virtual machines but not for the host?
[Sugesh] Yes, packets destined for the host network are handled by the
ovs-vswitchd main thread, not by a PMD thread, which implies lower performance
compared to ports managed by PMD threads.
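For reference, one way to see this split is to compare the datapath thread
statistics with per-thread CPU usage of ovs-vswitchd (a minimal sketch; it
assumes a standard OVS-DPDK build where ovs-appctl is available):

  # Packet and cycle statistics per datapath thread (PMDs and the main thread).
  $ ovs-appctl dpif-netdev/pmd-stats-show

  # Per-thread CPU usage: the pmd threads spin near 100% by design, while
  # host-bound traffic is serviced by the main ovs-vswitchd thread.
  $ top -H -p $(pidof ovs-vswitchd)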

What about the bond problem? I've tried active-backup and balance-slb modes
(balance-tcp is not supported by the physical switch), and neither changes
the situation.
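For reference, a DPDK bond like the one above is typically created and
inspected along these lines (a minimal sketch reusing the bridge and interface
names from the posted configuration; exact behaviour depends on the OVS/DPDK
build):

  # Create the bond from the two DPDK ports on the existing bridge.
  $ ovs-vsctl add-bond ovsbr0 dpdkbond dpdk0 dpdk1 \
        -- set Interface dpdk0 type=dpdk \
        -- set Interface dpdk1 type=dpdk

  # Select the bonding mode; balance-tcp would additionally require LACP,
  # which the physical switch here does not support.
  $ ovs-vsctl set port dpdkbond bond_mode=balance-slb

  # Verify that both members are enabled and see how traffic is hashed.
  $ ovs-appctl bond/show dpdkbond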

On 10/19/2016 06:04 AM, Geza Gemes <geza.ge...@gmail.com> wrote:
> On 10/19/2016 05:37 AM, Zhang Qiang wrote:
>> Hi all,
>>
>> I'm using OVS 2.5.90 built with DPDK 16.04-1 on CentOS 7.2 (3.10.0-327).
>> It seems the network bandwidth drops severely with DPDK enabled,
>> especially with dpdkbond.
>>
>> With the following setup, the bandwidth is only around 30 Mbit/s:
>> > ovs-vsctl show
>> 72b1bac3-0f7d-40c9-9b84-cabeff7f5521
>>     Bridge "ovsbr0"
>>         Port dpdkbond
>>             Interface "dpdk1"
>>                 type: dpdk
>>             Interface "dpdk0"
>>                 type: dpdk
>>         Port "ovsbr0"
>>             tag: 112
>>             Interface "ovsbr0"
>>                 type: internal
>>     ovs_version: "2.5.90"
>>
>> With the bond removed and only dpdk0 in use, the bandwidth is around
>> 850 Mbit/s, still much lower than bare OVS, which nearly reaches the
>> hardware limit of 1000 Mbit/s.
>>
>> There are lines in /var/log/openvswitch/ovs-vswitchd.log showing OVS
>> using 100% CPU:
>> 2016-10-19T11:21:19.304Z|00480|poll_loop|INFO|wakeup due to [POLLIN]
>> on fd 64 (character device /dev/net/tun) at lib/netdev-linux.c:1132
>> (100% CPU usage)
>>
>> I understand that DPDK PMD threads poll and therefore occupy their
>> cores, but is it normal for the ovs-vswitchd process itself to use
>> 100% CPU? Is this relevant?
>>
>> I've also tried pinning the PMD threads to cores other than
>> ovs-vswitchd's to eliminate possible interference, but it didn't help.
>>
>> What am I doing wrong? Thanks.
>>
>>
>>
>
>Hi,
>
>A number of questions:
>
>- At what packet size do you see these bandwidth values?
>
>- What endpoints do you use for traffic generation? In order to benefit
>from DPDK you have to have the ports of your VM set up as dpdkvhostuser
>ports (and have them backed by hugepages). Otherwise the traffic will
>undergo additional userspace<->kernel copying.
>
>Using 100% CPU for the poll-mode threads is the expected behavior. Also,
>to achieve the best performance, please make sure that no other processes
>are scheduled on the cores allocated to DPDK.
>
>Cheers,
>
>Geza
___
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss


Re: [ovs-discuss] DPDK bandwidth issue

2016-10-19 Thread Tashi Lu
Thanks Sugesh, but could you help me further with why the DPDK bond affects
bandwidth? With the bond, the bandwidth is only around 30 Mbps; the
configuration is shown in my previous post.


> On 19 Oct 2016, at 3:46 PM, Chandran, Sugesh  
> wrote:
> 
>  
>  
> Regards
> _Sugesh
>  
> From: discuss [mailto:discuss-boun...@openvswitch.org] On Behalf Of Zhang 
> Qiang
> Sent: Wednesday, October 19, 2016 7:55 AM
> To: geza.ge...@gmail.com
> Cc: discuss 
> Subject: Re: [ovs-discuss] DPDK bandwidth issue
>  
> Geza, 
> Thanks for your insight.
>  
> - At what packet size do you see these bandwidth values?
> A: I've tried various packet sizes with iperf; no significant differences.
> 
> - What endpoints do you use for traffic generation?
> A: The bandwidth in question was measured from host to host; no VMs were involved.
>  
> Your second question got me thinking: maybe it's normal for the host's
> network performance to drop when DPDK is deployed, because DPDK runs in
> userspace, which is a gain for userspace virtual machines but not for the host?
> [Sugesh] Yes, packets destined for the host network are handled by the
> ovs-vswitchd main thread, not by a PMD thread, which implies lower
> performance compared to ports managed by PMD threads.
> 
> What about the bond problem? I've tried active-backup and balance-slb modes
> (balance-tcp is not supported by the physical switch), and neither changes
> the situation.
>  
> On 10/19/2016 06:04 AM, Geza Gemes  wrote:
> > On 10/19/2016 05:37 AM, Zhang Qiang wrote:
> >> Hi all,
> >> 
> >> I'm using OVS 2.5.90 built with DPDK 16.04-1 on CentOS 7.2 (3.10.0-327).
> >> It seems the network bandwidth drops severely with DPDK enabled,
> >> especially with dpdkbond.
> >>
> >> With the following setup, the bandwidth is only around 30 Mbit/s:
> >> > ovs-vsctl show
> >> 72b1bac3-0f7d-40c9-9b84-cabeff7f5521
> >>     Bridge "ovsbr0"
> >>         Port dpdkbond
> >>             Interface "dpdk1"
> >>                 type: dpdk
> >>             Interface "dpdk0"
> >>                 type: dpdk
> >>         Port "ovsbr0"
> >>             tag: 112
> >>             Interface "ovsbr0"
> >>                 type: internal
> >>     ovs_version: "2.5.90"
> >>
> >> With the bond removed and only dpdk0 in use, the bandwidth is around
> >> 850 Mbit/s, still much lower than bare OVS, which nearly reaches the
> >> hardware limit of 1000 Mbit/s.
> >>
> >> There are lines in /var/log/openvswitch/ovs-vswitchd.log showing OVS
> >> using 100% CPU:
> >> 2016-10-19T11:21:19.304Z|00480|poll_loop|INFO|wakeup due to [POLLIN]
> >> on fd 64 (character device /dev/net/tun) at lib/netdev-linux.c:1132
> >> (100% CPU usage)
> >>
> >> I understand that DPDK PMD threads poll and therefore occupy their
> >> cores, but is it normal for the ovs-vswitchd process itself to use
> >> 100% CPU? Is this relevant?
> >>
> >> I've also tried pinning the PMD threads to cores other than
> >> ovs-vswitchd's to eliminate possible interference, but it didn't help.
> >> 
> >> What am I doing wrong? Thanks.
> >> 
> >> 
> >> 
> > 
> >Hi,
> > 
> >A number of questions:
> > 
> >- At what packet size do you see these bandwidth values?
> > 
> >- What endpoints do you use for traffic generation? In order to benefit
> >from DPDK you have to have the ports of your VM set up as dpdkvhostuser
> >ports (and have them backed by hugepages). Otherwise the traffic will
> >undergo additional userspace<->kernel copying.
> > 
> >Using 100% CPU for the poll-mode threads is the expected behavior. Also,
> >to achieve the best performance, please make sure that no other processes
> >are scheduled on the cores allocated to DPDK.
> > 
> >Cheers,
> > 
> >Geza
___
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss


Re: [ovs-discuss] DPDK bandwidth issue

2016-10-19 Thread Chandran, Sugesh


Regards
_Sugesh

From: discuss [mailto:discuss-boun...@openvswitch.org] On Behalf Of Zhang Qiang
Sent: Wednesday, October 19, 2016 7:55 AM
To: geza.ge...@gmail.com
Cc: discuss 
Subject: Re: [ovs-discuss] DPDK bandwidth issue

Geza,
Thanks for your insight.

- At what packet size do you see these bandwidth values?
A: I've tried various packet sizes with iperf; no significant differences.

- What endpoints do you use for traffic generation?
A: The bandwidth in question was measured from host to host; no VMs were involved.

Your second question got me thinking: maybe it's normal for the host's network
performance to drop when DPDK is deployed, because DPDK runs in userspace,
which is a gain for userspace virtual machines but not for the host?
[Sugesh] Yes, packets destined for the host network are handled by the
ovs-vswitchd main thread, not by a PMD thread, which implies lower performance
compared to ports managed by PMD threads.

What about the bond problem? I've tried active-backup and balance-slb modes
(balance-tcp is not supported by the physical switch), and neither changes
the situation.

On 10/19/2016 06:04 AM, Geza Gemes <geza.ge...@gmail.com> wrote:
> On 10/19/2016 05:37 AM, Zhang Qiang wrote:
>> Hi all,
>>
>> I'm using OVS 2.5.90 built with DPDK 16.04-1 on CentOS 7.2 (3.10.0-327).
>> It seems the network bandwidth drops severely with DPDK enabled,
>> especially with dpdkbond.
>>
>> With the following setup, the bandwidth is only around 30 Mbit/s:
>> > ovs-vsctl show
>> 72b1bac3-0f7d-40c9-9b84-cabeff7f5521
>>     Bridge "ovsbr0"
>>         Port dpdkbond
>>             Interface "dpdk1"
>>                 type: dpdk
>>             Interface "dpdk0"
>>                 type: dpdk
>>         Port "ovsbr0"
>>             tag: 112
>>             Interface "ovsbr0"
>>                 type: internal
>>     ovs_version: "2.5.90"
>>
>> With the bond removed and only dpdk0 in use, the bandwidth is around
>> 850 Mbit/s, still much lower than bare OVS, which nearly reaches the
>> hardware limit of 1000 Mbit/s.
>>
>> There are lines in /var/log/openvswitch/ovs-vswitchd.log showing OVS
>> using 100% CPU:
>> 2016-10-19T11:21:19.304Z|00480|poll_loop|INFO|wakeup due to [POLLIN]
>> on fd 64 (character device /dev/net/tun) at lib/netdev-linux.c:1132
>> (100% CPU usage)
>>
>> I understand that DPDK PMD threads poll and therefore occupy their
>> cores, but is it normal for the ovs-vswitchd process itself to use
>> 100% CPU? Is this relevant?
>>
>> I've also tried pinning the PMD threads to cores other than
>> ovs-vswitchd's to eliminate possible interference, but it didn't help.
>>
>> What am I doing wrong? Thanks.
>>
>>
>>
>
>Hi,
>
>A number of questions:
>
>- At what packet size do you see these bandwidth values?
>
>- What endpoints do you use for traffic generation? In order to benefit
>from DPDK you have to have the ports of your VM set up as dpdkvhostuser
>ports (and have them backed by hugepages). Otherwise the traffic will
>undergo additional userspace<->kernel copying.
>
>Using 100% CPU for the poll-mode threads is the expected behavior. Also,
>to achieve the best performance, please make sure that no other processes
>are scheduled on the cores allocated to DPDK.
>
>Cheers,
>
>Geza
___
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss


Re: [ovs-discuss] DPDK bandwidth issue

2016-10-18 Thread Zhang Qiang
Geza,
Thanks for your insight.

- At what packet size do you see these bandwidth values?
A: I've tried various packet sizes with iperf; no significant differences.

- What endpoints do you use for traffic generation?
A: The bandwidth in question was measured from host to host; no VMs were
involved.
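As a reference for the packet-size sweeps mentioned above, a host-to-host
iperf run with an explicit datagram size might look like this (a sketch; the
receiver address is a placeholder):

  # On the receiving host:
  $ iperf -s -u

  # On the sending host: offer 1 Gbit/s of 64-byte UDP datagrams for 30 s;
  # repeat with -l 512, -l 1400, etc. to sweep packet sizes.
  $ iperf -c <receiver-ip> -u -b 1000M -l 64 -t 30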

Your second question got me thinking: maybe it's normal for the host's
network performance to drop when DPDK is deployed, because DPDK runs in
userspace, which is a gain for userspace virtual machines but not for the
host?

What about the bond problem? I've tried active-backup and balance-slb
modes (balance-tcp is not supported by the physical switch), and neither
changes the situation.

On 10/19/2016 06:04 AM, Geza Gemes  wrote:
> On 10/19/2016 05:37 AM, Zhang Qiang wrote:
>> Hi all,
>>
>> I'm using OVS 2.5.90 built with DPDK 16.04-1 on CentOS 7.2 (3.10.0-327).
>> It seems the network bandwidth drops severely with DPDK enabled,
>> especially with dpdkbond.
>>
>> With the following setup, the bandwidth is only around 30 Mbit/s:
>> > ovs-vsctl show
>> 72b1bac3-0f7d-40c9-9b84-cabeff7f5521
>>     Bridge "ovsbr0"
>>         Port dpdkbond
>>             Interface "dpdk1"
>>                 type: dpdk
>>             Interface "dpdk0"
>>                 type: dpdk
>>         Port "ovsbr0"
>>             tag: 112
>>             Interface "ovsbr0"
>>                 type: internal
>>     ovs_version: "2.5.90"
>>
>> With the bond removed and only dpdk0 in use, the bandwidth is around
>> 850 Mbit/s, still much lower than bare OVS, which nearly reaches the
>> hardware limit of 1000 Mbit/s.
>>
>> There are lines in /var/log/openvswitch/ovs-vswitchd.log showing OVS
>> using 100% CPU:
>> 2016-10-19T11:21:19.304Z|00480|poll_loop|INFO|wakeup due to [POLLIN]
>> on fd 64 (character device /dev/net/tun) at lib/netdev-linux.c:1132
>> (100% CPU usage)
>>
>> I understand that DPDK PMD threads poll and therefore occupy their
>> cores, but is it normal for the ovs-vswitchd process itself to use
>> 100% CPU? Is this relevant?
>>
>> I've also tried pinning the PMD threads to cores other than
>> ovs-vswitchd's to eliminate possible interference, but it didn't help.
>>
>> What am I doing wrong? Thanks.
>>
>>
>>
>
>Hi,
>
>A number of questions:
>
>- At what packet size do you see these bandwidth values?
>
>- What endpoints do you use for traffic generation? In order to benefit
>from DPDK you have to have the ports of your VM set up as dpdkvhostuser
>ports (and have them backed by hugepages). Otherwise the traffic will
>undergo additional userspace<->kernel copying.
>
>Using 100% CPU for the poll-mode threads is the expected behavior. Also,
>to achieve the best performance, please make sure that no other processes
>are scheduled on the cores allocated to DPDK.
>
>Cheers,
>
>Geza
___
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss


Re: [ovs-discuss] DPDK bandwidth issue

2016-10-18 Thread Geza Gemes

On 10/19/2016 05:37 AM, Zhang Qiang wrote:

Hi all,

I'm using OVS 2.5.90 built with DPDK 16.04-1 on CentOS 7.2 (3.10.0-327).
It seems the network bandwidth drops severely with DPDK enabled,
especially with dpdkbond.

With the following setup, the bandwidth is only around 30 Mbit/s:
> ovs-vsctl show
72b1bac3-0f7d-40c9-9b84-cabeff7f5521
    Bridge "ovsbr0"
        Port dpdkbond
            Interface "dpdk1"
                type: dpdk
            Interface "dpdk0"
                type: dpdk
        Port "ovsbr0"
            tag: 112
            Interface "ovsbr0"
                type: internal
    ovs_version: "2.5.90"

With the bond removed and only dpdk0 in use, the bandwidth is around
850 Mbit/s, still much lower than bare OVS, which nearly reaches the
hardware limit of 1000 Mbit/s.

There are lines in /var/log/openvswitch/ovs-vswitchd.log showing OVS
using 100% CPU:
2016-10-19T11:21:19.304Z|00480|poll_loop|INFO|wakeup due to [POLLIN]
on fd 64 (character device /dev/net/tun) at lib/netdev-linux.c:1132
(100% CPU usage)

I understand that DPDK PMD threads poll and therefore occupy their cores,
but is it normal for the ovs-vswitchd process itself to use 100% CPU?
Is this relevant?

I've also tried pinning the PMD threads to cores other than ovs-vswitchd's
to eliminate possible interference, but it didn't help.


What am I doing wrong? Thanks.





Hi,

A number of questions:

- At what packet size do you see these bandwidth values?

- What endpoints do you use for traffic generation? In order to benefit 
from DPDK you have to have the ports of your VM set up as dpdkvhostuser 
ports (and have them backed by hugepages). Otherwise the traffic will 
undergo additional userspace<->kernel copying.
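For a VM-backed test, the guest-facing port would be added roughly like this
(a minimal sketch; the port name is arbitrary, and the hugepage-backed guest
memory is configured on the QEMU/libvirt side, not shown here):

  # Add a vhost-user port for the VM on the DPDK-enabled bridge; OVS
  # creates the vhost-user socket in its run directory.
  $ ovs-vsctl add-port ovsbr0 vhost-user0 \
        -- set Interface vhost-user0 type=dpdkvhostuser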


Using 100% CPU for the poll-mode threads is the expected behavior. Also,
to achieve the best performance, please make sure that no other processes
are scheduled on the cores allocated to DPDK.
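One way to keep those cores dedicated is to pin the PMD threads with a core
mask and isolate the same cores from the general scheduler (a sketch; the
mask and core numbers are examples and depend on the host's CPU topology):

  # Pin PMD threads to cores 2 and 3 (bit mask 0xC).
  $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC

  # Optionally keep the kernel scheduler off those cores as well,
  # e.g. by booting with isolcpus=2,3 on the kernel command line.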


Cheers,

Geza

___
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss