Re: [ovs-discuss] ovs-dpdk performance not stable

2018-04-20 Thread Mooney, Sean K


From: Stokes, Ian
Sent: Thursday, April 19, 2018 9:51 PM
To: michael me <1michaelmesgu...@gmail.com>
Cc: ovs-discuss@openvswitch.org; Mooney, Sean K <sean.k.moo...@intel.com>
Subject: RE: [ovs-discuss] ovs-dpdk performance not stable

Hi Michael,

“It will be split between queues based on des tip so it’s important that test 
traffic varies if you want traffic to be dispersed evenly among the queues at 
this level."

“Des tip” should be destination IP (apologies, I replied before having a 
morning coffee ☺).

By varying the traffic I mean changing the destination IP; if the same destination
IP is used, I believe the RSS hash will evaluate to the same queue on the NIC.
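
If you want to confirm how the flows are being spread at that level, a quick check
(assuming OVS 2.6 or later, with dpdk0 as the port under test) is:

ovs-appctl dpif-netdev/pmd-rxq-show

Each PMD thread lists the rx queues assigned to it (newer releases also report
per-queue usage), which makes it easy to see whether varying the destination IP is
actually dispersing traffic across several queues.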

I’m not an expert on OpenStack so I’m not too sure how to enable multi-queue
for vhost interfaces in that case.

@ Sean (cc’d): Is there a specific way to enable vhost multi-queue for
OpenStack?
[Mooney, Sean K] Yes, to enable vhost multi-queue in OpenStack you need to set
an image metadata key to request it. That will result in 1 queue per vCPU of
the guest.
The key should be defined here
https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json
but it is missing the key you want to add, hw_vif_multiqueue_enabled. It is
documented here: https://docs.openstack.org/neutron/pike/admin/config-ovs-dpdk.html.
I should probably open a bug to add it to the default glance metadata definitions.
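
For reference, a minimal sketch of that configuration (the image name and guest
interface below are placeholders):

openstack image set --property hw_vif_multiqueue_enabled=true my-guest-image
ethtool -L eth0 combined 4

The first command requests multi-queue on the image; the number of queues then
follows the flavor's vCPU count. The second command is run inside the guest and is
only needed (and only possible) when the guest uses kernel virtio-net and has
ethtool available.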

I haven’t run MQ with a single PMD, so I’m not sure why you have better
performance. Leave this with me to investigate further. I suspect that, as you
have multiple queues, more traffic is enqueued at the NIC level.
[Mooney, Sean K] For kernel virtio-net in the guest I believe there is a
performance improvement due to a reduction in internal contention from locks in
the guest kernel networking stack, but with DPDK in the guest I think the
performance would normally be the same. However, if the bottleneck you are hitting
is on vswitch tx to the guest, then perhaps that will also benefit from
multi-queue. Unless your guest has more queues/cores than host
PMDs, you would still have to use spin locks in the vhost PMD, as you could not
set up a 1:1 PMD mapping to allow lockless enqueue into the guest.

The problem with only 1 queue for the VM is that it creates a bottleneck in
terms of transmitting traffic from the host to the VM (in your case 8 queues
trying to enqueue to 1 queue).

How are you isolating core 0? Are you using isolcpus? Normally I would suggest
isolating core 2 (i.e. the PMD core) with isolcpus.
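
As a rough sketch of what I mean (this assumes core 2 is the PMD core; adjust the
mask for your own topology):

GRUB_CMDLINE_LINUX="... isolcpus=2 ..."
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x4

The first line goes in /etc/default/grub (followed by update-grub and a reboot) and
keeps the kernel scheduler off core 2; pmd-cpu-mask=0x4 then pins the PMD thread to
that same core so it is not preempted by other tasks.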

When you say you set txq=1, why is that?

Typically txq is set automatically; it will be the number of PMDs + 1 (in your case
2 txqs in total). The +1 is to account for traffic from kernel space.
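
If you want to verify what was actually applied rather than setting it by hand,
something like the following should report the per-port queue configuration
(assuming a reasonably recent OVS with the userspace datapath):

ovs-appctl dpif/show

The dpdk ports are listed with their requested and configured rx/tx queue counts,
so you can confirm the automatic txq value there.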

Thanks
Ian

From: michael me [mailto:1michaelmesgu...@gmail.com]
Sent: Thursday, April 19, 2018 7:12 PM
To: Stokes, Ian <ian.sto...@intel.com<mailto:ian.sto...@intel.com>>
Cc: ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>
Subject: Re: [ovs-discuss] ovs-dpdk performance not stable

Hi Ian,

Thank you for your answers!

It is correct that I am using ovs-vsctl set Interface dpdk0 options:n_rxq=8
commands for the queues.
Could you please expand on the sentence "It will be split between queues
based on des tip so it's important that test traffic varies if you want traffic
to be dispersed evenly among the queues at this level."
It might be a typo, or I might just not know what you mean by "des tip"; could
you please clarify for me?
Additionally, what do you mean by varying the traffic? Do you mean to somehow
not have the packets at a constant frame rate?

Regarding the vhost-user queues, I am using OpenStack and I have not yet found a
way to create multiple queues (I updated the image's metadata with
hw_vif_multiqueue_enabled=true), but I don't know how to set the queue count,
especially since the VM I am running does not have ethtool.

Regarding the multiple queues while using one core for the PMD:
I did get much better performance when I had two cores for the PMD; however, I
do not have the luxury of being able to use two cores.
It is puzzling to me that when I use multiple queues I do get better
performance (still not enough, but much better) than when I use only one.
I am sorry, but this is confusing for me.

As for the core isolation, I have only core zero isolated for the kernel. I
checked with htop and saw that the VM's emulator thread (its emulatorpin) might be
running there, so I moved it, but that decreased performance.
When I use only n_rxq and n_txq=1 I get performance close to 60MB with 64-byte
packets.

Thank you again,
Michael





On Thu, Apr 19, 2018 at 11:10 AM, Stokes, Ian 
<ian.sto...@intel.com<mailto:ian.sto...@intel.com>> wrote:
Hi Michael,

So there are a few issues here we need to address.

Queues for phy devices:

I assume you have set the queues for dpdk0 and dpdk1 yourself using

ovs-vsctl set Interface dpdk0 options:n_rxq=8
ovs-vsctl set Interface dpdk0 option

Re: [ovs-discuss] Tx/Rx count not increasing OVS-DPDK

2017-11-30 Thread Mooney, Sean K


From: abhishek jain [mailto:ashujain9...@gmail.com]
Sent: Thursday, November 30, 2017 5:34 AM
To: Stokes, Ian <ian.sto...@intel.com>; Ben Pfaff <b...@ovn.org>
Cc: ovs-discuss@openvswitch.org; Mooney, Sean K <sean.k.moo...@intel.com>
Subject: Re: [ovs-discuss] Tx/Rx count not increasing OVS-DPDK

Hello Team
I'm able to solve the issue. I had missed configuring huge pages on Ubuntu.
[Mooney, Sean K] I'm glad you have resolved your issue. The hugepage requirement
is documented here for future reference:
https://docs.openstack.org/ocata/networking-guide/config-ovs-dpdk.html
Unfortunately there is currently no good way to detect this edge case due to how
nova, neutron and os-vif interact with each other and with OVS, so documenting
this requirement is the best we can do.
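
For anyone hitting the same thing, a minimal sketch of the hugepage setup (sizes
and counts are illustrative; the dpdk-socket-mem key applies to OVS 2.7 and later,
while older releases such as 2.4 pass --socket-mem on the ovs-vswitchd command line
instead):

GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=8 ..."
mount -t hugetlbfs none /dev/hugepages
grep Huge /proc/meminfo
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"

The first line reserves the pages at boot (followed by update-grub and a reboot),
the mount makes them available to ovs-vswitchd/QEMU, and the grep confirms the
pages were actually allocated before OVS tries to use them.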
Thanks for the time.
Regards
Abhishek Jain

On Thu, Nov 30, 2017 at 10:25 AM, abhishek jain 
<ashujain9...@gmail.com<mailto:ashujain9...@gmail.com>> wrote:
Hi Sean,Stokes

Thanks for looking into this; below are the inline answers to your queries.


Could you provide more detail on the components you are running (OVS and DPDK 
versions etc.).

I'm using OVS version 2.4.1 on ubuntu 14.04.5 LTS.



Just to clarify, do you mean the stats for the device within the VM (i.e. you're
using something like ifconfig to check the rx/tx stats) or do you mean the OVS
DPDK stats for the ports connected to the VMs themselves (for example vhost
ports)?

With Tx/Rx count not increasing on the VM, I'm referring to the VM itself, and I'm
checking it by running ifconfig inside the VM.



Do you mean you're able to ping the IP of the VMs internally to them, i.e.
essentially ping localhost?
Yes, I'm able to ping localhost from the respective VMs.

Regards
Abhishek Jain

On Wed, Nov 29, 2017 at 11:36 PM, Stokes, Ian 
<ian.sto...@intel.com<mailto:ian.sto...@intel.com>> wrote:
> Hi Team
>
> I'm having 2 VMs running with ovs-dpdk as a networking agent on openstack
> compute node.

Could you provide more detail on the components you are running (OVS and DPDK 
versions etc.).

> When I'm checking the external connectivity of the VMs by pinging to the
> external world,the Tx/Rx count of the VMs is not increasing.
>

Just to clarify, do you mean the stats for the device within the VM (i.e. you're
using something like ifconfig to check the rx/tx stats) or do you mean the OVS
DPDK stats for the ports connected to the VMs themselves (for example vhost
ports)?
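
For the OVS-side view, a quick check (the bridge and port names below are only
placeholders for your own) would be:

ovs-ofctl dump-ports br-int
ovs-vsctl get Interface vhu-example statistics

If the counters inside the VM stay flat while the vhost port counters on the OVS
side climb, the traffic is reaching OVS but being dropped on the way into the
guest rather than never arriving at all.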

>
> However I'm able to ping the local-Ip of the respective Vms.

Do you mean you're able to ping the IP of the VMs internally to them, i.e.
essentially ping localhost?

CC'ing Sean Mooney as I'm not the most experienced with OpenStack and Sean 
might be able to help.

Thanks
Ian
>
> Let me know the possible solution for this.
> Regards
> Abhishek Jain




Re: [ovs-discuss] OVS-DPDK - Very poor performance when connected to namespace/container

2017-06-15 Thread Mooney, Sean K


> -Original Message-
> From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> Sent: Thursday, June 15, 2017 9:50 AM
> To: Mooney, Sean K <sean.k.moo...@intel.com>; dpdk-...@lists.01.org;
> us...@dpdk.org; ovs-discuss@openvswitch.org
> Subject: RE: OVS-DPDK - Very poor performance when connected to
> namespace/container
> 
> 
> 
> > -----Original Message-
> > From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> > Sent: Thursday, 15 June, 2017 11:24 AM
> > To: Avi Cohen (A); dpdk-...@lists.01.org; us...@dpdk.org; ovs-
> > disc...@openvswitch.org
> > Cc: Mooney, Sean K
> > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > namespace/container
> >
> >
> >
> > > -Original Message-
> > > From: Dpdk-ovs [mailto:dpdk-ovs-boun...@lists.01.org] On Behalf Of
> > > Avi Cohen (A)
> > > Sent: Thursday, June 15, 2017 8:14 AM
> > > To: dpdk-...@lists.01.org; us...@dpdk.org;
> > > ovs-discuss@openvswitch.org
> > > Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when connected
> > > to namespace/container
> > >
> > > Hello   All,
> > > I have OVS-DPDK connected to a namespace via veth pair device.
> > >
> > > I've got a very poor performance - compared to normal OVS (i.e. no
> > > DPDK).
> > > For example - TCP jumbo pkts throughput: normal OVS ~ 10 Gbps,
> > > OVS-DPDK 1.7 Gbps.
> > >
> > > This can be explained as follows:
> > > veth is implemented in kernel - in OVS-DPDK data is transferred from
> > > veth to user space while in normal OVS we save this transfer
> > [Mooney, Sean K] That is part of the reason. The other reason this is
> > slow, and the main limiter to scaling when adding veth pairs or OVS
> > internal ports to OVS with DPDK, is that these Linux kernel ports are
> > not processed by the DPDK PMDs. They are served by the ovs-vswitchd
> > main thread via a fall back to the non-DPDK-accelerated netdev
> > implementation.
> > >
> > > Is there any other device to connect to namespace ? something like
> > > vhost-user ? I understand that vhost-user cannot be used for
> > > namespace
> > [Mooney, Sean K] I have been doing some experiments in this regard.
> > You should be able to use the tap, pcap or af_packet PMD to add a vdev
> > that will improve performance. I have seen some strange issues with the
> > tap PMD that cause packets to be dropped by the kernel on tx on some
> > ports but not others, so there may be issues with that driver.
> >
> > A previous experiment with libpcap seemed to work well with OVS 2.5, but
> > I have not tried it with OVS 2.7/master since the introduction of generic
> > vdev support at runtime. Previously vdevs had to be allocated using the
> > DPDK args.
> >
> > I would try following the af_packet example here
> > https://github.com/openvswitch/ovs/blob/b132189d8456f38f3ee139f126d680901a9ee9a8/Documentation/howto/dpdk.rst#vdev-support
> >
> [Avi Cohen (A)]
> Thank you Mooney, Sean K
> I already tried to connect the namespace with a tap device (see 1 & 2
> below) and got the worst performance. For some reason the packet is
> truncated to the default MTU inside OVS-DPDK when it transmits the packet
> to its peer, although all interface MTUs were set to 9000.
> 
>  1. ovs-vsctl add-port $BRIDGE tap1 -- set Interface tap1 type=internal
> 
>  2. ip link set tap1 netns ns1 // attach it to namespace
[Mooney, Sean K] This is not using the DPDK tap PMD. Internal ports and veth
ports, if added to OVS, will not be accelerated by DPDK unless you use a vdev to
attach them.
> 
> I'm looking at your link to create a virtual PMD with vdev support. I see
> there the creation of a virtual PMD device, but I'm not sure how this is
> connected to the namespace. What device should I assign to the namespace?
[Mooney, Sean K] 
You would use it as follows

ip tuntap add dev tap1 mode tap

ovs-vsctl add-port br0 tap1 -- set Interface tap1 type=dpdk \
options:dpdk-devargs=eth_af_packet0,iface=tap1

ip link set tap1 netns ns1

ip netns exec ns1 ifconfig tap1 192.168.1.1/24 up

In general though, if you are using OVS-DPDK you should avoid using network
namespaces and the kernel datapath where possible, but the above should improve
your performance. One caveat: the number of vdev + physical interfaces is limited
by how DPDK is compiled, by default to 32 devices, but it can be increased to 256
if required.
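
If you do need to raise that limit, the relevant knob (assuming a make-based DPDK
build of that era) is in DPDK's config/common_base, after which DPDK and OVS have
to be rebuilt:

CONFIG_RTE_MAX_ETHPORTS=256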

> 
> Best Regards
> avi
> 
> > If you happen to be investigating this for use with OpenStack routers, we
> > are currently working on a way to remove the use of namespaces entirely for
> > DVR when using the default neutron agent, and SDN controllers such as OVN
> > already provide that functionality.
> > >
> > > Best Regards
> > > avi


Re: [ovs-discuss] OVS-DPDK - Very poor performance when connected to namespace/container

2017-06-15 Thread Mooney, Sean K


> -Original Message-
> From: Dpdk-ovs [mailto:dpdk-ovs-boun...@lists.01.org] On Behalf Of Avi
> Cohen (A)
> Sent: Thursday, June 15, 2017 8:14 AM
> To: dpdk-...@lists.01.org; us...@dpdk.org; ovs-discuss@openvswitch.org
> Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when connected to
> namespace/container
> 
> Hello   All,
> I have OVS-DPDK connected to a namespace via veth pair device.
> 
> I've got a very poor performance - compared to normal OVS (i.e. no
> DPDK).
> For example - TCP jumbo pkts throughput: normal OVS  ~ 10Gbps , OVS-
> DPDK 1.7 Gbps.
> 
> This can be explained as follows:
> veth is implemented in kernel - in OVS-DPDK data is transferred from
> veth to user space while in normal OVS we save this transfer
[Mooney, Sean K] That is part of the reason. The other reason this is slow, and
the main limiter to scaling when adding veth pairs or OVS internal ports to OVS
with DPDK, is that these Linux kernel ports are not processed by the DPDK PMDs.
They are served by the ovs-vswitchd main thread via a fall back to the
non-DPDK-accelerated netdev implementation.
> 
> Is there any other device to connect to namespace ? something like
> vhost-user ? I understand that vhost-user cannot be used for namespace
[Mooney, Sean K] I have been doing some experiments in this regard.
You should be able to use the tap, pcap or af_packet PMD to add a vdev that will
improve performance. I have seen some strange issues with the tap PMD that cause
packets to be dropped by the kernel on tx on some ports but not others, so there
may be issues with that driver.

A previous experiment with libpcap seemed to work well with OVS 2.5, but I have
not tried it with OVS 2.7/master since the introduction of generic vdev support
at runtime. Previously vdevs had to be allocated using the DPDK args.

I would try following the af_packet example here 
https://github.com/openvswitch/ovs/blob/b132189d8456f38f3ee139f126d680901a9ee9a8/Documentation/howto/dpdk.rst#vdev-support

If you happen to be investigating this for use with OpenStack routers, we are
currently working on a way to remove the use of namespaces entirely for DVR when
using the default neutron agent, and SDN controllers such as OVN already provide
that functionality.
> 
> Best Regards
> avi