Re: [ovs-discuss] I have two problem for help,thank you

2017-02-07 Thread Stokes, Ian
Hi,

OVS can be deployed in userspace with DPDK.

You can follow the installation guides in the documentation to do this:

https://github.com/openvswitch/ovs/blob/master/Documentation/intro/install/dpdk.rst

https://github.com/openvswitch/ovs/blob/master/Documentation/howto/dpdk.rst

As regards features in OVS with DPDK, VLAN and bonding are supported. QoS is currently limited to ingress/egress policing.
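As a quick illustration, bonding of two DPDK ports can be configured roughly as follows (the fixed dpdk0/dpdk1 names assume the pre-2.7 naming scheme; the howto linked above has the authoritative commands):

ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 bond_mode=balance-tcp lacp=active -- set Interface dpdk0 type=dpdk -- set Interface dpdk1 type=dpdk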

There is no support for NAT in userspace but there is ongoing work to introduce 
this on the following patchset

https://mail.openvswitch.org/pipermail/ovs-dev/2017-January/327900.html

Hope this helps.

Regards
Ian

From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of chenhw
Sent: Tuesday, February 7, 2017 1:25 PM
To: ovs-discuss 
Subject: [ovs-discuss] I have two problem for help,thank you

Dear everyone:
   I am building a firewall based on OVS + DPDK for research, and I have two questions to consult you on.
   First: does OVS now support a DPDK mode? I don't see DPDK listed in the features.
   Second: in OVS + DPDK mode, does OVS support VLAN, bonding, QoS, and NAT?

   I hope I can receive your answer.
   Thank you very much.
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] how to tell devstack to run my openvswitch version

2017-02-08 Thread Stokes, Ian
> -Original Message-
> From: ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss-
> boun...@openvswitch.org] On Behalf Of Avi Cohen (A)
> Sent: Wednesday, February 8, 2017 1:19 PM
> To: ovs-discuss@openvswitch.org
> Subject: [ovs-discuss] how to tell devstack to run my openvswitch version
> 
> Hi,
> I have my version of openvswitch
> How can I tell devstack/openstack to run my version ?

From what I'm aware, OpenStack/Devstack supports both kernel Open vSwitch and Open vSwitch deployed in userspace with DPDK. Is your version one of these? Guides are available for both:

http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html

https://software.intel.com/en-us/articles/using-open-vswitch-and-dpdk-with-neutron-in-devstack

If your OVS differs from these, then it's probably best to ask the OpenStack community mailing list for guidance.

Ian

> Regards avi
> 


Re: [ovs-discuss] OVS DPDK with no DPDK nics on NUMA 0

2017-01-17 Thread Stokes, Ian
> This article https://software.intel.com/en-us/articles/using-open-vswitch-
> and-dpdk-with-neutron-in-devstack says: "If you have the NICs installed
> entirely on a NUMA Node other than 0, you will encounter a bug that will
> prevent correct OVS setup. You may wish to move your NIC device to a
> different PCIe slot."
> 
> Anyone know what this bug is, and is it in OVS-DPDK?  "correct OVS setup"
> suggests that it might just be a bug in the devstack script.

I think this bug is related to the fact that the OVS PMD coremask value defined in the local.conf is 0x4 by default, which selects CPU 2, a CPU on socket 0, in this case. If the NIC is attached to NUMA node 1, then I believe it will fail to initialize.

I think this is still an issue today for OVS-DPDK, but there are plans to enable cross-NUMA PMD configurations (although one should note that there would be a performance penalty for crossing NUMA nodes).

There is a workaround mentioned in the guide under 'Additional OVS/DPDK Options of Note': essentially, you may change the PMD coremask to a CPU on NUMA node 1.
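For example, assuming CPU 8 sits on NUMA node 1 on the host (the CPU numbering here is an assumption; check the actual topology with lscpu or numactl --hardware), the coremask could be set along these lines:

ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x100

pmd-cpu-mask is a hex bitmask of CPUs, so setting bit 8 (0x100) pins the PMD thread to CPU 8.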

Thanks
Ian



[ovs-discuss] Centralizing OVS-DPDK Blogs

2017-03-28 Thread Stokes, Ian
Hi All,

There are a number of useful blogs maintained by Intel for OVS-DPDK. These range from simple 'how-to' articles for specific features to more technical deep dives into how features work under the hood.

Currently these blogs are hosted on the Intel Developer Zone (and will continue 
to be), but there have been suggestions that it could be useful to expose them 
in a more central manner to OVS users.

This could include one or more of the following:

1. Expanding existing OVS docs with blog content.
2. Adding the blogs to the OVS cookbook.
3. Creating a news feed for OVS using something like the Planet feed reader for the OVS webpage (http://www.planetplanet.org/).

This issue was raised at the OVS-DPDK community meeting and it was decided to 
kick off a community discussion for more input before taking action.

Do people feel centralizing OVS-DPDK content to the OVS project would be useful 
or is the status quo preferred?

For anyone interested in the type of content available I've added the links 
below.

https://software.intel.com/en-us/articles/open-vswitch-with-dpdk-overview
https://software.intel.com/en-us/articles/rate-limiting-configuration-and-usage-for-open-vswitch-with-dpdk
https://software.intel.com/en-us/articles/qos-configuration-and-usage-for-open-vswitch-with-dpdk
https://software.intel.com/en-us/articles/configure-vhost-user-multiqueue-for-ovs-with-dpdk
https://software.intel.com/en-us/articles/vhost-user-numa-awareness-in-open-vswitch-with-dpdk
https://software.intel.com/en-us/articles/dpdk-pdump-in-open-vswitch-with-dpdk
https://01.org/openstack/blogs/stephenfin/2016/enabling-ovs-dpdk-openstack
https://software.intel.com/en-us/articles/jumbo-frames-in-open-vswitch-with-dpdk
https://software.intel.com/en-us/articles/vhost-user-client-mode-in-open-vswitch-with-dpdk
https://software.intel.com/en-us/articles/ovs-dpdk-datapath-classifier
https://software.intel.com/en-us/articles/ovs-dpdk-datapath-classifier-part-2
https://software.intel.com/en-us/articles/link-aggregation-configuration-and-usage-in-open-vswitch-with-dpdk
https://software.intel.com/en-us/articles/analyzing-open-vswitch-with-dpdk-bottlenecks-using-vtune-amplifier
https://software.intel.com/en-us/articles/using-open-vswitch-and-dpdk-with-neutron-in-devstack
https://software.intel.com/en-us/articles/using-open-vswitch-with-dpdk-for-inter-vm-nfv-applications
https://software.intel.com/en-us/articles/using-open-vswitch-with-dpdk-on-ubuntu

Regards
Ian







Re: [ovs-discuss] OVS-DPDK

2017-03-21 Thread Stokes, Ian
Apologies for top posting,

Hi Advith, there seems to be a mismatch of OVS versions and expected features 
from the details you have provided.

To confirm you are using OVS 2.6.1? Is there a specific commit ID you are 
using? Or are you using the 2.6.1 tag or release package?

From the commands you have provided, it looks like you're trying to add DPDK ports with arbitrary names and PCI addresses.

This is not possible in OVS 2.6.1; that ability was only added in OVS 2.7.0.

If you want to add two physical DPDK ports with 2.6.1, can you try the following:

ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk

Note that the name of the DPDK port must start with 'dpdk', followed by the port number (starting at 0). This limitation has since been removed in OVS 2.7.

I'm surprised that you say you can use

ovs-vsctl add-port br0 dpdk-p1 -- set Interface dpdk-p1 type=dpdk

Testing the OVS 2.6.1 tag with that command returned the following ovs-vsctl error for me, which would be expected:

ovs-vsctl: Error detected while setting up 'dpdk-p1'.  See ovs-vswitchd log for details.

As regards the use of

options:dpdk-devargs=0000:00:0a.0

it is only needed if you are using the arbitrary port naming/hotplug support included in OVS 2.7.0.
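In OVS 2.7.0 and later that would look roughly like this (the port name is arbitrary by design, and the PCI address is illustrative):

ovs-vsctl add-port br0 myport0 -- set Interface myport0 type=dpdk options:dpdk-devargs=0000:00:0a.0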

If you can give these suggestions a shot it will help narrow down the issue at 
hand.

Regards
Ian

From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Advith Nagappa
Sent: Tuesday, March 21, 2017 8:12 AM
To: Darrell Ball 
Cc: disc...@openvswitch.org
Subject: Re: [ovs-discuss] OVS-DPDK

> I don't see the PCI memory mapping logs here.
> Can you also attach the full DPDK logs?

I have attached the DPDK log.

For PCI mapping, maybe the output below helps:



Bus info          Device      Class     Description
pci@0000:00:07.0  ens7        network   XL710/X710 Virtual Function
pci@0000:00:09.0              network   Ethernet Controller X710 for 10GbE SFP+
pci@0000:00:0a.0              network   Ethernet Controller X710 for 10GbE SFP+
                  br0         network   Ethernet interface
                  ovs-netdev  network   Ethernet interface

additionally,

Network devices using DPDK-compatible driver

0000:00:09.0 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=i40e
0000:00:0a.0 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=i40e

Network devices using kernel driver
===
0000:00:07.0 'XL710/X710 Virtual Function' if=ens7 drv=i40evf unused=igb_uio *Active*

Other network devices
=


Crypto devices using DPDK-compatible driver
===


Crypto devices using kernel driver
==


Other crypto devices





> What are your kernel and DPDK versions?

Kernel - 4.4.0-66-generic
DPDK - 16.11.1


> Can you share which portion of the following configuration you followed:
> http://docs.openvswitch.org/en/latest/intro/install/dpdk/


Install DPDK - used 1 and 3 (avoided shared lib config)

Install OVS  - As IS

Hugepages - used 1 G huge page. Passed "default_hugepagesz=1G hugepagesz=1G 
hugepages=1" as boot time parameter.

grep -i huge /proc/meminfo
AnonHugePages: 14336 kB
HugePages_Total:   5
HugePages_Free:4
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:1048576 kB


VFIO: Did not use VFIO; used igb_uio.

Setup OVS: As is. Did not specify pmd-cpu-mask.
Validating: As is, with 0000:00:09.0 and 0000:00:0a.0 as the device addresses.
Fails.




Appreciate your inputs. Look forward to your response.

Best Regards
Advith

On Tue, Mar 21, 2017 at 7:50 AM, Darrell Ball wrote:

Re: [ovs-discuss] Unable to configure OVS with DPDK in CentOS 7.2.1511

2017-05-29 Thread Stokes, Ian
Hi Mohan,

OVS 2.5.2 is not compatible with DPDK 16.11.1. OVS 2.5 can be used with DPDK 
2.2.

The mapping of supported OVS and DPDK versions are available in the releases 
document for OVS (see link below).

http://docs.openvswitch.org/en/latest/faq/releases/

If DPDK 16.11.1 is a requirement, I would suggest using OVS 2.7 and following the build instructions in the documentation for OVS with DPDK:

http://docs.openvswitch.org/en/latest/intro/install/dpdk/
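As a rough sketch of that flow (the paths and the DPDK target name are placeholders; the linked documentation is authoritative):

cd $DPDK_DIR
make install T=x86_64-native-linuxapp-gcc DESTDIR=install
cd $OVS_DIR
./configure --with-dpdk=$DPDK_DIR/x86_64-native-linuxapp-gcc
make && make install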

Regards
Ian

From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Mohanraj Venkat
Sent: Monday, May 29, 2017 3:23 PM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] Unable to configure OVS with DPDK in CentOS 7.2.1511

Hi,

I tried to configure OVS 2.5.2 with DPDK 16.11.1, but I am getting the error given below.


I set the build path properly and started "./configure --with-dpdk=$DPDK_BUILD", but I am getting this error:

configure: error: cannot link with dpdk


Please help me to resolve this issue.


Thanks,
Mohan


Re: [ovs-discuss] [dpdk-dev] Does ovs-dpdk support QoS on dpdkvhostuser port and other port?

2017-06-14 Thread Stokes, Ian
> Hi, all
> 
> Does ovs-dpdk support QoS on dpdkvhostuser port and other port, just like
> 'HTB' for kernel based ovs port?
> 
> Or will ovs-dpdk support this?

Hi Sam,

Currently OVS-DPDK does not support HTB for QoS. In terms of what can be applied to a vhost-user port (and other DPDK port types), you can apply an egress policer QoS type.

OVS-DPDK ports also support ingress policing via the rate limiter interface.
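For illustration, both mechanisms are configured roughly as follows (the port name and the rate/burst values are examples only; check the ovs-vswitchd.conf.db documentation for your release):

# Egress policer on a vhost-user port (cir in bytes/sec, cbs in bytes)
ovs-vsctl set port vhost-user0 qos=@newqos -- --id=@newqos create qos type=egress-policer other-config:cir=46000000 other-config:cbs=2048

# Ingress policing (rate in kbps, burst in kb)
ovs-vsctl set interface vhost-user0 ingress_policing_rate=10000
ovs-vsctl set interface vhost-user0 ingress_policing_burst=1000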

Out of curiosity, what aspect of HTB QoS is of interest? (Min bandwidth 
guarantee, priority etc).

Hope this helps. 

Ian


Re: [ovs-discuss] OVS Bug tracking

2017-05-02 Thread Stokes, Ian
> > At the last OVS-DPDK community sync meeting the issue of bug reporting
> > and tracking was raised. Specifically
> >
> > 1. What is currently available?
> > 2. Are there any improvements that can be made in the process?
> >
> > As it stands the process to follow is defined at:
> >
> > http://docs.openvswitch.org/en/latest/internals/bugs/
> >
> > An open vSwitch issue tracker repo has already been setup on GitHub at
> >
> > https://github.com/openvswitch/ovs-issues/issues
> >
> > From what I can see, the GitHub tracker seems to be used infrequently.
> Not sure if this is because it's not required to report bugs or that
> people are not aware of it?
> >
> > Is there a policy users should follow as regards reporting bugs on
> GitHub?
> >
> > Many groups will maintain their own internal bug tracking, I'm wondering
> would it be good practice to have people report known bugs in the GitHub
> tool?
> >
> > I see the pros and cons as follows:
> >
> > Pros
> >
> > 1. GitHub provides a common location to discuss bugs, this helps avoid
> duplication of effort as it can be easily flagged if someone is working on
> a patch for said bug.
> > 2. If used frequently then it will provide an accurate picture of what
> known issues are outstanding in OVS (easier then trawling through the
> mailing list).
> > 3. It could be used in conjunction with Patchwork to flag patches for
> review that are bug fixes. (I've read the patchwork allows a field to link
> to an external bug report url, I'm open to correction on this).
> >
> >
> > Cons
> >
> > 1. More overhead in general when creating the issue in GitHub when
> compared to reporting via email to the ML.
> >
> > What do others think? Is their value in formalizing the bug report
> process to use an external bug tracker?
> 
> As you say, there are pros and cons.
> 
> In my experience, bug trackers sometimes work well.  They can be a good
> way to keep track of issues that are outstanding and to collect
> information to figure out where those bugs come from and try to find their
> root causes.  But this "best case" usually happens only when there is
> someone who considers it a priority to invest time in the bug tracker.
> Otherwise, you end up with problems caused by users who submit bug reports
> without checking for existing similar bug reports (which is often
> perfectly reasonable from the user's point of view) and who fail to follow
> up to requests for further information, by developers who consider that
> bug reports in the system can be fixed anytime they want and therefore
> there's no reason to follow up immediately, and by a general sort of rot.
> You can end up with situations where someone mentions a bug and they just
> get referred to the issue in the tracker, and no one really does anything.
> 
> The issues for simply reporting issues to a mailing list are to some
> degree the opposite.
> 
> In OVS, we have both options, but I think that the issue tracker is little
> known enough that few people actually follow it or file bugs there.
> 
> If you prefer to use the bug tracker, then it's there, and I do try to
> follow along with the bugs filed there, sometimes forwarding reports to
> the mailing list when it seems appropriate.  It's a reasonable idea to
> use it, and I don't want to discourage anyone from using it.

Thanks for the input, Ben. This was discussed at the ovs-dpdk community call again since I first raised it, and a few of the issues you mention came up.

There was confusion as to the purpose of this work, so to clarify: the goal is simply to be able to provide an accurate snapshot of known bugs for the head of master.

There has been a suggestion to use the tool with a reduced scope of OVS userspace bugs for a trial period, and then follow up with a review of how useful it has been. If it is useful, it could be expanded to track bugs for OVS on a wider basis (bugs for different branches, kernel-space bugs, etc.) if there are resources to do so. If it's not useful, then we can return to the way things are currently.

Would this idea be ok with you? In terms of putting resources towards the 
upkeep of bug tracking issues I'd be happy to help with this.

Also, a query was raised regarding the need for a tag with a bug ID to flag that a patch resolves a particular bug. Do you think that would be required/useful?

Thanks
Ian


Re: [ovs-discuss] Compilation error : lib/netdev-dpdk.c:55:28: fatal error: rte_virtio_net.h: No such file or directory

2017-09-14 Thread Stokes, Ian
> Thank you, earlier I had enabled only 'CONFIG_RTE_BUILD_COMBINE_LIBS=y';
> compilation is successful after enabling the 'CONFIG_RTE_LIBRTE_VHOST=y'
> configuration.

You're welcome, glad it worked for you.

Ian
> 
> -Original Message-
> From: Stokes, Ian [mailto:ian.sto...@intel.com]
> Sent: Thursday, September 14, 2017 1:26 PM
> To: Ranjith Kumar D <ranjith.ku...@radisys.com>; Ben Pfaff <b...@ovn.org>
> Cc: disc...@openvswitch.org
> Subject: RE: [ovs-discuss] Compilation error : lib/netdev-dpdk.c:55:28:
> fatal error: rte_virtio_net.h: No such file or directory
> 
> > I mean the step configure was successful (./configure --with-
> > dpdk=/home/ubuntu/dpdk-2.0.0/x86_64-native-linuxapp-gcc) .
> 
> It's been a while since I've used OVS 2.4, but from what I remember, with
> DPDK 2.0 it used vhost-cuse, and there were extra steps required to enable
> the vhost-cuse interfaces in DPDK which could crop up at compilation as
> you are seeing; I'm wondering if this is the issue you're hitting.
> 
> Have you followed the build instructions from the 2.0 documentation,
> specifically installing fuse and fuse-devel (libfuse-dev for Ubuntu) and
> then updating the config/common-linuxapp file in DPDK with the following?
> 
> CONFIG_RTE_BUILD_COMBINE_LIBS=y
> CONFIG_RTE_LIBRTE_VHOST=y
> 
> Steps to build are also available in the link below.
> 
> https://github.com/openvswitch/ovs/blob/branch-2.4/INSTALL.DPDK.md
> 
> It might be worth giving it a shot. (Please note vhost-cuse support has
> since been removed from OVS with DPDK and replaced with vhostuser)
> >
> > The GTP protocol functionality is implemented in OVS 2.4.0 code base.
> > So I have to use OVS 2.4.0 since merging the GTP packet handling code
> > to OVS
> > 2.8.0 due to lot code changes between OVS 2.4.0 and 2.8.0
> 
> A lot of work has been put into cleaning up the DPDK aspect of OVS to
> make sure it plays nicely when hitting issues like this. Normally I would
> encourage users to move to 2.8, but it seems that's not an option for you
> unfortunately.
> 
> Ian
> >
> > Regards,
> > Ranjith
> >
> > -Original Message-
> > From: Ben Pfaff [mailto:b...@ovn.org]
> > Sent: Thursday, September 14, 2017 11:45 AM
> > To: Ranjith Kumar D <ranjith.ku...@radisys.com>
> > Cc: disc...@openvswitch.org
> > Subject: Re: [ovs-discuss] Compilation error : lib/netdev-dpdk.c:55:28:
> > fatal error: rte_virtio_net.h: No such file or directory
> >
> > I'm confused, then.  How did you successfully link DPDK 2.0 to OVS
> > 2.4.0 without first compiling it?  Why is OVS 2.8 mentioned?
> >
> > On Thu, Sep 14, 2017 at 06:05:52AM +, Ranjith Kumar D wrote:
> > > Hi Ben Pfaff,
> > >
> > > I am building with OVS 2.4.0.
> > >
> > > Regards,
> > > Ranjith
> > >
> > > -Original Message-
> > > From: Ben Pfaff [mailto:b...@ovn.org]
> > > Sent: Thursday, September 14, 2017 11:33 AM
> > > To: Ranjith Kumar D <ranjith.ku...@radisys.com>
> > > Cc: disc...@openvswitch.org
> > > Subject: Re: [ovs-discuss] Compilation error :
> > > lib/netdev-dpdk.c:55:28: fatal error: rte_virtio_net.h: No such file
> > > or directory
> > >
> > > On Thu, Sep 14, 2017 at 05:58:07AM +, Ranjith Kumar D wrote:
> > > > Hello All,
> > > >
> > > > We have GTP packet handling functionality with OVS 2.4.0  and it's
> > > > very hard to merge changes to OVS 2.8.0
> > > >
> > > > I have installed DPDK 2.0 and successfully linked with OVS 2.4.0,
> > > > but
> > compilation is failing with below error:
> > > >
> > > > lib/netdev-dpdk.c:55:28: fatal error: rte_virtio_net.h: No such
> > > > file or directory
> > >
> > > It isn't clear whether you're building OVS 2.4 or 2.8, but OVS 2.8
> > > works
> > with DPDK 17.05.1 (not 2.0).


Re: [ovs-discuss] Compilation error : lib/netdev-dpdk.c:55:28: fatal error: rte_virtio_net.h: No such file or directory

2017-09-14 Thread Stokes, Ian
> I mean the step configure was successful (./configure --with-
> dpdk=/home/ubuntu/dpdk-2.0.0/x86_64-native-linuxapp-gcc) .

It's been a while since I've used OVS 2.4, but from what I remember, with DPDK 2.0 it used vhost-cuse, and there were extra steps required to enable the vhost-cuse interfaces in DPDK which could crop up at compilation as you are seeing; I'm wondering if this is the issue you're hitting.

Have you followed the build instructions from the 2.0 documentation, specifically installing fuse and fuse-devel (libfuse-dev for Ubuntu) and then updating the config/common-linuxapp file in DPDK with the following?

CONFIG_RTE_BUILD_COMBINE_LIBS=y
CONFIG_RTE_LIBRTE_VHOST=y

Steps to build are also available in the link below.

https://github.com/openvswitch/ovs/blob/branch-2.4/INSTALL.DPDK.md

It might be worth giving it a shot. (Please note vhost-cuse support has since 
been removed from OVS with DPDK and replaced with vhostuser) 
> 
> The GTP protocol functionality is implemented in OVS 2.4.0 code base. So I
> have to use OVS 2.4.0 since merging the GTP packet handling code to OVS
> 2.8.0 due to lot code changes between OVS 2.4.0 and 2.8.0

A lot of work has been put into cleaning up the DPDK aspect of OVS to make sure it plays nicely when hitting issues like this. Normally I would encourage users to move to 2.8, but it seems that's not an option for you, unfortunately.

Ian
> 
> Regards,
> Ranjith
> 
> -Original Message-
> From: Ben Pfaff [mailto:b...@ovn.org]
> Sent: Thursday, September 14, 2017 11:45 AM
> To: Ranjith Kumar D 
> Cc: disc...@openvswitch.org
> Subject: Re: [ovs-discuss] Compilation error : lib/netdev-dpdk.c:55:28:
> fatal error: rte_virtio_net.h: No such file or directory
> 
> I'm confused, then.  How did you successfully link DPDK 2.0 to OVS 2.4.0
> without first compiling it?  Why is OVS 2.8 mentioned?
> 
> On Thu, Sep 14, 2017 at 06:05:52AM +, Ranjith Kumar D wrote:
> > Hi Ben Pfaff,
> >
> > I am building with OVS 2.4.0.
> >
> > Regards,
> > Ranjith
> >
> > -Original Message-
> > From: Ben Pfaff [mailto:b...@ovn.org]
> > Sent: Thursday, September 14, 2017 11:33 AM
> > To: Ranjith Kumar D 
> > Cc: disc...@openvswitch.org
> > Subject: Re: [ovs-discuss] Compilation error :
> > lib/netdev-dpdk.c:55:28: fatal error: rte_virtio_net.h: No such file
> > or directory
> >
> > On Thu, Sep 14, 2017 at 05:58:07AM +, Ranjith Kumar D wrote:
> > > Hello All,
> > >
> > > We have GTP packet handling functionality with OVS 2.4.0  and it's
> > > very hard to merge changes to OVS 2.8.0
> > >
> > > I have installed DPDK 2.0 and successfully linked with OVS 2.4.0, but
> compilation is failing with below error:
> > >
> > > lib/netdev-dpdk.c:55:28: fatal error: rte_virtio_net.h: No such file
> > > or directory
> >
> > It isn't clear whether you're building OVS 2.4 or 2.8, but OVS 2.8 works
> with DPDK 17.05.1 (not 2.0).


Re: [ovs-discuss] IPsec offload for ixgbe/i40e drivers

2017-10-15 Thread Stokes, Ian
> Thank You Ian
> I'll look into it and update you
> Best Regards
> Avi

No problem Avi,

One minor correction to my original reply: I don't think the X710/XL710 i40e devices support IPsec offload functionality (at least not from what I can see in their data sheets). The Intel X540 and 82599 families of NICs do have support; their datasheets have the specifics with regards to capabilities.

Thanks
Ian
> 
> > -Original Message-----
> > From: Stokes, Ian [mailto:ian.sto...@intel.com]
> > Sent: Monday, 09 October, 2017 1:10 PM
> > To: Greg Rose; Avi Cohen (A); us...@dpdk.org;
> > ovs-discuss@openvswitch.org
> > Subject: RE: [ovs-discuss] IPsec offload for ixgbe/i40e drivers
> >
> > > On 10/03/2017 05:35 AM, Avi Cohen (A) wrote:
> > > > Hi,
> > > > These Intel  NIC's:  X540, 82599, I40E - supports IPsec offload
> > > > But I don't see that the drivers  supplied by Intel - handle it
> > > > (??) Also I don't see any reference in the DPDK userspace drivers
> > > librte_pmd_ixgbe.c ..
> > > > Can someone tell if this is supported somewhere ?
> > > > Best Regards
> > > > Avi
> > >
> >
> > Hi Avi,
> >
> > The NICs do support the IPsec offload feature but currently this is
> > not supported in DPDK.
> >
> > There is ongoing work with regards the RTE_SECURITY interfaces which
> > will be used to handle this type of offload in the DPDK community . It
> > will not be just Intel nics supporting this feature and is expected
> > that all nics that do support it with DPDK will use the RTE_SECURITY
> > framework. There is an ongoing discussion on the DPDK ML with regards
> > to its design and use below
> >
> > http://dpdk.org/dev/patchwork/patch/29835/
> >
> > Currently I'm also looking at implementing IPsec (non-offload, look
> > aside only) using VPMDs and QAT devices in OVS with DPDK although this
> > work is at early stages. It may be of use to you as I would hope to
> > integrate it with offload functionality down the line.
> >
> > If you have any feedback I would be interested to hear.
> >
> > https://mail.openvswitch.org/pipermail/ovs-dev/2017-August/337919.html
> >
> > Thanks
> > Ian
> > > Maybe ask the guys at Intel?
> > >
> > > https://lists.osuosl.org/mailman/listinfo/intel-wired-lan
> > >
> > > Regards,
> > >
> > > - Greg
> >
> > >


Re: [ovs-discuss] IPsec offload for ixgbe/i40e drivers

2017-10-09 Thread Stokes, Ian
> On 10/03/2017 05:35 AM, Avi Cohen (A) wrote:
> > Hi,
> > These Intel  NIC's:  X540, 82599, I40E - supports IPsec offload But I
> > don't see that the drivers  supplied by Intel - handle it (??) Also I
> > don't see any reference in the DPDK userspace drivers
> librte_pmd_ixgbe.c ..
> > Can someone tell if this is supported somewhere ?
> > Best Regards
> > Avi
> 

Hi Avi,

The NICs do support the IPsec offload feature but currently this is not 
supported in DPDK.

There is ongoing work in the DPDK community on the RTE_SECURITY interfaces, which will be used to handle this type of offload. It will not be just Intel NICs supporting this feature; it is expected that all NICs that support it with DPDK will use the RTE_SECURITY framework. There is an ongoing discussion on the DPDK ML regarding its design and use below.

http://dpdk.org/dev/patchwork/patch/29835/

Currently I'm also looking at implementing IPsec (non-offload, look-aside only) using VPMDs and QAT devices in OVS with DPDK, although this work is at an early stage. It may be of use to you, as I would hope to integrate it with offload functionality down the line.

If you have any feedback I would be interested to hear.

https://mail.openvswitch.org/pipermail/ovs-dev/2017-August/337919.html

Thanks
Ian
> Maybe ask the guys at Intel?
> 
> https://lists.osuosl.org/mailman/listinfo/intel-wired-lan
> 
> Regards,
> 
> - Greg

> 


Re: [ovs-discuss] Tx/Rx count not increasing OVS-DPDK

2017-11-29 Thread Stokes, Ian
> Hi Team
> 
> I'm having 2 VMs running with ovs-dpdk as a networking agent on openstack
> compute node.

Could you provide more detail on the components you are running (OVS and DPDK versions, etc.)?

> When I'm checking the external connectivity of the VMs by pinging to the
> external world,the Tx/Rx count of the VMs is not increasing.
> 

Just to clarify, do you mean the stats for the device within the VM (i.e. you're using something like ifconfig to check the rx/tx stats), or do you mean the OVS-DPDK stats for the ports connected to the VMs themselves (for example, vhost ports)?

> 
> However I'm able to ping the local-Ip of the respective Vms.

Do you mean you're able to ping the IP of the VMs internally, i.e. essentially pinging localhost?

CC'ing Sean Mooney as I'm not the most experienced with OpenStack and Sean 
might be able to help.

Thanks
Ian
> 
> Let me know the possible solution for this.
> Regards
> Abhishek Jain


Re: [ovs-discuss] a bug in open switch-2.8.1

2017-12-15 Thread Stokes, Ian
Hi Yangyunlong,

Thanks for your email and work investigating the issue.

DPDK 17.11 is not supported by default for OVS 2.8.1. OVS 2.8.1 uses DPDK 
17.05.2. Typically support for new DPDK versions are not backported between 
releases so as such this is not a bug.

However, support for DPDK 17.11 was upstreamed recently to the master OVS branch with commit 5e925cc; it will be supported in the upcoming OVS 2.9 release and includes the changes you have detailed below.

If you are interested in using 17.11 with the master branch I recommend you 
checkout commit dc92f724d4641bcf9fce95db262f314264e473af.

This commit supports 17.11 as well as a few bug fixes for running travis with 
DPDK 17.11.
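That is, roughly:

git clone https://github.com/openvswitch/ovs.git
cd ovs
git checkout dc92f724d4641bcf9fce95db262f314264e473af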

Thanks
Ian



From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of ??&??
Sent: Friday, December 15, 2017 2:53 AM
To: bugs 
Subject: [ovs-discuss] a bug in open switch-2.8.1

Hello,
  My name is Yangyunlong; I am Chinese. I am using openvswitch-2.8.1 with dpdk-17.11.tar; my system is Linux version 4.9.29 and my gcc version is 5.4.0. When I compile Open vSwitch with DPDK, compilation fails, and I found a bug in Open vSwitch: the error refers to lines 2453 and 2455 of netdev-dpdk.c. pci_dev at lines 2453 and 2455 is a member of dev_info, which is of type struct rte_eth_dev_info, defined in rte_ethdev.h. pci_dev in rte_eth_dev_info is of type struct rte_pci_device, which is only forward-declared at line 1007 of rte_ethdev.h. So pci_dev in dev_info is an incomplete type and its members cannot be used, but dev_info.pci_dev->id.vendor_id and dev_info.pci_dev->id.device_id are used at lines 2453 and 2455 of netdev-dpdk.c, so my compile failed. After I included rte_bus_pci.h at the top of netdev-dpdk.c, the compile succeeded.
I write this email just to help you correct it, and to help Open vSwitch get better and better.
Sent from my iPhone


Re: [ovs-discuss] IPsec offloading

2017-11-02 Thread Stokes, Ian
> Does the OVS support HW IPsec offload ? can OVS configure the NIC/Network
> adapter to ipsec a specific flow ?

Hi Avi, as far as I'm aware this feature isn't currently available in OVS.

Ian
> Thank You
> Avi
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] FW: Mempool issue for OVS 2.9

2018-01-26 Thread Stokes, Ian
Hi All,

Recently an issue was raised regarding the move from a single shared mempool 
model that was in place up to OVS 2.8, to a mempool per port model introduced 
in 2.9.

https://mail.openvswitch.org/pipermail/ovs-discuss/2018-January/046021.html

The per-port mempool model was introduced on September 5th with commit 
d555d9bded to allow fine-grained control of memory usage on a per-port basis.

In the 'common/shared mempool' model, ports sharing a similar MTU would all 
share the same buffer mempool (e.g. the most common example of this being that 
all ports are by default created with a 1500B MTU, and as such share the same 
mbuf mempool).

This approach had some drawbacks however. For example, with the shared memory 
pool model a user could exhaust the shared memory pool (for instance by 
requesting a large number of RXQs for a port), this would cause the vSwitch to 
crash as any remaining ports would not have the required memory to function. 
This bug was discovered and reported to the community in late 2016 
https://mail.openvswitch.org/pipermail/ovs-discuss/2016-September/042560.html.

The per port mempool patch aimed to avoid such issues by allocating a separate 
buffer mempool to each port.

An issue has been flagged on ovs-discuss whereby memory dimensions provided 
for a given number of ports on OvS 2.6-2.8 may be insufficient to support the 
same number of ports in OvS 2.9 under the per-port mempool model, unless 
extra memory is dimensioned in. The effect of this is use-case dependent 
(number of ports, RXQs, MTU settings, number of PMDs, etc.). The previous 
'common-pool' model was rudimentary in estimating the mempool size and was 
marked as something to be improved upon. The memory allocation calculation 
for the per-port model was modified to take the possible configuration 
factors mentioned into account.
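For illustration, the kind of per-port calculation involved can be sketched as follows. The constants and exact formula live in netdev-dpdk.c and differ between releases; the values here are assumed examples, not the precise OVS 2.9 arithmetic:

```shell
# Illustrative per-port mbuf estimate for the per-port mempool model.
# All values are assumed examples; see netdev-dpdk.c for the real formula.
n_rxq=2; rxq_size=2048        # requested RX queues and descriptors per queue
n_txq=2; txq_size=2048        # requested TX queues and descriptors per queue
NETDEV_MAX_BURST=32           # one in-flight batch per RX queue
MIN_NB_MBUF=16384             # floor so the pool stays usable

n_mbufs=$(( n_rxq*rxq_size + n_txq*txq_size + n_rxq*NETDEV_MAX_BURST + MIN_NB_MBUF ))
echo "$n_mbufs"
```

Multiplying an estimate like this by the port count shows why memory dimensioned for the old shared pool can fall short under the per-port model.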

It's unfortunate that this came to light so close to the release code freeze - 
but better late than never as it is a valid problem to be resolved.

I wanted to highlight some options to the community as I don't think the next 
steps should be taken in isolation due to the impact this feature has.

There are a number of possibilities for the 2.9 release.

(i) Revert the mempool per port patches and return to the shared mempool model. 
There are a number of features and refactoring in place on top of the change so 
this will not be a simple revert. I'm investigating what exactly is involved 
with this currently.
(ii) Leave the per port mempool implementation as is, flag to users that memory 
requirements have increased. Extra memory may have to be provided on a per use 
case basis.
(iii) Reduce the amount of memory allocated per mempool per port. An RFC to 
this effect was submitted by Kevin but on follow up the feeling is that it does 
not resolve the issue adequately.
(iv) Introduce a feature to allow users to configure mempool as shared or on a 
per port basis: This would be the best of both worlds but given the proximity 
to the 2.9 freeze I don't think it's feasible by the end of January.

I'd appreciate people's input on this ASAP so we can reach a consensus on the 
next steps.

Thanks
Ian

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Mempool issue for OVS 2.9

2018-01-26 Thread Stokes, Ian
> -Original Message-
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Friday, January 26, 2018 3:48 PM
> To: Ilya Maximets <i.maxim...@samsung.com>; Stokes, Ian
> <ian.sto...@intel.com>; ovs-discuss@openvswitch.org
> Cc: Flavio Leitner <f...@redhat.com>; Loftus, Ciara
> <ciara.lof...@intel.com>; Kavanagh, Mark B <mark.b.kavan...@intel.com>;
> Jan Scheurich (jan.scheur...@ericsson.com) <jan.scheur...@ericsson.com>;
> Ben Pfaff (b...@ovn.org) <b...@ovn.org>; acon...@redhat.com; Venkatesan
> Pradeep <venkatesan.prad...@ericsson.com>
> Subject: Re: Mempool issue for OVS 2.9
> 
> On 01/26/2018 03:16 PM, Ilya Maximets wrote:
> > On 26.01.2018 15:00, Stokes, Ian wrote:
> >> Hi All,
> >>
> >> Recently an issue was raised regarding the move from a single shared
> mempool model that was in place up to OVS 2.8, to a mempool per port model
> introduced in 2.9.
> >>
> >> https://mail.openvswitch.org/pipermail/ovs-discuss/2018-January/04602
> >> 1.html
> >>
> >> The per port mempool model was introduced in September 5th with commit
> d555d9bded to allow fine grain control on a per port case of memory usage.
> >>
> >> In the 'common/shared mempool' model, ports sharing a similar MTU would
> all share the same buffer mempool (e.g. the most common example of this
> being that all ports are by default created with a 1500B MTU, and as such
> share the same mbuf mempool).
> >>
> >> This approach had some drawbacks however. For example, with the shared
> memory pool model a user could exhaust the shared memory pool (for
> instance by requesting a large number of RXQs for a port), this would
> cause the vSwitch to crash as any remaining ports would not have the
> required memory to function. This bug was discovered and reported to the
> community in late 2016 https://mail.openvswitch.org/pipermail/ovs-
> discuss/2016-September/042560.html.
> >>
> >> The per port mempool patch aimed to avoid such issues by allocating a
> separate buffer mempool to each port.
> >>
> >> An issue has been flagged on ovs-discuss, whereby memory dimensions
> provided for a given number of ports on OvS 2.6-2.8 may be insufficient to
> support the same number of ports in OvS 2.9, on account of the per-port
> mempool model without re-dimensioning extra memory. The effect of this is
> use case dependent (number of ports, RXQs, MTU settings, number of PMDs
> etc.) The previous 'common-pool' model was rudimentary in estimating the
> mempool size and was marked as something that was to be improved upon. The
> memory allocation calculation for per port model was modified to take the
> possible configuration factors mentioned into account.
> >>
> >> It's unfortunate that this came to light so close to the release code
> freeze - but better late than never as it is a valid problem to be
> resolved.
> >>
> >> I wanted to highlight some options to the community as I don't think
> the next steps should be taken in isolation due to the impact this feature
> has.
> >>
> >> There are a number of possibilities for the 2.9 release.
> >>
> >> (i) Revert the mempool per port patches and return to the shared
> mempool model. There are a number of features and refactoring in place on
> top of the change so this will not be a simple revert. I'm investigating
> what exactly is involved with this currently.
> >
> > This looks like a bad thing to do. Shared memory pools has their own
> > issues and hides the real memory usage by each port. Also, reverting
> > seems almost impossible, we'll have to re-implement it from scratch.

I would agree; reverting isn't as straightforward an option due to the number 
of commits that were introduced in relation to the per-port mempool feature 
over time (listed below for completeness).

netdev-dpdk: Create separate memory pool for each port: d555d9b
netdev-dpdk: fix management of pre-existing mempools:b6b26021d
netdev-dpdk: Fix mempool names to reflect socket id: f06546a
netdev-dpdk: skip init for existing mempools: 837c176
netdev-dpdk: manage failure in mempool name creation: 65056fd 
netdev-dpdk: Reword mp_size as n_mbufs: ad9b5b9
netdev-dpdk: Rename dpdk_mp_put as dpdk_mp_free: a08a115
netdev-dpdk: Fix mp_name leak on snprintf failure: ec6edc8
netdev-dpdk: Fix dpdk_mp leak in case of EEXIST: 173ef76
netdev-dpdk: Factor out struct dpdk_mp: 24e78f9
netdev-dpdk: Remove unused MAX_NB_MBUF: bc57ed9
netdev-dpdk: Fix mempool creation with large MTU: af5b0da
netdev-dpdk: Add debug appctl to get mempool information: be48173

Although a lot of these are fixes/formatting, we would have to introduce a new 
series and

Re: [ovs-discuss] segmentation fault when adding a VF in DPDK to a switch

2018-01-11 Thread Stokes, Ian
Hi Ricardo,

Thanks for reporting the issue and providing the steps to reproduce.

I was able to reproduce this with an i40e VF using igb_uio.

In short it seems there is no support currently for ixgbe and i40e VF devices 
in OVS with DPDK.

There are two issues at play here: the first is the configuration error when 
creating and starting the VF in DPDK; the second is the segfault in OVS.

The configuration of the VF fails (for the i40e device at least) because DPDK 
expects the HW_CRC stripping flag to be enabled in the device configuration 
for VFs. In your logs you will see an error reporting this. By default this 
seems to be disabled for VFs in OVS.

Looking at the DPDK code, this is confirmed by the following in 
i40evf_dev_configure(), which code execution hits:

    /* For non-DPDK PF drivers, VF has no ability to disable HW
     * CRC strip, and is implicitly enabled by the PF.
     */
    if (!conf->rxmode.hw_strip_crc) {
        vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
        if ((vf->version_major == VIRTCHNL_VERSION_MAJOR) &&
            (vf->version_minor <= VIRTCHNL_VERSION_MINOR)) {
            /* Peer is running non-DPDK PF driver. */
            PMD_INIT_LOG(ERR, "VF can't disable HW CRC Strip");
            return -EINVAL;
        }
    }

Out of interest, I enabled HW_CRC in the device configuration manually in the 
OVS code for testing purposes. Although this allows the queue configuration 
to succeed, the VF will later fail to start due to an issue with VSI queue 
mapping when DPDK attempts to start the device. I'll have to take another 
look to see what exactly is going wrong here; I suspect more configuration is 
needed for VFs than for PFs.

The segmentation fault happens due to the error occurring during the 
dpdk_eth_dev_queue_setup() function; this is a separate issue and unrelated 
to VFs. I have seen failures in this area cause segmentation faults in OVS 
before, so it's an area that needs to be looked at again to handle DPDK 
errors properly, IMO.

I hope this answers your question and I’ll follow up once I have a little more 
info on how to enable the VF functionality.

Thanks
Ian



From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Riccardo Ravaioli
Sent: Thursday, January 11, 2018 10:27 AM
To: ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] segmentation fault when adding a VF in DPDK to a 
switch

Here are the steps to reproduce the issue:
1. Create one Virtual Function (VF) on a physical interface that supports 
SR-IOV (in my case it's an Intel i350 interface):
$ echo 1 > /sys/class/net/eth10/device/sriov_numvfs
2. Lookup its PCI address, for example with dpdk-devbind.py:
$ dpdk-devbind.py --status-dev net
0000:05:10.3 'I350 Ethernet Controller Virtual Function 1520' if=eth11 
drv=igbvf unused=igb_uio,vfio-pci,uio_pci_generic
3. Bind the VF to a DPDK-compatible driver. I'll use vfio-pci, but igb_uio too 
will reproduce the issue:
$ dpdk-devbind.py --bind=vfio-pci 0000:05:10.3
4. Create an OVS bridge and set its datapath type to netdev:
$ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
5. Add the VF to the bridge as a DPDK interface:
$ ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk 
options:dpdk-devargs=0000:05:10.3
6. Now ovs-vswitchd.log reports that OVS repeatedly crashes (segmentation 
fault) and restarts itself, in a loop:
2018-01-11T09:28:28.338Z|00139|dpdk|INFO|EAL: PCI device 0000:05:10.3 on NUMA 
socket 0
2018-01-11T09:28:28.338Z|00140|dpdk|INFO|EAL:   probe driver: 8086:1520 
net_e1000_igb_vf
2018-01-11T09:28:28.338Z|00141|dpdk|INFO|EAL:   using IOMMU type 1 (Type 1)
2018-01-11T09:28:28.560Z|00142|dpdk|INFO|PMD: eth_igbvf_dev_init(): VF MAC 
address not assigned by Host PF
2018-01-11T09:28:28.560Z|00143|dpdk|INFO|PMD: eth_igbvf_dev_init(): Assign 
randomly generated MAC address c6:13:67:7b:31:6b
2018-01-11T09:28:28.560Z|00144|netdev_dpdk|INFO|Device '0000:05:10.3' attached 
to DPDK
2018-01-11T09:28:28.563Z|00145|dpif_netdev|INFO|PMD thread on numa_id: 0, core 
id:  3 created.
2018-01-11T09:28:28.566Z|00146|dpif_netdev|INFO|PMD thread on numa_id: 0, core 
id:  2 created.
2018-01-11T09:28:28.566Z|00147|dpif_netdev|INFO|There are 2 pmd threads on numa 
node 0
2018-01-11T09:28:28.646Z|00148|dpdk|INFO|PMD: igbvf_dev_configure(): VF can't 
disable HW CRC Strip
2018-01-11T09:28:28.646Z|00149|netdev_dpdk|ERR|Interface dpdk-p0 MTU (1500) 
setup error: Operation not supported
2018-01-11T09:28:28.646Z|00150|netdev_dpdk|ERR|Interface dpdk-p0(rxq:1 txq:1) 
configure error: Operation not supported
2018-01-11T09:28:29.062Z|2|daemon_unix(monitor)|ERR|1 crashes: pid 2494 
died, killed (Segmentation fault), core dumped, restarting

7. 

Re: [ovs-discuss] Mempool issue for OVS 2.9

2018-01-29 Thread Stokes, Ian
> -Original Message-
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Monday, January 29, 2018 8:03 AM
> To: Stokes, Ian <ian.sto...@intel.com>; Ilya Maximets
> <i.maxim...@samsung.com>; ovs-discuss@openvswitch.org
> Cc: Flavio Leitner <f...@redhat.com>; Loftus, Ciara
> <ciara.lof...@intel.com>; Kavanagh, Mark B <mark.b.kavan...@intel.com>;
> Jan Scheurich (jan.scheur...@ericsson.com) <jan.scheur...@ericsson.com>;
> Ben Pfaff (b...@ovn.org) <b...@ovn.org>; acon...@redhat.com; Venkatesan
> Pradeep <venkatesan.prad...@ericsson.com>
> Subject: Re: Mempool issue for OVS 2.9
> 
> On 01/26/2018 05:27 PM, Stokes, Ian wrote:
> >> -Original Message-
> >> From: Kevin Traynor [mailto:ktray...@redhat.com]
> >> Sent: Friday, January 26, 2018 3:48 PM
> >> To: Ilya Maximets <i.maxim...@samsung.com>; Stokes, Ian
> >> <ian.sto...@intel.com>; ovs-discuss@openvswitch.org
> >> Cc: Flavio Leitner <f...@redhat.com>; Loftus, Ciara
> >> <ciara.lof...@intel.com>; Kavanagh, Mark B
> >> <mark.b.kavan...@intel.com>; Jan Scheurich
> >> (jan.scheur...@ericsson.com) <jan.scheur...@ericsson.com>; Ben Pfaff
> >> (b...@ovn.org) <b...@ovn.org>; acon...@redhat.com; Venkatesan Pradeep
> >> <venkatesan.prad...@ericsson.com>
> >> Subject: Re: Mempool issue for OVS 2.9
> >>
> >> On 01/26/2018 03:16 PM, Ilya Maximets wrote:
> >>> On 26.01.2018 15:00, Stokes, Ian wrote:
> >>>> Hi All,
> >>>>
> >>>> Recently an issue was raised regarding the move from a single
> >>>> shared
> >> mempool model that was in place up to OVS 2.8, to a mempool per port
> >> model introduced in 2.9.
> >>>>
> >>>> https://mail.openvswitch.org/pipermail/ovs-discuss/2018-January/046
> >>>> 02
> >>>> 1.html
> >>>>
> >>>> The per port mempool model was introduced in September 5th with
> >>>> commit
> >> d555d9bded to allow fine grain control on a per port case of memory
> usage.
> >>>>
> >>>> In the 'common/shared mempool' model, ports sharing a similar MTU
> >>>> would
> >> all share the same buffer mempool (e.g. the most common example of
> >> this being that all ports are by default created with a 1500B MTU,
> >> and as such share the same mbuf mempool).
> >>>>
> >>>> This approach had some drawbacks however. For example, with the
> >>>> shared
> >> memory pool model a user could exhaust the shared memory pool (for
> >> instance by requesting a large number of RXQs for a port), this would
> >> cause the vSwitch to crash as any remaining ports would not have the
> >> required memory to function. This bug was discovered and reported to
> >> the community in late 2016
> >> https://mail.openvswitch.org/pipermail/ovs-
> >> discuss/2016-September/042560.html.
> >>>>
> >>>> The per port mempool patch aimed to avoid such issues by allocating
> >>>> a
> >> separate buffer mempool to each port.
> >>>>
> >>>> An issue has been flagged on ovs-discuss, whereby memory dimensions
> >> provided for a given number of ports on OvS 2.6-2.8 may be
> >> insufficient to support the same number of ports in OvS 2.9, on
> >> account of the per-port mempool model without re-dimensioning extra
> >> memory. The effect of this is use case dependent (number of ports,
> >> RXQs, MTU settings, number of PMDs
> >> etc.) The previous 'common-pool' model was rudimentary in estimating
> >> the mempool size and was marked as something that was to be improved
> >> upon. The memory allocation calculation for per port model was
> >> modified to take the possible configuration factors mentioned into
> account.
> >>>>
> >>>> It's unfortunate that this came to light so close to the release
> >>>> code
> >> freeze - but better late than never as it is a valid problem to be
> >> resolved.
> >>>>
> >>>> I wanted to highlight some options to the community as I don't
> >>>> think
> >> the next steps should be taken in isolation due to the impact this
> >> feature has.
> >>>>
> >>>> There are a number of possibilities for the 2.9 release.
> >>>>
> >>>> (i) Revert the mempool per port patches and return to 

Re: [ovs-discuss] segmentation fault when adding a VF in DPDK to a switch

2018-02-01 Thread Stokes, Ian
Hi Ricardo,

Apologies for the delay. Unfortunately with the OVS 2.9 release I haven’t had 
much time to look at this further.

At the very least I think work needs to be done for dpdk.c and netdev-dpdk.c to 
enable configuration of VFs specifically (to account for the HW_CRC and VSI 
queue configurations).

There would also be a task to ensure the work required for enabling a VF on the 
i40e driver would also cover enabling a VF for the ixgbe driver. In DPDK it’s 
been the case in the past that driver implementations for different NIC devices 
can differ.

This could be looked at in the OVS 2.10 development cycle at some point. I can 
post an update here when there is progress.

Thanks
Ian

From: scaricapo...@gmail.com [mailto:scaricapo...@gmail.com] On Behalf Of 
Riccardo Ravaioli
Sent: Thursday, January 25, 2018 4:35 PM
To: Stokes, Ian <ian.sto...@intel.com>
Cc: ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] segmentation fault when adding a VF in DPDK to a 
switch

Hi Ian,
Thanks for looking into the issue. Anything new?
Thanks a lot!
Riccardo

On 11 January 2018 at 23:50, Stokes, Ian 
<ian.sto...@intel.com<mailto:ian.sto...@intel.com>> wrote:
Hi Ricardo,

Thanks for reporting the issue and providing the steps to reproduce.

I was able to reproduce this with an i40e VF using igb_uio.

In short it seems there is no support currently for ixgbe and i40e VF devices 
in OVS with DPDK.

There are two issues at play here: the first is the configuration error when 
creating and starting the VF in DPDK; the second is the segfault in OVS.

The configuration of the VF fails (for the i40e device at least) because DPDK 
expects the HW_CRC stripping flag to be enabled in the device configuration 
for VFs. In your logs you will see an error reporting this. By default this 
seems to be disabled for VFs in OVS.

Looking at the DPDK code, this is confirmed by the following in 
i40evf_dev_configure(), which code execution hits:

    /* For non-DPDK PF drivers, VF has no ability to disable HW
     * CRC strip, and is implicitly enabled by the PF.
     */
    if (!conf->rxmode.hw_strip_crc) {
        vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
        if ((vf->version_major == VIRTCHNL_VERSION_MAJOR) &&
            (vf->version_minor <= VIRTCHNL_VERSION_MINOR)) {
            /* Peer is running non-DPDK PF driver. */
            PMD_INIT_LOG(ERR, "VF can't disable HW CRC Strip");
            return -EINVAL;
        }
    }

Out of interest, I enabled HW_CRC in the device configuration manually in the 
OVS code for testing purposes. Although this allows the queue configuration 
to succeed, the VF will later fail to start due to an issue with VSI queue 
mapping when DPDK attempts to start the device. I'll have to take another 
look to see what exactly is going wrong here; I suspect more configuration is 
needed for VFs than for PFs.

The segmentation fault happens due to the error occurring during the 
dpdk_eth_dev_queue_setup() function; this is a separate issue and unrelated 
to VFs. I have seen failures in this area cause segmentation faults in OVS 
before, so it's an area that needs to be looked at again to handle DPDK 
errors properly, IMO.

I hope this answers your question and I’ll follow up once I have a little more 
info on how to enable the VF functionality.

Thanks
Ian



From: 
ovs-discuss-boun...@openvswitch.org<mailto:ovs-discuss-boun...@openvswitch.org> 
[mailto:ovs-discuss-boun...@openvswitch.org<mailto:ovs-discuss-boun...@openvswitch.org>]
 On Behalf Of Riccardo Ravaioli
Sent: Thursday, January 11, 2018 10:27 AM
To: ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>
Subject: Re: [ovs-discuss] segmentation fault when adding a VF in DPDK to a 
switch

Here are the steps to reproduce the issue:
1. Create one Virtual Function (VF) on a physical interface that supports 
SR-IOV (in my case it's an Intel i350 interface):
$ echo 1 > /sys/class/net/eth10/device/sriov_numvfs
2. Lookup its PCI address, for example with dpdk-devbind.py:
$ dpdk-devbind.py --status-dev net
0000:05:10.3 'I350 Ethernet Controller Virtual Function 1520' if=eth11 
drv=igbvf unused=igb_uio,vfio-pci,uio_pci_generic
3. Bind the VF to a DPDK-compatible driver. I'll use vfio-pci, but igb_uio too 
will reproduce the issue:
$ dpdk-devbind.py --bind=vfio-pci 0000:05:10.3
4. Create an OVS bridge and set its datapath type to netdev:
$ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
5. Add the VF to the bridge as a DPDK interface:
$ ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk 
options:dpdk-devargs=0000:05:10.3
6. Now ovs-vswitchd.log reports that OVS repeatedly crashes (segmentation 

Re: [ovs-discuss] Mempool issue for OVS 2.9

2018-04-10 Thread Stokes, Ian
> >> -Original Message-
> >> From: Ilya Maximets [mailto:i.maxim...@samsung.com]
> >> Sent: Monday, 29 January, 2018 09:35
> >> To: Jan Scheurich <jan.scheur...@ericsson.com>; Venkatesan Pradeep
> >> <venkatesan.prad...@ericsson.com>; Stokes, Ian
> >> <ian.sto...@intel.com>; d...@openvswitch.org
> >> Cc: Kevin Traynor <ktray...@redhat.com>; Flavio Leitner
> >> <f...@redhat.com>; Loftus, Ciara <ciara.lof...@intel.com>; Kavanagh,
> >> Mark B <mark.b.kavan...@intel.com>; Ben Pfaff (b...@ovn.org)
> >> <b...@ovn.org>; acon...@redhat.com; disc...@openvswitch.org
> >> Subject: Re: Mempool issue for OVS 2.9
> >>
> >> On 29.01.2018 11:19, Jan Scheurich wrote:
> >>> Hi,
> >>>
> >>> I'd like to take one step back and look at how much many mbufs we
> actually need.
> >>>
> >>> Today mbufs are consumed in the following places:
> >>>
> >>>  1. Rx queues of **physical** dpdk ports: dev->requested_n_rxq * dev-
> >requested_rxq_size
> >>> Note 1: These mbufs are hogged up at all times.
> >>> Note 2: There is little point in configuring more rx queues per
> phy port than there are PMDs to poll them.
> >>> Note 3: The rx queues of vhostuser ports exist as virtqueues in
> the guest and do not hog mbufs.
> >>>  2. One batch per PMD during processing: #PMD * NETDEV_MAX_BURST  3.
> >>> One batch per tx queue with time-based tx batching:
> >>> dev->requested_n_txq * NETDEV_MAX_BURST  4. Tx queues of **physical**
> ports: dev->requested_n_txq * expected peak tx queue fill level
> >>> Note 1:  The maximum of 2K mbufs per tx queue can only be reached
> if the OVS transmit rate exceeds the line rate for a long time.
> >> This can only happen for large packets and when the traffic
> >> originates from VMs on the compute node. This would be a case of
> >> under- dimensioning and packets would be dropped in any case. Excluding
> that scenario, a typical peak tx queue fill level would be when all PMDs
> transmit a full batch at the same time: #PMDs * NETDEV_MAX_BURST.
> >>
> >> Above assumption is wrong. Just look at ixgbe driver:
> >> drivers/net/ixgbe/ixgbe_rxtx.c: tx_xmit_pkts():
> >>
> >>/*
> >> * Begin scanning the H/W ring for done descriptors when the
> >> * number of available descriptors drops below tx_free_thresh.
> For
> >> * each done descriptor, free the associated buffer.
> >> */
> >>if (txq->nb_tx_free < txq->tx_free_thresh)
> >>┊   ixgbe_tx_free_bufs(txq);
> >>
> >> The default value for 'tx_free_thresh' is 32. So, if I'll configure
> >> number of TX descriptors to 4096, driver will start to free mbufs
> >> only when it will have more than 4063 mbufs inside its TX queue. No
> >> matter how frequent calls to send() function.
> >
> > OK, but that doesn't change my general argument. The mbufs hogged in the
> tx side of the phy port driver are coming from all ports (least likely the
> port itself). Considering them in dimensioning the port's private mempool
> is conceptually wrong. In my simplified dimensioning formula below I have
> already assumed full occupancy of the tx queue for phy ports. The second
> key observation is that vhostuser ports do not hog mbufs at all. And vhost
> zero copy doesn't change that.
> 
> Formula below maybe good for static environment. I want to change number
> of PMD threads dynamically in my deployments and this working in current
> per-port model and with oversized shared pool. If we'll try to reduce
> memory consumption of the shared pool we'll have to reconfigure all the
> devices each time we change the number of PMD threads. This would be
> really bad.
> So, size of the memory pool should not depend on dynamic characteristics
> of the datapath or other ports to avoid unexpected interrupts in traffic
> flows in case of random changes in configuration. Of course, it could
> depend on characteristics of the port itself in case of per-port model. In
> case of shared mempool model the size should only depend on static
> datapath configuration.

Hi all,

Now seems a good time to kick-start this conversation again, as there are a 
few patches floating around for mempools on master and 2.9.
I'm happy to work on a solution for this, but before starting I'd like to 
agree on the requirements so we're all comfortable with the solution.

I see two use cases above, static and dynamic. Each has its own requirements.
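As a back-of-envelope companion to Jan's enumeration quoted above, the four mbuf consumers can be totalled roughly like this. All counts are assumed example values for illustration, not measurements:

```shell
# Rough total of hogged/in-flight mbufs per Jan's four consumers (illustrative).
n_phy_ports=2; n_rxq=2; rxq_size=2048   # 1. phy RX queues hog mbufs at all times
n_pmds=4; NETDEV_MAX_BURST=32           # 2. one batch in flight per PMD
n_txq_total=6                           # 3. one batch per TX queue (time-based tx batching)
n_phy_txq=2                             # 4. phy TX queues at peak fill

rx_hog=$(( n_phy_ports * n_rxq * rxq_size ))
pmd_batches=$(( n_pmds * NETDEV_MAX_BURST ))
txq_batches=$(( n_txq_total * NETDEV_MAX_BURST ))
tx_peak=$(( n_phy_txq * n_pmds * NETDEV_MAX_BURST ))  # all PMDs bursting at once

total=$(( rx_hog + pmd_batches + txq_batches + tx_peak ))
echo "$total"
```

As Ilya points out in the quoted exchange, drivers such as ixgbe can hold far more mbufs in their TX rings than this peak-fill assumption, so the last term is a lower bound.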

Re: [ovs-discuss] ovs-vswitchd high CPU when there is no load

2018-04-11 Thread Stokes, Ian
> One of the OVS-DPDK maintainers will have to speak up about the flow
> control messages.  I don't know.

Hi Michael,

Can you provide the following to help debug this:

OVS Version.
DPDK Version.
NIC Model.
pmd-cpu-mask.
lcore mask.
Port configuration for all ports (including flow control, queue creation etc).


You should note that DPDK uses a Poll Mode Driver (PMD): in essence, it 
continuously polls (regardless of whether traffic is received or not) on the 
core to which a queue of a device is assigned. It's expected in this case 
that the assigned CPU appears 100% utilized to tools such as htop.

Can you post the output of 'ovs-appctl dpif-netdev/pmd-rxq-show' also? This 
will help debug performance. A few more comments inline below.
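To separate genuine load from the PMD's busy polling, the per-PMD cycle counters are handy. The commands below are as in recent OVS releases; the exact output format varies by version:

```shell
# Show per-PMD "processing cycles" vs "idle cycles" to gauge real utilization.
ovs-appctl dpif-netdev/pmd-stats-show
# Clear the counters, wait a few seconds under traffic, then read again.
ovs-appctl dpif-netdev/pmd-stats-clear
```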

> 
> Do you see log messages reporting high CPU usage?  That would ordinarily
> be the case, if threads other than the PMD threads are using excessive
> CPU.
> 
> "top" and other tools can show CPU usage by thread, and OVS gives its
> threads helpful names.  Which threads are using high CPU?
> 
> On Sun, Apr 08, 2018 at 10:15:11AM +0300, michael me wrote:
> > Hi Ben,
> >
> > Thank you so much for your reply.
> > here below are some of the log from ovs-vswitchd.
> >
> > 2018-04-08T09:52:34.897Z|00333|dpdk|WARN|Failed to enable flow control
> > on device 0 2018-04-08T09:52:34.897Z|00334|dpdk|WARN|Failed to enable
> > flow control on device 1
> > 2018-04-08T09:52:35.025Z|00335|dpdk|WARN|Failed to enable flow control
> > on device 0 2018-04-08T09:52:35.025Z|00336|dpdk|WARN|Failed to enable
> > flow control on device 1
> > 2018-04-08T09:52:36.370Z|00337|rconn|INFO|br-int<->tcp:127.0.0.1:6633:
> > connected
> > 2018-04-08T09:52:36.370Z|00338|rconn|INFO|br-eth1<->tcp:127.0.0.1:6633:
> > connected
> > 2018-04-08T09:52:36.370Z|00339|rconn|INFO|br-eth2<->tcp:127.0.0.1:6633:
> > connected
> > 2018-04-08T09:52:37.102Z|00340|dpdk|WARN|Failed to enable flow control
> > on device 0 2018-04-08T09:52:37.102Z|00341|dpdk|WARN|Failed to enable
> > flow control on device 1
> > 2018-04-08T09:52:37.225Z|00342|dpdk|WARN|Failed to enable flow control
> > on device 0 2018-04-08T09:52:37.225Z|00343|dpdk|WARN|Failed to enable
> > flow control on device 1
> > 2018-04-08T09:52:37.298Z|00344|dpdk|WARN|Failed to enable flow control
> > on device 0 2018-04-08T09:52:37.298Z|00345|dpdk|WARN|Failed to enable
> > flow control on device 1
> > 2018-04-08T09:52:37.426Z|00346|dpdk|WARN|Failed to enable flow control
> > on device 0 2018-04-08T09:52:37.426Z|00347|dpdk|WARN|Failed to enable
> > flow control on device 1
> > 2018-04-08T09:52:47.041Z|00348|connmgr|INFO|br-int<->tcp:127.0.0.1:663
> > 3: 7 flow_mods in the 7 s starting 10 s ago (7 adds)
> > 2018-04-08T09:52:47.245Z|00349|connmgr|INFO|br-eth1<->tcp:127.0.0.1:66
> > 33: 3 flow_mods in the 7 s starting 10 s ago (3 adds)
> > 2018-04-08T09:52:47.444Z|00350|connmgr|INFO|br-eth2<->tcp:127.0.0.1:66
> > 33: 3 flow_mods in the 7 s starting 10 s ago (3 adds)
> >
> > is the  "Failed to enable flow control on device" related to my high
> > CPU load?

This looks like you're trying to enable the flow control feature on a device 
that does not support it.
Are you enabling flow control for rx or tx on your devices? You could 
possibly have auto-negotiation enabled when adding the port.

For completeness, the options regarding flow control are documented in:

http://docs.openvswitch.org/en/latest/howto/dpdk/
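If flow control is not wanted (or the link partner cannot negotiate it), it can be switched off per port with something along these lines. Option names are per the documentation linked above; "dpdk-p0" is an assumed port name for illustration, so verify against your OVS release:

```shell
# Disable RX/TX flow control and its autonegotiation on a DPDK port.
ovs-vsctl set Interface dpdk-p0 options:rx-flow-ctrl=false \
                                options:tx-flow-ctrl=false \
                                options:flow-ctrl-autoneg=false
```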

> > Just to be clear, i do get traffic through though performance is not
> > great so it does make sense that there is an issue with the flow,
> > though i don't know how to verify this.

This could be related to the queue pinning. The command 'ovs-appctl 
dpif-netdev/pmd-rxq-show' could help diagnose this if you can share its output.

Thanks
Ian

> >
> > Below are the flows that i could find:
> > root@dpdkApt:/# ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4):
> >  cookie=0xb6de486b197e713c, duration=640.295s, table=0, n_packets=0,
> > n_bytes=0, idle_age=802, priority=3,in_port=3,vlan_tci=0x/0x1fff
> > actions=mod_vlan_vid:2,NORMAL
> >  cookie=0xb6de486b197e713c, duration=640.188s, table=0, n_packets=0,
> > n_bytes=0, idle_age=801, priority=3,in_port=4,vlan_tci=0x/0x1fff
> > actions=mod_vlan_vid:3,NORMAL
> >  cookie=0xb6de486b197e713c, duration=647.450s, table=0, n_packets=0,
> > n_bytes=0, idle_age=911, priority=2,in_port=3 actions=drop
> > cookie=0xb6de486b197e713c, duration=647.251s, table=0, n_packets=0,
> > n_bytes=0, idle_age=910, priority=2,in_port=4 actions=drop
> > cookie=0xb6de486b197e713c, duration=647.670s, table=0, n_packets=947,
> > n_bytes=39930, idle_age=0, priority=0 actions=NORMAL
> > cookie=0xb6de486b197e713c, duration=647.675s, table=23, n_packets=0,
> > n_bytes=0, idle_age=911, priority=0 actions=drop
> > cookie=0xb6de486b197e713c, duration=647.667s, table=24, n_packets=0,
> > n_bytes=0, idle_age=911, priority=0 actions=drop
> >
> > root@dpdkApt:/# ovs-ofctl dump-flows br-eth1 NXST_FLOW reply
> > (xid=0x4):
> >  

Re: [ovs-discuss] There's no available (non-isolated) pmd thread on numa node 0, Expect reduced performance.

2018-04-12 Thread Stokes, Ian
Hi,

I was able to reproduce the issue on my system.

As you are setting all lcore and pmd core to node 1 why are you giving 1024 
memory to node 0?

I saw the same issue on my system but the warning did not appear once memory 
was allocated to node 1 only.

I would think the VM being launched is using memory for the vhost port from 
node 0, however the queue for the vhost port is assigned to core 14, which is on 
node 1. When processing packets for this port, the CPU is accessing data 
across the NUMA nodes, which causes a performance penalty, hence the warning.

To avoid this, you should ensure all memory and cores operate on the same node where 
possible; try using ‘other_config:dpdk-socket-mem=0,4096’ and see if you still 
see the issue.
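For reference, these masks are plain bit masks over core IDs; a small illustrative helper (not part of OVS or DPDK) shows how values such as 0x400 and 0xffc00 in this thread are derived:

```python
def core_mask(cores):
    """Build an OVS/DPDK hex CPU mask: bit N set selects core N."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return hex(mask)

# lcore mask selecting only core 10 (the first core on numa node 1)
print(core_mask([10]))           # 0x400
# pmd-cpu-mask selecting cores 10-19 (all of numa node 1)
print(core_mask(range(10, 20)))  # 0xffc00
```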

Thanks
Ian

From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Alan Kayahan
Sent: Thursday, April 12, 2018 2:27 AM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] There's no available (non-isolated) pmd thread on numa 
node 0, Expect reduced performance.

Hello,

On the following setup, where all cores but 0 are isolated,

available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9
node 1 cpus: 10 11 12 13 14 15 16 17 18 19

I am trying to start OVS entirely on numa node 1 as follows:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true 
other_config:dpdk-lcore-mask=0x00400 other_config:pmd-cpu-mask=0xffc00 
other_config:dpdk-socket-mem=1024,4096

However when I create a vhost port SRC to attach a VNF(via virtio) on node 0, I 
get the following

dpif_netdev|WARN|There's no available (non-isolated) pmd thread on numa node 0. 
Queue 0 on port 'SRC' will be assigned to the pmd on core 14 (numa node 1). 
Expect reduced performance.

Any ideas?

Thanks
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk performance not stable

2018-04-17 Thread Stokes, Ian
Hi Michael,

Are you using dpdk vhostuser ports in this deployment?

I would expect to see them listed in the output of ovs-appctl 
dpif-netdev/pmd-rxq-show you posted below.

Can you describe the expected traffic flow (is it North/South using DPDK phy 
devices as well as vhost devices, or East/West between VM interfaces only)?

OVS 2.6 has the ability to isolate and pin rxq queues for dpdk devices to 
specific PMDs also. This can help provide more stable throughput and defined 
behavior. Without doing this I believe the distribution of rxqs was dealt with 
in a round robin manner which could change between deployments. This could 
explain what you are seeing i.e. sometimes the traffic runs without drops.

You could try to examine ovs-appctl dpif-netdev/pmd-rxq-show when traffic is 
dropping and then again when traffic is passing without issue. This output 
along with the flows in each case might provide a clue as to what is happening. 
If there is a difference between the two you could investigate pinning the rxqs 
to the specific setup, although you will only benefit from this when you have at 
least 2 PMDs instead of 1.
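If you do go the pinning route, the OVS 2.6+ syntax looks roughly like this (the queue and core numbers here are illustrative, not a recommendation for this setup):

```shell
# Pin rxq 0 of dpdk0 to the PMD on core 3 and rxq 1 to the PMD on core 7.
# A PMD with a pinned rxq becomes "isolated" and is not assigned other queues.
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:3,1:7"
```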

Also OVS 2.6 and DPDK 16.07 aren’t the latest releases of OVS & DPDK, have you 
tried the same tests using the latest OVS 2.9 and DPDK 17.11?

Ian

From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of michael me
Sent: Tuesday, April 17, 2018 10:42 AM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] ovs-dpdk performance not stable

Hi Everyone,

I would greatly appreciate any input.

The setting that i am working with is a host with ovs-dpdk connected to a VM.

What i see when i do a performance test is that after about a minute or two 
suddenly i have many drops, as if the cache was full and was dumped improperly.
I tried to play with the settings of the n_rxq and n_txq values, which helps, 
but probably only until the cache is filled, and then i have drops.
The thing is that sometimes, rarely, as if by chance, the performance continues.

My settings is as follows:
OVS Version. 2.6.1
DPDK Version. 16.07.2
NIC Model. Ethernet controller: Intel Corporation Ethernet Connection I354 (rev 
03)
pmd-cpu-mask. on core 1 mask=0x2
lcore mask. core zero "dpdk-lcore-mask=1"

Port "dpdk0"
Interface "dpdk0"
type: dpdk
options: {n_rxq="8", n_rxq_desc="2048", n_txq="9", 
n_txq_desc="2048"}

ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 1:
isolated : false
port: dpdk0 queue-id: 0 1 2 3 4 5 6 7
port: dpdk1 queue-id: 0 1 2 3 4 5 6 7

Thanks,
Michael
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] There's no available (non-isolated) pmd thread on numa node 0, Expect reduced performance.

2018-04-16 Thread Stokes, Ian
Hi Alan,


How are you starting the VM? QEMU or Libvirt?



The dpdk vhost ports are associated with the numa node the virtqueue memory has 
been allocated on initially. So when running the VM you may want to use taskset 
-c with QEMU to allocate cores associated with numa node 1 to run the VM. If 
using libvirt, try to ensure vcpupin corresponds to numa node 1 cores in the xml as well.



In my testing, to ensure that the vhost port used memory from the same socket as 
the core its PMD is running on, I had to compile DPDK with 
CONFIG_RTE_LIBRTE_VHOST_NUMA=y. This avoids the warning altogether, 
regardless of whether memory is allocated to both sockets.
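For reference, a sketch of enabling this at DPDK build time with the make-based build system used by DPDK 17.11/18.11 (the path and build target are illustrative; libnuma development headers must be installed):

```shell
cd $DPDK_DIR
make config T=x86_64-native-linuxapp-gcc
# Flip the vhost NUMA option on in the generated build config, then rebuild.
sed -i 's/CONFIG_RTE_LIBRTE_VHOST_NUMA=n/CONFIG_RTE_LIBRTE_VHOST_NUMA=y/' \
    build/.config
make -j
```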



If you’re interested in how to test this there is a blog for using vhost numa 
aware that could be of use:



https://software.intel.com/en-us/articles/vhost-user-numa-awareness-in-open-vswitch-with-dpdk



Hope this helps.

Ian

From: Alan Kayahan [mailto:hsy...@gmail.com]
Sent: Friday, April 13, 2018 8:05 AM
To: Stokes, Ian <ian.sto...@intel.com>
Cc: ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] There's no available (non-isolated) pmd thread on 
numa node 0, Expect reduced performance.

Hi Ian,

> As you are setting all lcore and pmd core to node 1 why are you giving 1024 
> memory to node 0?
> When processing packets for this port it means cpu is accessing data across 
> the numa nodes which causes a performance penalty
I am benchmarking performance in different settings and trying to understand 
the roles of OVS and DPDK in mediating core affinity of pmds and hugepage 
utilization. Your answer helps a lot!

> try using ‘other_config:dpdk-socket-mem=0,4096’ and see if you still see the 
> issue.
But this warning should appear regardless the socket-mem allocation right? If 
my understanding is correct, when OVS pmd's are pinned to 10-19 and the VM 
tespmd app is pinned to core 2; the OVSpmd thread running on node 1 is having 
to access a huge-page on node 0 which VMtestpmd happens to access as well.

Thanks,
Alan


2018-04-12 18:28 GMT+02:00 Stokes, Ian 
<ian.sto...@intel.com<mailto:ian.sto...@intel.com>>:
Hi,

I was able to reproduce the issue on my system.

As you are setting all lcore and pmd core to node 1 why are you giving 1024 
memory to node 0?

I saw the same issue on my system but the warning did not appear once memory 
was allocated to node 1 only.

I would think the VM being launched is using memory for the vhost port from 
node 0, however the queue for the vhost port is assigned to core 14 which is on 
node1. When processing packets for this port it means cpu is accessing data 
across the numa nodes which causes a performance penalty, hence the warning.

To avoid you should ensure all memory and cores operate on the same node where 
possible, try using ‘other_config:dpdk-socket-mem=0,4096’ and see if you still 
see the issue.

Thanks
Ian

From: 
ovs-discuss-boun...@openvswitch.org<mailto:ovs-discuss-boun...@openvswitch.org> 
[mailto:ovs-discuss-boun...@openvswitch.org<mailto:ovs-discuss-boun...@openvswitch.org>]
 On Behalf Of Alan Kayahan
Sent: Thursday, April 12, 2018 2:27 AM
To: ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>
Subject: [ovs-discuss] There's no available (non-isolated) pmd thread on numa 
node 0, Expect reduced performance.

Hello,

On the following setup, where all cores but 0 are isolated,

available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9
node 1 cpus: 10 11 12 13 14 15 16 17 18 19

I am trying to start OVS entirely on numa node 1 as following

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true 
other_config:dpdk-lcore-mask=0x00400 other_config:pmd-cpu-mask=0xffc00 
other_config:dpdk-socket-mem=1024,4096

However when I create a vhost port SRC to attach a VNF(via virtio) on node 0, I 
get the following

dpif_netdev|WARN|There's no available (non-isolated) pmd thread on numa node 0. 
Queue 0 on port 'SRC' will be assigned to the pmd on core 14 (numa node 1). 
Expect reduced performance.

Any ideas?

Thanks

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk performance not stable

2018-04-19 Thread Stokes, Ian
Hi Michael,

So there are a few issues here we need to address.

Queues for phy devices:

I assume you have set the queues for dpdk0 and dpdk1 yourself using

ovs-vsctl set Interface dpdk0 options:n_rxq=8
ovs-vsctl set Interface dpdk1 options:n_rxq=8

Receive Side Scaling (RSS) is used to distribute ingress traffic among the 
queues on the NIC at a hardware level. Traffic will be split between queues based on 
destination IP, so it's important that test traffic varies if you want traffic to be 
dispersed evenly among the queues at this level.
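As a rough illustration of the effect (this is a toy model; real NICs compute a Toeplitz hash over the packet headers, not a CRC32 of the address string):

```python
import zlib

def rss_queue(dst_ip, n_rxq=8):
    # Toy stand-in for NIC RSS: hash the destination IP onto a queue index.
    # The key property is the same as hardware RSS: one flow always maps
    # to one queue, while varied flows spread across all queues.
    return zlib.crc32(dst_ip.encode()) % n_rxq

# A single destination always lands on the same queue...
same = rss_queue("10.0.0.1") == rss_queue("10.0.0.1")
# ...while varied destinations spread across the available queues.
spread = {rss_queue("10.0.0.%d" % i) for i in range(32)}
print(same, sorted(spread))
```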

Vhost user queues:

You do not have to set the number of queues for vhost ports with n_rxq since 
OVS 2.6, but you do have to include the number of supported queues 
in the QEMU command line argument that launches the VM, by specifying the 
argument queues=’Num_Queues’ for the vhost port. If using kernel virtio 
interfaces within the VM, you will also need to enable the extra queues using 
ethtool -L. Seeing that there is only 1 queue for your vhost user port, I think 
you are missing one of these steps.
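A hedged sketch of the two steps, with the device name, socket path, and queue count as placeholders:

```shell
# Host: QEMU vhost-user netdev advertising 2 queue pairs
# (vectors should be 2*queues + 2).
qemu-system-x86_64 ... \
  -chardev socket,id=char0,path=/usr/local/var/run/openvswitch/vhost-user0 \
  -netdev type=vhost-user,id=net0,chardev=char0,queues=2 \
  -device virtio-net-pci,netdev=net0,mq=on,vectors=6

# Guest: enable the extra combined queues on the virtio interface.
ethtool -L eth0 combined 2
```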

PMD configuration:

Since you're only using 1 PMD I don't see much point in using multiple queues. 
Typically you match the number of PMDs to the number of queues to 
ensure an even distribution.
If using 1 PMD, as in your case, the traffic will always be enqueued to queue 
0 of the vhost device even if there are multiple queues available. This is related 
to the implementation within OVS.

As a starting point it might be easier to start with 2 PMDs and 2 rxqs for each 
phy and vhost ports that you have and ensure that works first.

Also, are you isolating the cores the PMD runs on? If not, other processes could 
be scheduled to that core, which would interrupt the PMD processing; this could 
be related to the traffic drops you see.
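For example (the core numbers are illustrative), keeping the kernel scheduler off the PMD cores and matching the PMD mask to them:

```shell
# Kernel command line: reserve cores 1 and 2 for the PMDs
# (e.g. add isolcpus=1,2 to GRUB_CMDLINE_LINUX and reboot).

# Run the PMD threads only on the isolated cores 1 and 2 (mask 0x6).
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
```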

Below is a link to a blog that discusses vhost MQ, it uses OVS 2.5 but a lot of 
the core concepts still apply even if some of the configuration commands may 
have changed

https://software.intel.com/en-us/articles/configure-vhost-user-multiqueue-for-ovs-with-dpdk

Ian

From: michael me [mailto:1michaelmesgu...@gmail.com]
Sent: Wednesday, April 18, 2018 2:23 PM
To: Stokes, Ian <ian.sto...@intel.com>
Cc: ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] ovs-dpdk performance not stable

Hi Ian,

In the deployment i do have vhost user ports; below is the full output of the 
ovs-appctl dpif-netdev/pmd-rxq-show command.
root@W:/# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 1:
isolated : false
port: dpdk1 queue-id: 0 1 2 3 4 5 6 7
port: dpdk0 queue-id: 0 1 2 3 4 5 6 7
port: vhu1cbd23fd-82 queue-id: 0
port: vhu018b3f01-39 queue-id: 0

what is strange for me and what i don't understand is why i have only one queue 
on the vhost side and eight on the dpdk side. i understood that QEMU 
automatically had the same amount. though, i am using only one core for the VM 
and one core for the PMD.
in this setting i have eight cores in the system; is that the reason that i see 
eight possible queues?
The setup is North/South (VM to Physical network)
as for pinning the PMD, i always pin the PMD to core 1 (mask=0x2).

when i set the n_rxq and n_txq to high values (even 64 or above) i see no drops 
for around a minute or two and then suddenly bursts of drops as if the cache 
was filled. Have you seen something similar?
i tried to play with the "max-idle", but it didn't seem to help.

originally, i had a setup with 2.9 and 17.11 and i was not able to get better 
performance, but it could be that i didn't tweak as much. However, i am trying 
to deploy a setup that i can install without needing to MAKE.

Thank you for any input,
Michael

On Tue, Apr 17, 2018 at 6:28 PM, Stokes, Ian 
<ian.sto...@intel.com<mailto:ian.sto...@intel.com>> wrote:
Hi Michael,

Are you using dpdk vhostuser ports in this deployment?

I would expect to see them listed in the output of ovs-appctl 
dpif-netdev/pmd-rxq-show you posted below.

Can you describe the expected traffic flow ( Is it North/South using DPDK phy 
devices as well as vhost devices or east/west between vm interfaces only).

OVS 2.6 has the ability to isolate and pin rxq queues for dpdk devices to 
specific PMDs also. This can help provide more stable throughput and defined 
behavior. Without doing this I believe the distribution of rxqs was dealt with 
in a round robin manner which could change between deployments. This could 
explain what you are seeing i.e. sometimes the traffic runs without drops.

You could try to examine ovs-appctl dpif-netdev/pmd-rxq-show when traffic is 
dropping and then again when traffic is passing without issue. This output 
along with the flows in each case might provide a clue as to what is happening. 
If there is a difference between the two you could investigate pinning the rxqs 
to the specific setup although you will only benefit from this when have at 
least 2 PMDs instead 

Re: [ovs-discuss] Facing issues while using OVS-DPDK based vhostuser interface.

2018-03-06 Thread Stokes, Ian
Hi Anirudh,

Is this still an issue for you?

I tried recreating the issue on my own system following your setup steps but I 
did not encounter any issue, I was able to ping both the OVS bridge and 
external VMs  using vhostuser.

If this is still an issue could your provide the OVS logs for you system when 
the problem occurs.

Also the version of QEMU you are using in the host along with any setup steps 
you perform in the guest would be useful for debugging.

Thanks
Ian


From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Anirudh Chanagiri
Sent: Thursday, February 22, 2018 5:31 AM
To: b...@openvswitch.org
Cc: Ramana Pillalamarri ; Siddharth Mundle 
; Corredor, Alejandro 

Subject: [ovs-discuss] Facing issues while using OVS-DPDK based vhostuser 
interface.

Hello everyone,
We are having some  issues using ovs-dpdk with vhostuser interface,
Seeking your help on the same.

Description of the issue :

Setup Details:
  1.  Host is  an Ubuntu 17.04.
  2.  We have created an OVS switch with 2 arms: one arm of the switch 
is a physical interface which is given to the DPDK driver, and the other arm is 
the dpdkvhostuser interface, which is given to an Ubuntu 16.04 VM.
  3.  Inside the Ubuntu VM , we are not running  any DPDK based application.
  4.  We just want to check the connectivity of the guest VM to the outside 
world using DPDK-OVS

Version details:
1.  Open v switch version.
ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.8.0
2. Guest Ubuntu 16.04,  Kernel version 4.4.0-62-generic
3.  Host  Ubuntu 17.04, Kernel version 4.13.0-32-generic
4. DPDK version 17.05.2



   Problem description:
We are unable to have connectivity to/from the VM  using OVS-DPDK vhostuser 
interface.

1.  If we assign an ip to the OVS bridge and try pinging it, even that doesn’t 
work.
2.  If use a tun/tap interface instead of dpdkvhostuser interface the 
connectivity works fine. So the issue is specific to dpdkvhostuser interface.

Seeking your help to resolve the same.
Attaching the output of some useful commands and a setup diagram.

Regards,
Anirudh

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OvS using newer DPDK

2018-10-31 Thread Stokes, Ian
> Hello all,
> 
> I remember some time ago there was topic raised here about new LTS
> release.  I'd like to ask related question - what version of DPDK will it
> be based on?  18.11 (which is going to be new LTS release of DPDK)?
> 

Yes, the plan would be ideally to move to DPDK 18.11.

> If it is then is there anybody already working on that?

Yes, the dpdk_latest branch was setup for this purpose.

There are patches submitted to move OVS to use DPDK 18.08 first. From there a 
new set of patches will be created to move to DPDK 18.11. Once there is 
agreement and sign off from the OVS DPDK community we would look to apply those 
to the OVS master branch in time for the OVS 2.11 release.

> 
> I'm asking these questions since I've nailed the reason for getting OvS
> crashes on Marvell Armada 8K board.  They are while attempting to set MTU
> and there are some patches affecting MTU/MRU calculations that might help.

Are these patches targeted at OVS project or the DPDK project?

> So basically I might attempt to backport them or try to get OvS working
> with newer DPDK.

OVS is moving towards using DPDK LTS releases only for OVS releases and the 
master branch.

If the patches target DPDK then they could be backported to the relevant DPDK 
LTS releases. Once in place there you could also backport support to OVS 2.9 
and OVS 2.10 which use DPDK 17.11.

> Since I prefer the latter I would like to join somebody
> doing this update (I don't feel comfortable enough with OvS to do that on
> my own).

Ok sure, there is not a patch yet to move OVS to DPDK 18.11. That's in progress. 
I've cc'd Ophir who has been looking at this to date. Once there is a patch for 
18.11, if you could test it with the Marvell device that would be a great help.

Thanks
Ian
> 
> Best regards
> Andrzej
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS bridges in docker containers segfault when dpdkvhostuser port is added.

2018-10-31 Thread Stokes, Ian
> On Thu, Oct 25, 2018 at 09:51:38PM +0200, Alan Kayahan wrote:
> > Hello,
> >
> > I have 3 OVS bridges on the same host, connected to each other as
> > br1<->br2<->br3. br1 and br3 are connected to the docker container cA
> > via dpdkvhostuser port type (I know it is deprecated, the app works
> > this way only). The DPDK app running in cA generate packets, which
> > traverse bridges br1->br2->br3, then ends up back at the DPDK app.
> > This setup works fine.
> >
> > Now I am trying to put each OVS bridge into its respective docker
> > container. I connect the containers with veth pairs, then add the veth
> > ports to the bridges. Next, I add a dpdkvhostuser port named SRC to
> > br1, so far so good. The moment I add a dpdkvhostuser port named SNK
> > to br3, ovs-vswitchd services in br1's and br3's containers segfault.
> > Following are the backtraces from each,

What version of OVS and DPDK are you using?

> >
> > --br1's container---
> >
> > [Thread debugging using libthread_db enabled] Using host libthread_db
> > library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> > Core was generated by `ovs-vswitchd
> > unix:/usr/local/var/run/openvswitch/db.sock -vconsole:emer -vsyslo'.
> > Program terminated with signal SIGSEGV, Segmentation fault.
> > #0  0x5608fa0f321b in netdev_rxq_recv (rx=0x7ff13c34ee80,
> > batch=batch@entry=0x7ff1bbb4d890) at lib/netdev.c:702
> > 702retval = rx->netdev->netdev_class->rxq_recv(rx, batch);
> > [Current thread is 1 (Thread 0x7ff1bbb4e700 (LWP 376))]
> > (gdb) bt
> > #0  0x5608fa0f321b in netdev_rxq_recv (rx=0x7ff13c34ee80,
> > batch=batch@entry=0x7ff1bbb4d890) at lib/netdev.c:702
> > #1  0x5608fa0cce65 in dp_netdev_process_rxq_port (
> > pmd=pmd@entry=0x7ff1bbb4f010, rxq=0x5608fb651be0, port_no=1)
> > at lib/dpif-netdev.c:3279
> > #2  0x5608fa0cd296 in pmd_thread_main (f_=)
> > at lib/dpif-netdev.c:4145
> > #3  0x5608fa14a836 in ovsthread_wrapper (aux_=)
> > at lib/ovs-thread.c:348
> > #4  0x7ff1c52517fc in start_thread (arg=0x7ff1bbb4e700)
> > at pthread_create.c:465
> > #5  0x7ff1c4815b5f in clone ()
> > at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
> >
> > --br3's container---
> >
> > [Thread debugging using libthread_db enabled] Using host libthread_db
> > library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> > Core was generated by `ovs-vswitchd
> > unix:/usr/local/var/run/openvswitch/db.sock -vconsole:emer -vsyslo'.
> > Program terminated with signal SIGSEGV, Segmentation fault.
> > #0  0x55c517e3abcb in rte_mempool_free_memchunks () [Current
> > thread is 1 (Thread 0x7f202351f300 (LWP 647))]
> > (gdb) bt
> > #0  0x55c517e3abcb in rte_mempool_free_memchunks ()
> > #1  0x55c517e3ad46 in rte_mempool_free.part ()
> > #2  0x55c518218b78 in dpdk_mp_free (mp=0x7f603fe66a00)
> > at lib/netdev-dpdk.c:599
> > #3  0x55c518218ff0 in dpdk_mp_free (mp=)
> > at lib/netdev-dpdk.c:593
> > #4  netdev_dpdk_mempool_configure (dev=0x7f1f7ffeac00) at
> > lib/netdev-dpdk.c:629
> > #5  0x55c51821a98d in dpdk_vhost_reconfigure_helper
> (dev=0x7f1f7ffeac00)
> > at lib/netdev-dpdk.c:3599
> > #6  0x55c51821ac8b in netdev_dpdk_vhost_reconfigure
> (netdev=0x7f1f7ffebcc0)
> > at lib/netdev-dpdk.c:3624
> > #7  0x55c51813fe6b in port_reconfigure (port=0x55c51a4522a0)
> > at lib/dpif-netdev.c:3341
> > #8  reconfigure_datapath (dp=dp@entry=0x55c51a46efc0) at
> > lib/dpif-netdev.c:3822
> > #9  0x55c5181403e8 in do_add_port (dp=dp@entry=0x55c51a46efc0,
> > devname=devname@entry=0x55c51a456520 "SNK",
> > type=0x55c51834f7bd "dpdkvhostuser", port_no=port_no@entry=1)
> > at lib/dpif-netdev.c:1584
> > #10 0x55c51814059b in dpif_netdev_port_add (dpif=,
> > netdev=0x7f1f7ffebcc0, port_nop=0x7fffb4eef68c) at
> > lib/dpif-netdev.c:1610
> > #11 0x55c5181469be in dpif_port_add (dpif=0x55c51a469350,
> > netdev=netdev@entry=0x7f1f7ffebcc0,
> port_nop=port_nop@entry=0x7fffb4eef6ec)
> > at lib/dpif.c:579
> > ---Type  to continue, or q  to quit---
> > #12 0x55c5180f9f28 in port_add (ofproto_=0x55c51a464ee0,
> > netdev=0x7f1f7ffebcc0) at ofproto/ofproto-dpif.c:3645
> > #13 0x55c5180ecafe in ofproto_port_add (ofproto=0x55c51a464ee0,
> > netdev=0x7f1f7ffebcc0, ofp_portp=ofp_portp@entry=0x7fffb4eef7e8) at
> > ofproto/ofproto.c:1999
> > #14 0x55c5180d97e6 in iface_do_create (errp=0x7fffb4eef7f8,
> > netdevp=0x7fffb4eef7f0, ofp_portp=0x7fffb4eef7e8,
> > iface_cfg=0x55c51a46d590, br=0x55c51a4415b0)
> > at vswitchd/bridge.c:1799
> > #15 iface_create (port_cfg=0x55c51a46e210, iface_cfg=0x55c51a46d590,
> > br=0x55c51a4415b0) at vswitchd/bridge.c:1837
> > #16 bridge_add_ports__ (br=br@entry=0x55c51a4415b0,
> > wanted_ports=wanted_ports@entry=0x55c51a441690,
> > with_requested_port=with_requested_port@entry=true) at
> > vswitchd/bridge.c:931
> > #17 0x55c5180db87a in bridge_add_ports
> > 

Re: [ovs-discuss] time for another LTS?

2018-10-19 Thread Stokes, Ian
> On 10/18/2018 10:46 PM, Ben Pfaff wrote:
> > I've had a number of queries from folks lately about our roadmap for
> > LTS releases.  It has, indeed, been a long time since we've had a
> > long-term support release (the current LTS is 2.5).  Usually, we've
> > done LTS releases before some kind of big architectural change, etc.,
> > and so we've had no real internal pressure within the project to do it
> > for a while.  But it might be a good signal to the community to bring
> > the LTS release forward.
> >
> > What does everyone think about making the next (2.11) release an LTS?
> >
> 
> I think it's a good idea. The current LTS is quite old now, especially for
> the DPDK datapath. There is a new DPDK LTS coming out in November which
> should be in for OVS 2.11, so it would be a nice combination for a user to
> have LTS support for both.

+1

With regards to backporting support for LTS releases, I take it LTS takes priority 
over non-LTS branches; that would be the only difference, I would think?

In fairness I think the community is pretty good as is for backporting bug 
fixes for all branches.

Ian
> 
> thanks,
> Kevin.
> 
> > Thanks,
> >
> > Ben.
> > ___
> > discuss mailing list
> > disc...@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >
> 
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] GRE over OVS-DPDK

2018-12-05 Thread Stokes, Ian
The documentation provides a guide to configuring supported tunnels in 
userspace, I'm not aware of any reason why GRE would not work in the OVS DPDK 
deployment.

You can follow this guide below and swap VXLAN for GRE where applicable.

http://docs.openvswitch.org/en/latest/howto/userspace-tunneling/
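As a hedged example adapted from that guide (the bridge name and the remote endpoint IP are placeholders for your own topology):

```shell
# Userspace (netdev) bridge carrying the overlay traffic.
ovs-vsctl add-br br-int -- set bridge br-int datapath_type=netdev
# GRE tunnel port; remote_ip is the far end's tunnel endpoint address.
ovs-vsctl add-port br-int gre0 -- set interface gre0 type=gre \
    options:remote_ip=172.31.1.2
```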

Hope This helps.
Ian

From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Machireddy, Chenna 
Kesava Reddy (Contractor)
Sent: Monday, December 3, 2018 6:33 PM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] GRE over OVS-DPDK

Hi,
I am new to this group and looking for some help.
I want to run GRE over OVS-DPDK as a switch connected to other devices.

Is it possible to do, If yes, how?
Any help or resources would be appreciated.

Thanks
Chenna


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovs-dev] Why are iperf3 udp packets out of order in OVS DPDK case?

2019-08-27 Thread Stokes, Ian



On 8/27/2019 9:35 AM, Yi Yang (杨燚)-云服务集团 wrote:

Hi, all

  


I’m doing experiments with OVS and OVS DPDK. Only one bridge is there, and
ports and flows are the same for OVS and OVS DPDK. In the OVS case everything works
well, but in the OVS DPDK case iperf udp performance data are very poor and udp
packets are out of order. I have limited the MTU and send buffer by -l1410
-M1410. Does anybody know why, and how to fix it? Thank you in advance.



Hi,

can you provide more detail of your deployment? OVS version, DPDK 
version, configuration commands for ports/flows etc.


Thanks
Ian



___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] include/sparse/rte_flow.h

2019-11-20 Thread Stokes, Ian



On 11/20/2019 6:02 PM, Kevin Traynor wrote:

On 19/11/2019 18:48, Ilya Maximets wrote:

On 19.11.2019 19:01, Eli Britstein wrote:


On 11/19/2019 7:46 PM, Ilya Maximets wrote:

On 19.11.2019 18:29, Eli Britstein wrote:

On 11/19/2019 7:27 PM, Eli Britstein wrote:

Hi

I see this file has many inconsistencies against the one from DPDK
(18.11.2).

For example, this API:

rte_flow_query(uint16_t port_id,
         struct rte_flow *flow,
         enum rte_flow_action_type action,
         void *data,
         struct rte_flow_error *error);

is wrong, vs the one from DPDK:

rte_flow_query(uint16_t port_id,
         struct rte_flow *flow,
         const struct rte_flow_action *action,
         void *data,
         struct rte_flow_error *error);

Note the "action" argument.


I also see in it this line:

#error "Use this header only with sparse.  It is not a correct
implementation."


So, is it wrong on purpose? If so, why?

I test my patch-set before I submit using travis, and it fails because
of this wrong file. Can we just take the correct code from DPDK?
Should I maybe take only the parts that cause me to fail?

Hi.  DPDK headers before 18.11.3 has issues that makes sparse unhappy.
This header will be removed along with upgrade to 18.11.3 or higher.
Right now we're not experiencing issues with current version of
sparse header probably just because we're not using most of the functions.

I see. Thanks.


We're not going to update this header only remove.  You may update it in
your patches or base your changes on top of dpdk-latest branch where this
header already removed.


So, what is the preferred way for submission?

1. cherry-pick those commits from dpdk-latest on top of master and my
patches on top of that


This doesn't sound like a good option.
If sparse header needs only few small changes for your patches to work,
you may create a special patch for that.  If not, you may send patches
as-is but mention that these patches depends on a DPDK 18.11.3+ and another
patch that removes the sparse header.



2. submit directly on dpdk-latest


Not sure about this option because dpdk-latest is mostly for changes that
requires most recent DPDK, but this is not exactly your case.





I'm not sure when we're going to migrate to 18.11.{3,5}.
@Ian, @Kevin, is validation still in progress?  Does anyone work on this?




Ian ran his automated tests at the time of 18.11.3 and reported results
here:
http://inbox.dpdk.org/stable/c243e0b9-bac9-9759-c51e-e40320100...@intel.com/
I ran some PVP tests also at that time but they were on rpms with some
patches, so not as relevant.

Other general 18.11.3 validation is in that thread or there is a summary
in the release notes
http://doc.dpdk.org/guides-18.11/rel_notes/release_18_11.html#id7

I don't think the changes in 18.11.4/5 will have an impact, but if Ian
is able to re-run those automated tests again, it might be best.


I was holding off moving to 18.11.3 as there was talk of a .4 (and now 
.5 due to a CVE, I believe), so from a validation point of view we've held 
off until it was settled. We can run validation on .5 if it's the case that it 
has all required CVE fixes.





Is it a question of "if" or "when"? what is the purpose of migrating to
18.11.3/5 and not to 19.11 soon?


18.11.3/5 requires validation + small patch for docs/CI.
19.11 requires additional development that didn't started yet
   + validation + patch for docs/CI.

Plus, 18.11 needs to be upgraded on previous versions of OVS too.

With current speed of development and validation I will not be surprised if
19.11 will not be supported in next OVS release.


So I would think that this upgrade will go ahead; with RC3 imminent I 
think 19.11 will settle.


I know there are a few issues, such as RSS offload, which we're looking to 
patch, and we're beginning validation now on existing features along with 
required fixes. Is there a particular issue you are aware of that would 
block the 19.11 upgrade?


Ian



Best regards, Ilya Maximets.




___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS - PDUMP: Pdump initialization failure in different container

2019-09-25 Thread Stokes, Ian




On 9/16/2019 8:40 AM, Rajesh Kumar wrote:

Hi,

Sorry, Didn't complete my previous mail.



Hi Rajesh, apologies for the delay on my part in responding, I've been 
out of office the past few weeks.



The errors I was getting are
1)
root@basepdump-67b4b8-lt8wf:/# pdump
EAL: Detected 2 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_12_19ffcc06b54
EAL: Probing VFIO support...
EAL: Cannot initialize tailq: RTE_EVENT_RING
Tailq 0: qname:, tqh_first:(nil), tqh_last:0x7fda1b17d47c
Tailq 1: qname:, tqh_first:(nil), 
tqh_last:0x7fda1b17d4ac

Tailq 2: qname:, tqh_first:0x108064900, tqh_last:0x108064900
Tailq 3: qname:, tqh_first:(nil), tqh_last:0x7fda1b17d50c
.
EAL: FATAL: Cannot init tail queues for objects
EAL: Cannot init tail queues for objects
PANIC in main():
Cannot init EAL
5: [pdump(+0x2e2a) [0x557832863e2a]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) 
[0x7fe128dd809b]]

3: [pdump(+0x233a) [0x55783286333a]]
2: [/usr/lib/x86_64-linux-gnu/librte_eal.so.18.11(__rte_panic+0xbd) 
[0x7fe1292b0ca5]]
1: [/usr/lib/x86_64-linux-gnu/librte_eal.so.18.11(rte_dump_stack+0x2e) 
[0x7fe1292c65be]]

Aborted (core dumped)


2)
root@basepdump-67b4b8-lt8wf:/# pdump
EAL: Detected 2 lcore(s)
EAL: Detected 1 NUMA nodes
PANIC in rte_eal_config_reattach():
Cannot mmap memory for rte_config at [(nil)], got [0x7...] - please 
use '--base-virtaddr' option

6: [./dpdk-pdump(start+0x2a) [0x559c7aa]]
5:[/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7fe128dd809b]]
4: [./dpdk-pdump(main+0xe2) [0x55597dd2]]
3: [./dpdk-pdump(rte_eal_init+0xc06) [0x55678416]]
..
Aborted (core dumped)



From the logs above it looks like the secondary process is unable to 
access the primary's config across the pods. I'm unsure myself whether 
this is possible, as I haven't tried this setup with pdump before.


Can I ask if you are specifically sharing the process configs between 
the pods? Also are you sharing hugepages between the pods and if so, 
what steps were taken to ensure this?
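
For reference, a DPDK secondary process such as dpdk-pdump can only 
attach if it sees the same hugepage mount and EAL runtime directory as 
the primary. A rough sketch of what typically has to be shared between 
the two pods (the runtime path matches the one in your logs; the volume 
setup itself is an assumption, not taken from your deployment):

```
# Both pods need the same hugetlbfs mount and DPDK runtime directory,
# e.g. shared into each pod as hostPath volumes (illustrative paths):
#   /dev/hugepages     -> hugepages backing ovs-vswitchd (the primary)
#   /var/run/dpdk/rte  -> primary's runtime config and mp_socket
#
# With those shared, the secondary in the pdump pod can attach:
dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/cap.pcap'
```

If the runtime directory is not shared, the secondary cannot map the 
primary's config, which would produce failures like those above.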




I have attached the same errors also.

I need help in figuring out where I'm going wrong.



We'll try to recreate this in our lab setup also, as in theory this 
should work.


Regards
Ian





Thanks,
Rajesh kumar S R



*From:* ovs-discuss-boun...@openvswitch.org 
 on behalf of Rajesh Kumar 


*Sent:* Monday, September 16, 2019 1:00:56 PM
*To:* ovs-discuss@openvswitch.org
*Subject:* [ovs-discuss] OVS - PDUMP: Pdump initialization failure in 
different container


In our Kubernetes setup, we are running OVS in a pod with DPDK enabled, 
using DPDK 18.11.2.

I wanted to use dpdk-pdump as packet capture tool and trying to run 
pdump in separate pod.


As pdump is a secondary process, it will map to the hugepages allocated 
by the primary process (OVS-vswitchd).


I'm getting these 2 errors while starting pdump as a secondary process 
in a separate pod.




Without the container setup, I was able to bring up pdump with OVS.





___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss




Re: [ovs-discuss] unable to get all packets from the physical interface to OVS bridge

2019-09-25 Thread Stokes, Ian




On 9/18/2019 5:22 AM, Anish M wrote:

Hello,
I'm trying to forward/mirror VM packets between two OVS-DPDK compute 
nodes. With the help of 
http://docs.openvswitch.org/en/latest/howto/userspace-tunneling/, I was 
able to set up an additional NIC for forwarding packets from the source 
OVS-DPDK compute node to the destination compute node. I can see all the 
VXLAN-forwarded packets at the destination compute node's additional 
NIC, but the same packets are not visible at the OVS bridge to which 
that additional NIC is attached.


Hi, can you please clarify the sentence above? Is it that you can see 
packets arrive at the NIC, but the packets are not seen passing through 
the bridge?




At both compute nodes, I have the same type of 2-port 10G NIC: ens1f0 & 
ens1f1.


ens1f0 is acting as a DPDK port, and I'm using it inside OpenStack for 
DPDK VMs at both compute nodes.


Just to clarify, ens1f0 is a dpdk port type on each node and ens1f1 is 
not? I.e. is ens1f1 a netdev linux device?




In order to mirror DPDK VM traffic from one compute node to another, I 
followed the userspace-tunneling link above and was able to forward VM 
traffic from the source compute node towards ens1f1 (172.28.41.101) on 
the destination compute node.


Can you list the mirroring commands you used to set this up?



ovs-vsctl --may-exist add-br br-phy \
     -- set Bridge br-phy datapath_type=netdev \
     -- br-set-external-id br-phy bridge-id br-phy \
     -- set bridge br-phy fail-mode=standalone \
          other_config:hwaddr=48:df:37:7e:c2:08

ovs-vsctl --timeout 10 add-port br-phy ens1f1
ip addr add 172.28.41.101/24 dev br-phy
ip link set br-phy up
ip addr flush dev ens1f1 2>/dev/null
ip link set ens1f1 up

Even though I'm receiving all the mirrored VXLAN packets at the ens1f1 
port (checked using tcpdump), the same number of packets does not appear 
inside the OVS br-phy bridge (only ~10% of the mirrored traffic is 
visible inside br-phy).


Just to be aware, low volumes of traffic are fine for mirroring, but at 
high volumes you would not expect to see the same amount of traffic 
mirrored, due to the overhead associated with mirroring. In this case 
are you sending high volumes of traffic? Have you tested with smaller 
bursts of traffic?
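
For reference, the usual way to mirror all traffic on a bridge out 
through a tunnel port with ovs-vsctl looks like the sketch below (the 
bridge and port names are assumptions, not taken from your setup):

```
# Create a mirror on br-int that selects all traffic and sends it out
# the VXLAN tunnel port (names are illustrative):
ovs-vsctl -- --id=@out get port vxlan0 \
          -- --id=@m create mirror name=m0 select-all=true output-port=@out \
          -- set bridge br-int mirrors=@m

# Inspect the mirror record (including statistics) afterwards:
ovs-vsctl list mirror m0
```

If the commands you used differ from this pattern, sharing them would 
help narrow down where the drops occur.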




[root@overcloud-hpcomputeovsdpdk-0 ~]# ovs-ofctl dump-flows br-phy
  cookie=0x3435, duration=46778.969s, table=0, n_packets=30807, 
n_bytes=3889444, priority=0 actions=NORMAL


[root@overcloud-hpcomputeovsdpdk-0 ~]# ovs-ofctl dump-ports br-phy
OFPST_PORT reply (xid=0x2): 3 ports
   port LOCAL: rx pkts=77, bytes=5710, drop=0, errs=0, frame=0, over=0, 
crc=0

            tx pkts=30107, bytes=3604094, drop=28077, errs=0, coll=0
   port  ens1f1: rx pkts=4385104, bytes=596066321, drop=0, errs=0, 
frame=0, over=0, crc=0

            tx pkts=33606, bytes=4021526, drop=0, errs=0, coll=0
   port  "patch-tap-bint": rx pkts=30056, bytes=3591859, drop=?, errs=?, 
frame=?, over=?, crc=?

            tx pkts=739, bytes=294432, drop=?, errs=?, coll=?

In ens1f1, I can see a lot of rx packets, but in the br-phy flow I'm 
seeing only a few packets.


Please provide any advice on how I can mirror/forward packets between 
two OVS-DPDK compute nodes.


A diagram of your setup would be useful to help debug/understand the use 
case. I'm slightly confused with the mix of DPDK/non-DPDK ports you have 
vs what you want to achieve with mirroring. If you could provide more 
info on this as well as the expected flow of a packet as it is 
mirrored/forwarded it would be helpful.


Thanks
Ian


Best Regards,
Anish

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss




Re: [ovs-discuss] OVS (tag: 2.16.2) Build Fails With DPDK 21.11

2022-01-18 Thread Stokes, Ian
> Hi David,
> 
> Thanks much for your email,
> 
> We can successfully compile OVS 2.16.0 to 2.16.2 with DPDK 21.08 but it fails
> with DPDK 21.11,
Hi Syed,

As David stated below, OVS 2.16 is not expected to compile with DPDK 21.11.

Each OVS release is expected to compile against a particular DPDK Release.

These can be found in the following link

https://docs.openvswitch.org/en/latest/faq/releases/

Under the section: Q: What DPDK version does each Open vSwitch release work 
with?

If you wish to use DPDK 21.11 then I suggest you use OVS master or else the OVS 
2.17 branch (due for official release in February).

If you have a hard requirement to use OVS 2.16 then the officially validated 
and supported DPDK release for that branch is 20.11.1.
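
As a quick illustration of the pairing (only the versions stated in this 
thread; the FAQ above is the authoritative table), the lookup amounts to:

```shell
# OVS branch -> validated DPDK LTS series, per this thread only;
# see the releases FAQ for the full, authoritative table.
validated_dpdk() {
    case "$1" in
        2.16) echo "20.11" ;;
        2.17) echo "21.11" ;;
        *)    echo "unknown" ;;
    esac
}

validated_dpdk 2.16   # prints 20.11
validated_dpdk 2.17   # prints 21.11
```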

Hope this helps.

Thanks
Ian


> 
> Documentation (https://docs.openvswitch.org/en/latest/intro/install/dpdk/) or
> (https://github.com/openvswitch/ovs/blob/master/Documentation/intro/install/dpdk.rst)
> also mentions that DPDK 21.11 is supported with the current / latest OVS
> version.
> 
> The OVS Git master branch is up to date, but tag 2.16.2 is lacking new PRs
> merged in master that claim to support DPDK 21.11, such as
> https://github.com/openvswitch/ovs/commit/17346b3899d98730fc90f039a966b107aeae30b5
> 
> I hope above information is helpful...
> 
> Kind Regards,
> Syed Faraz Ali Shah,
> System Software Validation Engineer
> 
> -Original Message-
> From: David Marchand 
> Sent: Tuesday, January 18, 2022 3:04 PM
> To: Ali Shah, Syed Faraz 
> Cc: b...@openvswitch.org; Puzikov, DmitriiX ;
> Gibson, Joel A ; Gherghe, Calin
> ; Stokes, Ian 
> Subject: Re: [ovs-discuss] OVS (tag: 2.16.2) Build Fails With DPDK 21.11
> 
> On Fri, Jan 14, 2022 at 2:47 AM Ali Shah, Syed Faraz
>  wrote:
> > DPDK 21.11 fails to compile Open vSwitch (tag: 2.16.2), but it works well
> > with DPDK 21.08; the latest tagged version of OVS has some issues. It
> > would be appreciated if you look at the following complaint. Thanks
> > much,
> 
> This lacks details.
> 
> I would expect that OVS 2.16 does *not* compile against 21.08 (because of API
> changes in DPDK).
> 
> 2.16 is supposed to compile against 20.11 and can (I must admit I did not 
> check)
> be linked against DPDK compiled as a dso (versions 21.02,
> 21.05 and 21.08).
> It is expected that OVS 2.16 fails to link against DPDK 21.11, since
> DPDK_21 ABI has not been preserved.
> 
> 
> --
> David Marchand
> 

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS (tag: 2.16.2) Build Fails With DPDK 21.11

2022-01-18 Thread Stokes, Ian
> Hi Ian
> 
> Thanks much for feedback and I agree,
> 
>  https://docs.openvswitch.org/en/latest/faq/releases/ claims that 2.16.x can
> be built with DPDK 20.11.1 successfully, but the documentation needs to be
> updated because, as I mentioned, we are able to build 2.16.2 with 21.08.

I understand but please see the section below the supported releases in the 
same document.

Q: Are all the DPDK releases that OVS versions work with maintained?

OVS releases are only supported and validated against DPDK LTS releases. As 
21.08 is not an LTS release, the OVS community doesn't officially validate or 
support use against it. That's not to say it won't work, but it has not been 
validated by the OVS community, and we recommend sticking to DPDK LTS releases 
to benefit from LTS support (bug fixes etc.). So in this case there is no need 
to update the documentation, as it is in line with community policy.

Thanks
Ian
> 
> Anyway, we will wait for 2.17... thanks much to David and you.
> 
> Kind Regards,
> Syed Faraz Ali Shah,
> System Software Validation Engineer
> 
> -Original Message-
> From: Stokes, Ian 
> Sent: Tuesday, January 18, 2022 4:08 PM
> To: Ali Shah, Syed Faraz ; David Marchand
> 
> Cc: b...@openvswitch.org; Puzikov, DmitriiX ;
> Gibson, Joel A ; Gherghe, Calin
> 
> Subject: RE: [ovs-discuss] OVS (tag: 2.16.2) Build Fails With DPDK 21.11
> 
> > Hi David,
> >
> > Thanks much for your email,
> >
> > We can successfully compile OVS 2.16.0 to 2.16.2 with DPDK 21.08 but
> > it fails with DPDK 21.11,
> Hi Syed,
> 
> As David stated below, OVS 2.16 is not expected to compile with DPDK 21.11.
> 
> Each OVS release is expected to compile against a particular DPDK Release.
> 
> These can be found in the following link
> 
> https://docs.openvswitch.org/en/latest/faq/releases/
> 
> Under the section: Q: What DPDK version does each Open vSwitch release work
> with?
> 
> If you wish to use DPDK 21.11 then I suggest you use OVS master or else the 
> OVS
> 2.17 branch (due for official release in February).
> 
> If you have a hard requirement to use OVS 2.16 then the officially validated 
> and
> supported DPDK release for that branch is 20.11.1.
> 
> Hope this helps.
> 
> Thanks
> Ian
> 
> 
> >
> > Documentation
> > (https://docs.openvswitch.org/en/latest/intro/install/dpdk/) or
> > (https://github.com/openvswitch/ovs/blob/master/Documentation/intro/install/dpdk.rst)
> > also mentions that DPDK 21.11 is supported with the current / latest
> > OVS version.
> >
> > OVS Git master branch is up to date but tag 2.16.2 is lacking new PR's
> > merged in master that claims to support DPDK 21.11 such as
> >
> https://github.com/openvswitch/ovs/commit/17346b3899d98730fc90f039a966
> > b107aeae30b5
> >
> > I hope above information is helpful...
> >
> > Kind Regards,
> > Syed Faraz Ali Shah,
> > System Software Validation Engineer
> >
> > -Original Message-
> > From: David Marchand 
> > Sent: Tuesday, January 18, 2022 3:04 PM
> > To: Ali Shah, Syed Faraz 
> > Cc: b...@openvswitch.org; Puzikov, DmitriiX
> > ; Gibson, Joel A
> > ; Gherghe, Calin ;
> > Stokes, Ian 
> > Subject: Re: [ovs-discuss] OVS (tag: 2.16.2) Build Fails With DPDK
> > 21.11
> >
> > On Fri, Jan 14, 2022 at 2:47 AM Ali Shah, Syed Faraz
> >  wrote:
> > > DPDK 21.11 fails to compile Open vSwitch (tag: 2.16.2) but work well
> > > with DPDK 21.08, the latest tag version of OVS has some issues. It
> > > would be appreciated if you look at the following complain. Thanks
> > > much,
> >
> > This lacks details.
> >
> > I would expect that OVS 2.16 does *not* compile against 21.08 (because
> > of API changes in DPDK).
> >
> > 2.16 is supposed to compile against 20.11 and can (I must admit I did
> > not check) be linked against DPDK compiled as a dso (versions 21.02,
> > 21.05 and 21.08).
> > It is expected that OVS 2.16 fails to link against DPDK 21.11, since
> > DPDK_21 ABI has not been preserved.
> >
> >
> > --
> > David Marchand
> >
> 
> 

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss