Re: [ovs-discuss] [ovs-dpdk] Performance drops after 3-4 minutes

2023-10-17 Thread Алексей Кашавкин via discuss


> On 16 Oct 2023, at 14:48, Ilya Maximets  wrote:
> 
> On 10/6/23 20:10, Алексей Кашавкин via discuss wrote:
>> Hello!
>> 
>> I am using OVS with DPDK in OpenStack. This is RDO+TripleO deployment with 
>> the Train release. I am trying to measure the performance of the DPDK 
>> compute node. I have created two VMs [1], one as a DUT with DPDK and one as 
>> a traffic generator with SR-IOV [2]. Both of them are using Pktgen. 
>> 
>> What happens is the following: for the first 3-4 minutes I see 2.6Gbit [3] 
>> reception in DUT, after that the speed always drops to 400Mbit [4]. At the 
>> same time in the output of `pmd-rxq-show` command I always see one of the 
>> interfaces in the bond loaded [5], but it happens that after flapping of the 
>> active interface the speed in DUT increases up to 5Gbit and in the output of 
>> `pmd-rxq-show` command I start to see the load on two interfaces [6]. But at 
>> the same time after 3-4 minutes the speed drops to 700Mbit and I continue to 
>> see the same load on the two interfaces in the bond in the `pmd-rxq-show` 
>> command. In the logs I see nothing but flapping [7] of the interfaces in 
>> bond and the flapping has no effect on the speed drop after 3-4 minutes of 
>> test. After the speed drop from the DUT itself I run traffic towards the 
>> traffic generator [8] for a while and stop, then the speed on the DUT is 
>> restored to 2.6Gbit again with traffic going through one interface or 5Gbit 
>> with traffic going through two interfaces, but this again is only for 3-4 
>> minutes. If I do a test with a traffic generator with a 2.5 Gbit or 1 Gbit 
>> speed limit, the speed on the DUT also drops after 4-5 minutes. I've put logging 
>> in debug for bond, dpdk, netdev_dpdk, dpif_netdev, but haven't seen anything 
>> that clarifies what's going on, and also it's not clear that sometimes after 
>> flapping the active interface traffic starts going through both interfaces 
>> in bond, but this happens rarely, not in every test.
> 
> Since the rate is restored after you send some traffic in the backward
> direction, I'd say you have MAC learning somewhere on the path and
> it is getting expired.  For example, if you use NORMAL action in one
> of the bridges, once the MAC is expired, the bridge will start flooding
> packets to all ports of the bridge, which is very slow.  You may look
> at datapath flow dump to confirm which actions are getting executed
> on your packets: ovs-appctl dpctl/dump-flows.
> 
> In general, you should continuously send some traffic back so that
> the learned MAC addresses do not expire.  I'm not sure if Pktgen is
> doing that these days, but it wasn't a very robust piece of software
> in the past.

Yes, that is exactly what is happening. I noticed that the DUT's MAC entry is 
expiring in the bridge FDB table, and the DUT's MAC entry in the IP fabric is 
also expiring. If both of these tables are cleared, performance drops. The speed 
does not drop as long as at least one of the tables still has the MAC address entry. 

Now, when performance drops, I check the FDB tables, and if the MAC is really 
missing from them, I send a single ping packet from the DUT VM with Pktgen 
towards the traffic generator, after which the MAC is learned again and the 
speed is restored.
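
For anyone hitting the same thing, a minimal sketch of the checks described above (the bridge name br-int and the MAC address are placeholders for a TripleO/OpenStack deployment; adjust to your setup):

```
# Is the DUT's MAC still present in the OVS bridge FDB?
# (replace the MAC with the DUT's real address)
ovs-appctl fdb/show br-int | grep -i 'fa:16:3e:00:00:01'
# Do the installed datapath flows output to a specific port, or flood to many ports?
ovs-appctl dpctl/dump-flows | grep -i 'fa:16:3e:00:00:01'
```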

Thank you, Ilya.

>> 
>> [4] The flapping of the interface through which traffic is going to the DUT 
>> VM is probably due to the fact that it is heavily loaded alone in the bond 
>> and there are no LACP PDU packets going to or from it. The log shows that it 
>> is down for 30 seconds because the LACP rate is set to slow mode.
> 
> Dropped LACP packets can cause bond flapping indeed.  The only way to
> fix that in older versions of OVS is to reduce the load.  With OVS 3.2
> you may try experimental 'rx-steering' configuration that was designed
> exactly for this scenario and should ensure that PDU packets are not
> dropped.
> 
> Also, balancing depends on packet hashes, so you need to send many
> different traffic flows in order to get consistent balancing.
> 
>> 
>> I have run the DUT on different OSes, with different versions of DPDK and Pktgen, 
>> but the same thing always happens: after 3-4 minutes the speed drops.
>> Only the DPDK compute node was left unchanged. The compute node has an 
>> Intel E810 network card with 25Gbit ports and an Intel Xeon Gold 6230R CPU. The 
>> PMD threads use cores 11, 21, 63, 73 on NUMA 0 and 36, 44, 88, 96 on NUMA 1.
> 
> All in all, 2.6Gbps seems like a small number for the type of
> system you have.  You might have some other configuration issues.

This figure is probably related to the 64-byte TCP packet size: the traffic 
generator sends 64-byte frames.
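
As a rough sanity check of that statement (assuming the reported 2.6 Gbit/s counts only the 64-byte frames, not the per-frame preamble and inter-frame gap):

```
# 64-byte frames at 2.6 Gbit/s:
echo '2600000000 / (64 * 8)' | bc -l   # ~5.1 Mpps
# Counting ~20 extra bytes per frame on the wire (preamble + inter-frame gap):
echo '2600000000 / (84 * 8)' | bc -l   # ~3.9 Mpps
```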

> 
>> 
>> In addition:
>> [9] ovs-vsctl show
>> [10] OVSDB dump
>> [11] pmd-stats-show
>> [12] bond info with ovs-appctl
>> 
>> For compute nodes, I use Rocky Linux 8.5, Open vSwitch 2.15.5, and DPDK 
>> 20.11.1.
> 
> FWIW, OVS 2.15 reached EOL ~1.5 years ago.
> 
> Best regards, Ilya Maximets.
> 
>> 
>> 
>> What 

Re: [ovs-discuss] [ovs-dpdk] Performance drops after 3-4 minutes

2023-10-16 Thread Ilya Maximets via discuss
On 10/6/23 20:10, Алексей Кашавкин via discuss wrote:
> Hello!
> 
> I am using OVS with DPDK in OpenStack. This is RDO+TripleO deployment with 
> the Train release. I am trying to measure the performance of the DPDK compute 
> node. I have created two VMs [1], one as a DUT with DPDK and one as a traffic 
> generator with SR-IOV [2]. Both of them are using Pktgen. 
> 
> What happens is the following: for the first 3-4 minutes I see 2.6Gbit [3] 
> reception in DUT, after that the speed always drops to 400Mbit [4]. At the 
> same time in the output of `pmd-rxq-show` command I always see one of the 
> interfaces in the bond loaded [5], but it happens that after flapping of the 
> active interface the speed in DUT increases up to 5Gbit and in the output of 
> `pmd-rxq-show` command I start to see the load on two interfaces [6]. But at 
> the same time after 3-4 minutes the speed drops to 700Mbit and I continue to 
> see the same load on the two interfaces in the bond in the `pmd-rxq-show` 
> command. In the logs I see nothing but flapping [7] of the interfaces in bond 
> and the flapping has no effect on the speed drop after 3-4 minutes of test. 
> After the speed drop from the DUT itself I run traffic towards the traffic 
> generator [8] for a while and stop, then the speed on the DUT is restored to 
> 2.6Gbit again with traffic going through one interface or 5Gbit with traffic
> going through two interfaces, but this again is only for 3-4 minutes. If I do 
> a test with a traffic generator with a 2.5 Gbit or 1 Gbit speed limit, the 
> speed on the DUT also drops after 4-5 minutes. I've put logging in debug for 
> bond, dpdk, netdev_dpdk, dpif_netdev, but haven't seen anything that 
> clarifies what's going on, and also it's not clear that sometimes after 
> flapping the active interface traffic starts going through both interfaces in 
> bond, but this happens rarely, not in every test.

Since the rate is restored after you send some traffic in the backward
direction, I'd say you have MAC learning somewhere on the path and
it is getting expired.  For example, if you use NORMAL action in one
of the bridges, once the MAC is expired, the bridge will start flooding
packets to all ports of the bridge, which is very slow.  You may look
at datapath flow dump to confirm which actions are getting executed
on your packets: ovs-appctl dpctl/dump-flows.

In general, you should continuously send some traffic back so that
the learned MAC addresses do not expire.  I'm not sure if Pktgen is
doing that these days, but it wasn't a very robust piece of software
in the past.
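
If the generator tool cannot produce reverse traffic by itself, a crude keep-alive sent from the DUT works too; a minimal sketch, assuming the DUT has a kernel interface on the same path and the generator answers ping (the address and interval are placeholders):

```
# Refresh MAC learning along the path every 60s (keep the interval below the FDB ageing time).
while true; do ping -c 1 -W 1 192.0.2.10 >/dev/null 2>&1; sleep 60; done
```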

> 
> [4] The flapping of the interface through which traffic is going to the DUT 
> VM is probably due to the fact that it is heavily loaded alone in the bond 
> and there are no LACP PDU packets going to or from it. The log shows that it 
> is down for 30 seconds because the LACP rate is set to slow mode.

Dropped LACP packets can cause bond flapping indeed.  The only way to
fix that in older versions of OVS is to reduce the load.  With OVS 3.2
you may try experimental 'rx-steering' configuration that was designed
exactly for this scenario and should ensure that PDU packets are not
dropped.
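
A minimal sketch of the experimental rx-steering knob mentioned above (OVS 3.2+, set per physical DPDK port; the port name is illustrative and the exact value should be checked against the 3.2 documentation):

```
# Steer LACP PDUs to a dedicated extra Rx queue so control traffic is not dropped
# when the data Rx queues are overloaded (experimental in OVS 3.2):
ovs-vsctl set Interface dpdk0 options:rx-steering=rss+lacp
```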

Also, balancing depends on packet hashes, so you need to send many
different traffic flows in order to get consistent balancing.

> 
> I have run the DUT on different OSes, with different versions of DPDK and Pktgen, 
> but the same thing always happens: after 3-4 minutes the speed drops.
> Only the DPDK compute node was left unchanged. The compute node has an 
> Intel E810 network card with 25Gbit ports and an Intel Xeon Gold 6230R CPU. The 
> PMD threads use cores 11, 21, 63, 73 on NUMA 0 and 36, 44, 88, 96 on NUMA 1.

All in all, 2.6Gbps seems like a small number for the type of
system you have.  You might have some other configuration issues.

> 
> In addition:
> [9] ovs-vsctl show
> [10] OVSDB dump
> [11] pmd-stats-show
> [12] bond info with ovs-appctl
> 
> For compute nodes, I use Rocky Linux 8.5, Open vSwitch 2.15.5, and DPDK 
> 20.11.1.

FWIW, OVS 2.15 reached EOL ~1.5 years ago.

Best regards, Ilya Maximets.

> 
> 
> What could be the cause of this behavior? I don't understand where I should 
> look to find out exactly what is going on.
> 
> 
> 1. https://that.guru/blog/pktgen-between-two-openstack-guests 
> 
> 2. https://freeimage.host/i/J206p8Q 
> 3. https://freeimage.host/i/J20Po9p 
> 4. https://freeimage.host/i/J20PRPs 
> 5. https://pastebin.com/rpaggexZ 
> 6. https://pastebin.com/Zhm779vT 
> 7. https://pastebin.com/Vt5P35gc 
> 8. https://freeimage.host/i/J204SkB 
> 9. https://pastebin.com/rNJZeyPy 
> 10. https://pastebin.com/wEifvivH 

Re: [ovs-discuss] OvS-DPDK compilation fails on NVidia Bluefield-2

2023-09-21 Thread Levente Csikor via discuss
Thanks for all your insightful comments.
It all makes sense to me, and it has helped me think about my next moves.

Thank you so much, Ilya.

If I make progress in this regard, I will come back to you guys.

Cheers,
levi

On Thu, 2023-09-21 at 12:46 +0200, Ilya Maximets wrote:
> On 9/21/23 11:32, Levente Csikor wrote:
> > Thanks Ilya for the response.
> > 
> > It turned out that the latest DPDK I can manually compile on the
> > Bluefield-2 without errors is v20.08.
> > 
> > After looking at the table you pointed me to, my best option was
> > OvS 2.14.9 with DPDK v19.11.13.
> > 
> > Compilation went well without errors; now I have the same problem
> > when attaching the ports to OvS. In fact, even attaching the
> > physical port fails.
> > 
> > The way I start OvS is with my custom-tailored script
> > https://github.com/cslev/nvidia-bluefield-ovs-scripts/blob/main/start_ovs.sh
> > starting from line 223.
> > 
> > 
> > After adding the physical port, ovs-vsctl show provides this:
> > ```
> >     Bridge ovs_dpdk_br0
> >     datapath_type: netdev
> >     Port dpdk0
> >     Interface dpdk0
> >     type: dpdk
> >     options: {dpdk-devargs=":03:00.0"}
> >     error: "Error attaching device ':03:00.0' to
> > DPDK"
> >     Port ovs_dpdk_br0
> >     Interface ovs_dpdk_br0
> >     type: internal
> > ```
> > 
> > vswitchd log:
> > ```
> > ...
> > 2023-09-21T09:29:48.626Z|00067|bridge|INFO|bridge ovs_dpdk_br0:
> > added
> > interface ovs_dpdk_br0 on port 65534
> > 2023-09-21T09:29:48.627Z|00068|bridge|INFO|bridge ovs_dpdk_br0:
> > using
> > datapath ID d62195e14c4e
> > 2023-09-21T09:29:48.627Z|00069|connmgr|INFO|ovs_dpdk_br0: added
> > service
> > controller "punix:/usr/local/var/run/openvswitch/ovs_dpdk_br0.mgmt"
> > 2023-09-21T09:29:48.718Z|00070|dpdk|ERR|EAL: Driver cannot attach
> > the
> > device (:03:00.0)
> > 2023-09-21T09:29:48.718Z|00071|dpdk|ERR|EAL: Failed to attach
> > device on
> > primary process
> > 2023-09-21T09:29:48.719Z|00072|netdev_dpdk|WARN|Error attaching
> > device
> > ':03:00.0' to DPDK
> > 2023-09-21T09:29:48.719Z|00073|netdev|WARN|dpdk0: could not set
> > configuration (Invalid argument)
> > 2023-09-21T09:29:48.719Z|00074|dpdk|ERR|Invalid port_id=32
> > 2023-09-21T09:29:57.230Z|00075|memory|INFO|27944 kB peak resident
> > set
> > size after 10.3 seconds
> > 2023-09-21T09:29:57.230Z|00076|memory|INFO|handlers:1 ports:1
> > revalidators:1 rules:5
> > ```
> > 
> > 
> > Are there any other log files I can look into for more informative
> > debug messages?
> 
> You may enable additional DPDK debug logs by setting --log-level
> in the dpdk-extra config.  However, your DPDK version is older than
> the hardware you're using, meaning the mlx driver in DPDK 19.11
> likely just doesn't recognize the hardware and will not be able to
> use it.  You need to figure out how to build newer versions of DPDK.
> Also, the representor syntax changed a few times in the past, so your
> script may not work with older versions of DPDK.
> 
> If you can't build DPDK on the board itself, cross-compiling may be
> an option:
>   https://doc.dpdk.org/guides/platform/bluefield.html
> 
> Also, OVS 2.14 is likely a poor choice for hardware offloading.
> I would not recommend anything below 2.17.  3.1 would be a better choice.
> 
> > 
> > 
> > Cheers,
> > levi
> > 
> > 
> > On Wed, 2023-09-20 at 17:20 +0200, Ilya Maximets wrote:
> > > On 9/20/23 06:39, Levente Csikor via discuss wrote:
> > > > Hi All,
> > > > 
> > > > I have a long lasting problem I have been trying to resolve for
> > > > quite
> > > > some time. I am playing around with an NVidia SmartNIC
> > > > (Bluefield-
> > > > 2),
> > > > which has OvS installed by default. It works well with the
> > > > kernel
> > > > driver, and even TC hardware offloading is working.
> > > > 
> > > > I want to experiment with DPDK, though. 
> > > > DPDK is also installed by default on the Bluefield-2.
> > > > The details of the versions are as follows:
> > > > 
> > > > OvS 2.17.7
> > > > DPDK 22.11.1.4.2
> > > > 
> > > > Following the "NVidia tutorials", I manage to add the physical
> > > > port
> > > > as
> > > > a netdev device to an OVS-DPDK bridge, however, adding the
> > > > virtual
> > > > function fails.
> > > > 
> > > > More details about the commands and problem are here:  
> > > > https://forums.developer.nvidia.com/t/error-with-configuring-ovs-dpdk-on-bluefiled-2/256030/4
> > > > 
> > > > 
> > > > Anyway, as a last resort, I thought I give a try to install OvS
> > > > and
> > > > DPDK from scratch following
> > > > https://docs.openvswitch.org/en/latest/intro/install/dpdk/
> > > > 
> > > > I used the same version for OvS and DPDK; the latter was anyway
> > > > the
> > > > one
> > > > recommended by the OvS documentation.
> > > > 
> > > > During the `make` process, I encounter several errors I cannot
> > > > really
> > > > resolve
> > > > ```
> > > > In file included from lib/dp-packet.h:29,
> 

Re: [ovs-discuss] OvS-DPDK compilation fails on NVidia Bluefield-2

2023-09-21 Thread Ilya Maximets via discuss
On 9/21/23 11:32, Levente Csikor wrote:
> Thanks Ilya for the response.
> 
> It turned out that the latest DPDK I can manually compile on the
> Bluefield-2 without errors is v20.08.
> 
> After looking at the table you pointed me to, my best option was
> OvS 2.14.9 with DPDK v19.11.13.
> 
> Compilation went well without errors; now I have the same problem when
> attaching the ports to OvS. In fact, even attaching the physical port
> fails.
> 
> The way I start OvS is with my custom-tailored script
> https://github.com/cslev/nvidia-bluefield-ovs-scripts/blob/main/start_ovs.sh
> starting from line 223.
> 
> 
> After adding the physical port, ovs-vsctl show provides this:
> ```
> Bridge ovs_dpdk_br0
> datapath_type: netdev
> Port dpdk0
> Interface dpdk0
> type: dpdk
> options: {dpdk-devargs=":03:00.0"}
> error: "Error attaching device ':03:00.0' to DPDK"
> Port ovs_dpdk_br0
> Interface ovs_dpdk_br0
> type: internal
> ```
> 
> vswitchd log:
> ```
> ...
> 2023-09-21T09:29:48.626Z|00067|bridge|INFO|bridge ovs_dpdk_br0: added
> interface ovs_dpdk_br0 on port 65534
> 2023-09-21T09:29:48.627Z|00068|bridge|INFO|bridge ovs_dpdk_br0: using
> datapath ID d62195e14c4e
> 2023-09-21T09:29:48.627Z|00069|connmgr|INFO|ovs_dpdk_br0: added service
> controller "punix:/usr/local/var/run/openvswitch/ovs_dpdk_br0.mgmt"
> 2023-09-21T09:29:48.718Z|00070|dpdk|ERR|EAL: Driver cannot attach the
> device (:03:00.0)
> 2023-09-21T09:29:48.718Z|00071|dpdk|ERR|EAL: Failed to attach device on
> primary process
> 2023-09-21T09:29:48.719Z|00072|netdev_dpdk|WARN|Error attaching device
> ':03:00.0' to DPDK
> 2023-09-21T09:29:48.719Z|00073|netdev|WARN|dpdk0: could not set
> configuration (Invalid argument)
> 2023-09-21T09:29:48.719Z|00074|dpdk|ERR|Invalid port_id=32
> 2023-09-21T09:29:57.230Z|00075|memory|INFO|27944 kB peak resident set
> size after 10.3 seconds
> 2023-09-21T09:29:57.230Z|00076|memory|INFO|handlers:1 ports:1
> revalidators:1 rules:5
> ```
> 
> 
> Are there any other log files I can look into for more informative
> debug messages?

You may enable additional DPDK debug logs by setting --log-level
in the dpdk-extra config.  However, your DPDK version is older than
the hardware you're using, meaning the mlx driver in DPDK 19.11
likely just doesn't recognize the hardware and will not be able to
use it.  You need to figure out how to build newer versions of DPDK.
Also, the representor syntax changed a few times in the past, so your
script may not work with older versions of DPDK.
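
For reference, a minimal sketch of how the extra logging can be passed (the mlx5 log type name is an assumption; ovs-vswitchd has to be restarted for dpdk-extra to take effect):

```
# Pass extra EAL arguments to raise DPDK log verbosity for the mlx PMD:
ovs-vsctl set Open_vSwitch . other_config:dpdk-extra="--log-level=pmd.net.mlx5:debug"
```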

If you can't build DPDK on the board itself, cross-compiling may be
an option:
  https://doc.dpdk.org/guides/platform/bluefield.html

Also, OVS 2.14 is likely a poor choice for hardware offloading.
I would not recommend anything below 2.17.  3.1 would be a better choice.

> 
> 
> Cheers,
> levi
> 
> 
> On Wed, 2023-09-20 at 17:20 +0200, Ilya Maximets wrote:
>> On 9/20/23 06:39, Levente Csikor via discuss wrote:
>>> Hi All,
>>>
>>> I have a long lasting problem I have been trying to resolve for
>>> quite
>>> some time. I am playing around with an NVidia SmartNIC (Bluefield-
>>> 2),
>>> which has OvS installed by default. It works well with the kernel
>>> driver, and even TC hardware offloading is working.
>>>
>>> I want to experiment with DPDK, though. 
>>> DPDK is also installed by default on the Bluefield-2.
>>> The details of the versions are as follows:
>>>
>>> OvS 2.17.7
>>> DPDK 22.11.1.4.2
>>>
>>> Following the "NVidia tutorials", I manage to add the physical port
>>> as
>>> a netdev device to an OVS-DPDK bridge, however, adding the virtual
>>> function fails.
>>>
>>> More details about the commands and problem are here:  
>>> https://forums.developer.nvidia.com/t/error-with-configuring-ovs-dpdk-on-bluefiled-2/256030/4
>>>
>>>
>>> Anyway, as a last resort, I thought I give a try to install OvS and
>>> DPDK from scratch following
>>> https://docs.openvswitch.org/en/latest/intro/install/dpdk/
>>>
>>> I used the same version for OvS and DPDK; the latter was anyway the
>>> one
>>> recommended by the OvS documentation.
>>>
>>> During the `make` process, I encounter several errors I cannot
>>> really
>>> resolve
>>> ```
>>> In file included from lib/dp-packet.h:29,
>>>  from lib/bfd.c:28:
>>> lib/netdev-dpdk.h:95:12: warning: ‘struct rte_flow_tunnel’ declared
>>> inside parameter list will not be visible outside of this
>>> definition or
>>> declaration
>>>    95 | struct rte_flow_tunnel *tunnel OVS_UNUSED,
>>>   |    ^~~
>>> lib/netdev-dpdk.h:106:42: warning: ‘struct rte_flow_tunnel’
>>> declared
>>> inside parameter list will not be visible outside of this
>>> definition or
>>> declaration
>>>   106 |   struct rte_flow_tunnel
>>> *tunnel OVS_UNUSED,
>>>   |  ^~~
>>> lib/netdev-dpdk.h:119:12: warning: ‘struct 

Re: [ovs-discuss] OvS-DPDK compilation fails on NVidia Bluefield-2

2023-09-21 Thread Levente Csikor via discuss
Thanks Ilya for the response.

It turned out that the latest DPDK I can manually compile on the
Bluefield-2 without errors is v20.08.

After looking at the table you pointed me to, my best option was
OvS 2.14.9 with DPDK v19.11.13.

Compilation went well without errors; now I have the same problem when
attaching the ports to OvS. In fact, even attaching the physical port
fails.

The way I start OvS is with my custom-tailored script
https://github.com/cslev/nvidia-bluefield-ovs-scripts/blob/main/start_ovs.sh
starting from line 223.


After adding the physical port, ovs-vsctl show provides this:
```
Bridge ovs_dpdk_br0
datapath_type: netdev
Port dpdk0
Interface dpdk0
type: dpdk
options: {dpdk-devargs=":03:00.0"}
error: "Error attaching device ':03:00.0' to DPDK"
Port ovs_dpdk_br0
Interface ovs_dpdk_br0
type: internal
```

vswitchd log:
```
...
2023-09-21T09:29:48.626Z|00067|bridge|INFO|bridge ovs_dpdk_br0: added
interface ovs_dpdk_br0 on port 65534
2023-09-21T09:29:48.627Z|00068|bridge|INFO|bridge ovs_dpdk_br0: using
datapath ID d62195e14c4e
2023-09-21T09:29:48.627Z|00069|connmgr|INFO|ovs_dpdk_br0: added service
controller "punix:/usr/local/var/run/openvswitch/ovs_dpdk_br0.mgmt"
2023-09-21T09:29:48.718Z|00070|dpdk|ERR|EAL: Driver cannot attach the
device (:03:00.0)
2023-09-21T09:29:48.718Z|00071|dpdk|ERR|EAL: Failed to attach device on
primary process
2023-09-21T09:29:48.719Z|00072|netdev_dpdk|WARN|Error attaching device
':03:00.0' to DPDK
2023-09-21T09:29:48.719Z|00073|netdev|WARN|dpdk0: could not set
configuration (Invalid argument)
2023-09-21T09:29:48.719Z|00074|dpdk|ERR|Invalid port_id=32
2023-09-21T09:29:57.230Z|00075|memory|INFO|27944 kB peak resident set
size after 10.3 seconds
2023-09-21T09:29:57.230Z|00076|memory|INFO|handlers:1 ports:1
revalidators:1 rules:5
```


Are there any other log files I can look into for more informative
debug messages?


Cheers,
levi


On Wed, 2023-09-20 at 17:20 +0200, Ilya Maximets wrote:
> On 9/20/23 06:39, Levente Csikor via discuss wrote:
> > Hi All,
> > 
> > I have a long lasting problem I have been trying to resolve for
> > quite
> > some time. I am playing around with an NVidia SmartNIC (Bluefield-
> > 2),
> > which has OvS installed by default. It works well with the kernel
> > driver, and even TC hardware offloading is working.
> > 
> > I want to experiment with DPDK, though. 
> > DPDK is also installed by default on the Bluefield-2.
> > The details of the versions are as follows:
> > 
> > OvS 2.17.7
> > DPDK 22.11.1.4.2
> > 
> > Following the "NVidia tutorials", I manage to add the physical port
> > as
> > a netdev device to an OVS-DPDK bridge, however, adding the virtual
> > function fails.
> > 
> > More details about the commands and problem are here:  
> > https://forums.developer.nvidia.com/t/error-with-configuring-ovs-dpdk-on-bluefiled-2/256030/4
> > 
> > 
> > Anyway, as a last resort, I thought I give a try to install OvS and
> > DPDK from scratch following
> > https://docs.openvswitch.org/en/latest/intro/install/dpdk/
> > 
> > I used the same version for OvS and DPDK; the latter was anyway the
> > one
> > recommended by the OvS documentation.
> > 
> > During the `make` process, I encounter several errors I cannot
> > really
> > resolve
> > ```
> > In file included from lib/dp-packet.h:29,
> >  from lib/bfd.c:28:
> > lib/netdev-dpdk.h:95:12: warning: ‘struct rte_flow_tunnel’ declared
> > inside parameter list will not be visible outside of this
> > definition or
> > declaration
> >    95 | struct rte_flow_tunnel *tunnel OVS_UNUSED,
> >   |    ^~~
> > lib/netdev-dpdk.h:106:42: warning: ‘struct rte_flow_tunnel’
> > declared
> > inside parameter list will not be visible outside of this
> > definition or
> > declaration
> >   106 |   struct rte_flow_tunnel
> > *tunnel OVS_UNUSED,
> >   |  ^~~
> > lib/netdev-dpdk.h:119:12: warning: ‘struct rte_flow_restore_info’
> > declared inside parameter list will not be visible outside of this
> > definition or declaration
> >   119 | struct rte_flow_restore_info *info OVS_UNUSED,
> >   |    ^
> > In file included from lib/bfd.c:28:
> > lib/dp-packet.h:61:40: error: ‘RTE_MBUF_F_RX_RSS_HASH’ undeclared
> > here
> > (not in a function)
> >    61 | DEF_OL_FLAG(DP_PACKET_OL_RSS_HASH,
> > RTE_MBUF_F_RX_RSS_HASH,
> > 0x1),
> >   |   
> > ^~
> > lib/dp-packet.h:52:57: note: in definition of macro ‘DEF_OL_FLAG’
> >    52 | #define DEF_OL_FLAG(NAME, DPDK_DEF, GENERIC_DEF) NAME =
> > DPDK_DEF
> >   |    
> > ^~~~
> > lib/dp-packet.h:63:41: error: ‘RTE_MBUF_F_RX_FDIR_ID’ undeclared
> > here
> > (not in a function)

Re: [ovs-discuss] OvS-DPDK compilation fails on NVidia Bluefield-2

2023-09-20 Thread Ilya Maximets via discuss
On 9/20/23 06:39, Levente Csikor via discuss wrote:
> Hi All,
> 
> I have a long-standing problem I have been trying to resolve for quite
> some time. I am playing around with an NVidia SmartNIC (Bluefield-2),
> which has OvS installed by default. It works well with the kernel
> driver, and even TC hardware offloading is working.
> 
> I want to experiment with DPDK, though. 
> DPDK is also installed by default on the Bluefield-2.
> The details of the versions are as follows:
> 
> OvS 2.17.7
> DPDK 22.11.1.4.2
> 
> Following the "NVidia tutorials", I managed to add the physical port as
> a netdev device to an OVS-DPDK bridge; however, adding the virtual
> function fails.
> 
> More details about the commands and problem are here:  
> https://forums.developer.nvidia.com/t/error-with-configuring-ovs-dpdk-on-bluefiled-2/256030/4
> 
> 
> Anyway, as a last resort, I thought I would give installing OvS and
> DPDK from scratch a try, following
> https://docs.openvswitch.org/en/latest/intro/install/dpdk/
> 
> I used the same version for OvS and DPDK; the latter was anyway the one
> recommended by the OvS documentation.
> 
> During the `make` process, I encounter several errors I cannot really
> resolve
> ```
> In file included from lib/dp-packet.h:29,
>  from lib/bfd.c:28:
> lib/netdev-dpdk.h:95:12: warning: ‘struct rte_flow_tunnel’ declared
> inside parameter list will not be visible outside of this definition or
> declaration
>95 | struct rte_flow_tunnel *tunnel OVS_UNUSED,
>   |^~~
> lib/netdev-dpdk.h:106:42: warning: ‘struct rte_flow_tunnel’ declared
> inside parameter list will not be visible outside of this definition or
> declaration
>   106 |   struct rte_flow_tunnel
> *tunnel OVS_UNUSED,
>   |  ^~~
> lib/netdev-dpdk.h:119:12: warning: ‘struct rte_flow_restore_info’
> declared inside parameter list will not be visible outside of this
> definition or declaration
>   119 | struct rte_flow_restore_info *info OVS_UNUSED,
>   |^
> In file included from lib/bfd.c:28:
> lib/dp-packet.h:61:40: error: ‘RTE_MBUF_F_RX_RSS_HASH’ undeclared here
> (not in a function)
>61 | DEF_OL_FLAG(DP_PACKET_OL_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
> 0x1),
>   |^~
> lib/dp-packet.h:52:57: note: in definition of macro ‘DEF_OL_FLAG’
>52 | #define DEF_OL_FLAG(NAME, DPDK_DEF, GENERIC_DEF) NAME =
> DPDK_DEF
>   |
> ^~~~
> lib/dp-packet.h:63:41: error: ‘RTE_MBUF_F_RX_FDIR_ID’ undeclared here
> (not in a function)
>63 | DEF_OL_FLAG(DP_PACKET_OL_FLOW_MARK, RTE_MBUF_F_RX_FDIR_ID,
> 0x2),
>   | ^
> lib/dp-packet.h:52:57: note: in definition of macro ‘DEF_OL_FLAG’
>52 | #define DEF_OL_FLAG(NAME, DPDK_DEF, GENERIC_DEF) NAME =
> DPDK_DEF
>   |
> ^~~~
> lib/dp-packet.h:65:47: error: ‘RTE_MBUF_F_RX_L4_CKSUM_BAD’ undeclared
> here (not in a function)
>65 | DEF_OL_FLAG(DP_PACKET_OL_RX_L4_CKSUM_BAD,
> RTE_MBUF_F_RX_L4_CKSUM_BAD, 0x4),
>   |  
> ^~
> lib/dp-packet.h:52:57: note: in definition of macro ‘DEF_OL_FLAG’
>52 | #define DEF_OL_FLAG(NAME, DPDK_DEF, GENERIC_DEF) NAME =
> DPDK_DEF
>   |
> ^~~~
> lib/dp-packet.h:67:47: error: ‘RTE_MBUF_F_RX_IP_CKSUM_BAD’ undeclared
> here (not in a function)
>67 | DEF_OL_FLAG(DP_PACKET_OL_RX_IP_CKSUM_BAD,
> RTE_MBUF_F_RX_IP_CKSUM_BAD, 0x8),
>   |  
> ^~
> lib/dp-packet.h:52:57: note: in definition of macro ‘DEF_OL_FLAG’
>52 | #define DEF_OL_FLAG(NAME, DPDK_DEF, GENERIC_DEF) NAME =
> DPDK_DEF
>   |
> ^~~~
> lib/dp-packet.h:69:48: error: ‘RTE_MBUF_F_RX_L4_CKSUM_GOOD’ undeclared
> here (not in a function)
>69 | DEF_OL_FLAG(DP_PACKET_OL_RX_L4_CKSUM_GOOD,
> RTE_MBUF_F_RX_L4_CKSUM_GOOD,
>   |   
> ^~~
> lib/dp-packet.h:52:57: note: in definition of macro ‘DEF_OL_FLAG’
>52 | #define DEF_OL_FLAG(NAME, DPDK_DEF, GENERIC_DEF) NAME =
> DPDK_DEF
>   |
> ^~~~
> lib/dp-packet.h:72:48: error: ‘RTE_MBUF_F_RX_IP_CKSUM_GOOD’ undeclared
> here (not in a function)
>72 | DEF_OL_FLAG(DP_PACKET_OL_RX_IP_CKSUM_GOOD,
> RTE_MBUF_F_RX_IP_CKSUM_GOOD,
>   |   
> ^~~
> lib/dp-packet.h:52:57: note: in definition of macro ‘DEF_OL_FLAG’
>52 | #define DEF_OL_FLAG(NAME, DPDK_DEF, GENERIC_DEF) 

Re: [ovs-discuss] ovs-dpdk hardware offload with Mellanox Connect-X6 DX card

2023-08-24 Thread Eelco Chaudron via discuss



On 24 Aug 2023, at 9:50, xiao k via discuss wrote:

> hi,
> I tried running ovs-dpdk with NAT connection-tracking hardware offload. When I 
> use "ovs-appctl dpctl/dump-flows -m" to show the flow table, the status of the 
> offloaded field is "offloaded:partial", not fully offloaded. Is there any way 
> to make the CT flows fully offloaded?

To my understanding, conntrack is not hardware-offloadable with OVS-DPDK
rte_flow.

//Eelco
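
As a side note, a quick way to summarize how many datapath flows are partially vs. fully offloaded (same -m dump as in the report below):

```
# Count flows per offload state:
ovs-appctl dpctl/dump-flows -m | grep -o 'offloaded:[a-z]*' | sort | uniq -c
```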

> Thank you for your answer!
>
> AlmaLinux release 9.2 (Turquoise Kodkod)
> Linux nat-test 5.14.0-284.11.1.el9_2.x86_64
> ovs-vswitchd (Open vSwitch) 3.1.2
> DPDK 22.11.1
>
> driver: mlx5_core
> version: 23.07-0.5.0
> firmware-version: 22.38.1002 (MT_000359)
> bus-info: :02:00.1
>
> # lspci | grep Mell
> 02:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 
> Dx]
> 02:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 
> Dx]
>
> # ovs-vsctl list Open_vSwitch
> _uuid   : c920f611-5c90-4c8a-9f7a-5b17dbd076fa
> bridges : [6dab16e8-9b15-4933-9dd6-edd0c9d0dc09]
> cur_cfg : 11
> datapath_types  : [netdev, system]
> datapaths   : {}
> db_version  : "8.3.1"
> dpdk_initialized: true
> dpdk_version: "DPDK 22.11.1"
> external_ids: {hostname=localhost, 
> rundir="/usr/local/var/run/openvswitch", system-id=""}
> iface_types : [bareudp, dpdk, dpdkvhostuser, dpdkvhostuserclient, 
> erspan, geneve, gre, gtpu, internal, ip6erspan, ip6gre, lisp, patch, stt, 
> system, tap, vxlan]
> manager_options : []
> next_cfg: 11
> other_config: {dpdk-init="true", hw-offload="true"}
> ovs_version : "3.1.2"
> ssl : []
> statistics  : {}
> system_type : unknown
> system_version  : unknown
>
> # ethtool -k ens1f1 |grep hw-tc-offload
> hw-tc-offload: on
>
> # cat /sys/class/net/ens1f1/compat/devlink/mode
> switchdev
>
> # ovs-vsctl show
> c920f611-5c90-4c8a-9f7a-5b17dbd076fa
> Bridge br0
> fail_mode: secure
> datapath_type: netdev
> Port br0
> Interface br0
> type: internal
> Port pf1
> Interface pf1
> type: dpdk
> options: {dpdk-devargs=":02:00.1"}
> ovs_version: "3.1.2"
>
> # ovs-ofctl dump-flows br0
>  cookie=0x0, duration=184398.446s, table=0, n_packets=0, n_bytes=0, 
> priority=100,ip,nw_dst=99.99.99.0/24 actions=resubmit(,20)
>  cookie=0x2e, duration=184398.446s, table=0, n_packets=1979461876, 
> n_bytes=1979461876000, priority=100,ip,nw_src=172.31.21.160 
> actions=resubmit(,20)
>  cookie=0x0, duration=184398.446s, table=0, n_packets=135734, 
> n_bytes=15186074, priority=0 actions=resubmit(,105)
>  cookie=0x0, duration=184398.446s, table=20, n_packets=0, n_bytes=0, 
> priority=100,ip,nw_dst=99.99.99.0/24 
> actions=ct(commit,table=30,zone=283,nat(src))
>  cookie=0x2e, duration=184398.446s, table=20, n_packets=1979461876, 
> n_bytes=1979461876000, priority=100,ip,nw_src=172.31.21.160 
> actions=ct(commit,table=30,nat(src=99.99.99.0-99.99.99.255))
>  cookie=0x0, duration=184398.446s, table=30, n_packets=1979461844, 
> n_bytes=1979461844000, priority=0,ip 
> actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],move:NXM_OF_ETH_DST[]->NXM_OF_ETH_SRC[],dec_ttl,IN_PORT
>  cookie=0x0, duration=184398.446s, table=105, n_packets=135248, 
> n_bytes=15123293, priority=0 actions=NORMAL
>
> # ovs-appctl dpctl/dump-flows -m
> ufid:6e557ccd-a9dc-42e1-9ea6-f1c9b364aabf, 
> skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(pf1),packet_type(ns=0,id=0),eth(src=88:2a:5e:a9:30:bd/00:00:00:00:00:00,dst=b8:ce:f6:0a:13:29/00:00:00:00:00:00),eth_type(0x0800),ipv4(src=172.31.21.160,dst=9.0.0.10/192.0.0.0,proto=17/0,tos=0/0,ttl=63/0,frag=no),udp(src=2004/0,dst=80/0),
>  packets:186963, bytes:186963000, used:0.000s, offloaded:partial, dp:ovs, 
> actions:ct(commit,nat(src=99.99.99.0-99.99.99.255)),recirc(0x3), 
> dp-extra-info:miniflow_bits(4,2)



Re: [ovs-discuss] ovs-dpdk crash

2023-07-14 Thread 张同剑
Hi,

 

Any update on this issue? 

This issue happens with OVS 3.0.4 and DPDK 21.11.2 too.

 

Best regards

 





Re: [ovs-discuss] ovs-dpdk crash

2023-04-03 Thread Lazuardi Nasution via discuss
Hi,

Any update on this issue? This issue happens with OVS 3.0.1 and DPDK
21.11.3 too.

Best regards,


> Date: Mon, 23 Jan 2023 15:45:37 +0800 (CST)
> From: ?? <13813836...@163.com>
> To: b...@openvswitch.org
> Subject: [ovs-discuss] ovs-dpdk crash
> Message-ID: <6cb12966.2f9.185dd96f9a5.coremail.13813836...@163.com>
> Content-Type: text/plain; charset="gbk"
>
> We use OVS 2.17.2 and DPDK 22.03. After configuring SNAT, we encountered
> coredump problems.
> Please take the trouble to look at these problems, thank you!
>
>
>
> -- next part --
> An embedded and charset-unspecified text was scrubbed...
> Name: crash1.txt
> URL: <
> http://mail.openvswitch.org/pipermail/ovs-discuss/attachments/20230123/fc24053c/attachment-0002.txt
> >
> -- next part --
> An embedded and charset-unspecified text was scrubbed...
> Name: crash2.txt
> URL: <
> http://mail.openvswitch.org/pipermail/ovs-discuss/attachments/20230123/fc24053c/attachment-0003.txt
> >
>
>


Re: [ovs-discuss] OvS DPDK takes too many hugepages

2023-02-16 Thread Ilya Maximets via discuss
On 2/16/23 00:36, Tobias Hofmann (tohofman) wrote:
> Thanks Ilya, I can confirm this solves my problem.

Thanks for the confirmation!

Best regards, Ilya Maximets.

> 
> Regards
> Tobias
> 
> *From: *Ilya Maximets 
> *Date: *Wednesday, 15. February 2023 at 14:56
> *To: *Tobias Hofmann (tohofman) , 
> ovs-discuss@openvswitch.org 
> *Cc: *i.maxim...@ovn.org 
> *Subject: *Re: [ovs-discuss] OvS DPDK takes too many hugepages
> 
> On 2/15/23 22:46, Tobias Hofmann (tohofman) via discuss wrote:
>> Hello everyone,
>> 
>>  
>> 
>> I’m enabling DPDK on a system and I don’t see any errors while doing so. 
>> However, after enabling DPDK, I can see that my system has way less free 
>> hugepages than expected. It should only be using 512 at that point but it’s 
>> using 1914. OvS is the only potential candidate that can be consuming 
>> hugepages at that time. I  tried to verify that it is indeed OvS that is 
>> allocating these hugepages by checking each process under 
>> /proc//smaps but surprisingly I did not find any process having 
>> hugepages under its process ID. I wonder if there is something failing 
>> internally in OvS that is messing up the hugepage allocation.
>> 
>> Here are a few details of the system I’m using:
>> 
>> OvS version: 2.17.3
>> DPDK version: 21.11.0
>> Kernel version: 4.18.0-372.9.1.el8.x86_64
>> 
>> HugePages_Total:    5120
>> HugePages_Free: 3206
>> HugePages_Rsvd:    0
>> HugePages_Surp:    0
>> Hugepagesize:   2048 kB
>> Hugetlb:    10485760 kB
>> 
>> I noticed that the OvS and DPDK versions are not precisely matching with the 
>> official support listed here: 
>> https://docs.openvswitch.org/en/latest/faq/releases/ 
>> 
>> Could that be a reason for this behavior?
>> 
>> I’ve attached the log files of the ovs-vswitchd.log of the time where DPDK 
>> gets enabled to this email.
> 
> Hi, Tobias.
> 
> According to the log, you have DPDK initialized with the following
> configuration:
> 
>   EAL ARGS: ovs-vswitchd --iova-mode=va --socket-mem 1024 --in-memory -l 0.
> 
> You have socket-mem set to 1024, that means that DPDK will *pre-allocate* 
> 1024MB
> on start up.  But you don't have socket-limit option, so the actual memory
> consumption is unlimited.  Hence, OVS can allocate as much memory as it wants.
> 
> OVS stopped configuring socket-limit equal to socket-mem starting from
> release 2.17.  You need to explicitly set the other_config:dpdk-socket-limit
> option in order to limit the amount of memory OVS is allowed to allocate.
> 
> Here is what the release notes in the NEWS file say about 2.17:
> 
>  * EAL argument --socket-limit no longer takes on the value of 
> --socket-mem
>    by default.  'other_config:dpdk-socket-limit' can be set equal to
>    the 'other_config:dpdk-socket-mem' to preserve the legacy memory
>    limiting behavior.
> 
> Best regards, Ilya Maximets.
> 



Re: [ovs-discuss] OvS DPDK takes too many hugepages

2023-02-15 Thread Tobias Hofmann (tohofman) via discuss
Thanks Ilya, I can confirm this solves my problem.

Regards
Tobias

From: Ilya Maximets 
Date: Wednesday, 15. February 2023 at 14:56
To: Tobias Hofmann (tohofman) , ovs-discuss@openvswitch.org 

Cc: i.maxim...@ovn.org 
Subject: Re: [ovs-discuss] OvS DPDK takes too many hugepages
On 2/15/23 22:46, Tobias Hofmann (tohofman) via discuss wrote:
> Hello everyone,
>
>
>
> I’m enabling DPDK on a system and I don’t see any errors while doing so. 
> However, after enabling DPDK, I can see that my system has way less free 
> hugepages than expected. It should only be using 512 at that point but it’s 
> using 1914. OvS is the only potential candidate that can be consuming 
> hugepages at that time. I  tried to verify that it is indeed OvS that is 
> allocating these hugepages by checking each process under 
> /proc//smaps but surprisingly I did not find any process having 
> hugepages under its process ID. I wonder if there is something failing 
> internally in OvS that is messing up the hugepage allocation.
>
> Here are a few details of the system I’m using:
>
> OvS version: 2.17.3
> DPDK version: 21.11.0
> Kernel version: 4.18.0-372.9.1.el8.x86_64
>
> HugePages_Total:5120
> HugePages_Free: 3206
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:   2048 kB
> Hugetlb:10485760 kB
>
> I noticed that the OvS and DPDK versions are not precisely matching with the 
> official support listed here: 
> https://docs.openvswitch.org/en/latest/faq/releases/ 
>
> Could that be a reason for this behavior?
>
> I’ve attached the log files of the ovs-vswitchd.log of the time where DPDK 
> gets enabled to this email.

Hi, Tobias.

According to the log, you have DPDK initialized with the following
configuration:

  EAL ARGS: ovs-vswitchd --iova-mode=va --socket-mem 1024 --in-memory -l 0.

You have socket-mem set to 1024, that means that DPDK will *pre-allocate* 1024MB
on start up.  But you don't have socket-limit option, so the actual memory
consumption is unlimited.  Hence, OVS can allocate as much memory as it wants.

OVS stopped configuring socket-limit equal to socket-mem starting from
release 2.17.  You need to explicitly set the other_config:dpdk-socket-limit
option in order to limit the amount of memory OVS is allowed to allocate.

Here is what the release notes in the NEWS file say about 2.17:

 * EAL argument --socket-limit no longer takes on the value of --socket-mem
   by default.  'other_config:dpdk-socket-limit' can be set equal to
   the 'other_config:dpdk-socket-mem' to preserve the legacy memory
   limiting behavior.

Best regards, Ilya Maximets.


Re: [ovs-discuss] OvS DPDK takes too many hugepages

2023-02-15 Thread Ilya Maximets via discuss
On 2/15/23 22:46, Tobias Hofmann (tohofman) via discuss wrote:
> Hello everyone,
> 
>  
> 
> I’m enabling DPDK on a system and I don’t see any errors while doing so. 
> However, after enabling DPDK, I can see that my system has way less free 
> hugepages than expected. It should only be using 512 at that point but it’s 
> using 1914. OvS is the only potential candidate that can be consuming 
> hugepages at that time. I  tried to verify that it is indeed OvS that is 
> allocating these hugepages by checking each process under 
> /proc//smaps but surprisingly I did not find any process having 
> hugepages under its process ID. I wonder if there is something failing 
> internally in OvS that is messing up the hugepage allocation.
> 
> Here are a few details of the system I’m using:
> 
> OvS version: 2.17.3
> DPDK version: 21.11.0
> Kernel version: 4.18.0-372.9.1.el8.x86_64
> 
> HugePages_Total:    5120
> HugePages_Free: 3206
> HugePages_Rsvd:    0
> HugePages_Surp:    0
> Hugepagesize:   2048 kB
> Hugetlb:    10485760 kB
> 
> I noticed that the OvS and DPDK versions are not precisely matching with the 
> official support listed here: 
> https://docs.openvswitch.org/en/latest/faq/releases/ 
> 
> 
> Could that be a reason for this behavior?
> 
> I’ve attached the log files of the ovs-vswitchd.log of the time where DPDK 
> gets enabled to this email.

Hi, Tobias.

According to the log, you have DPDK initialized with the following
configuration:

  EAL ARGS: ovs-vswitchd --iova-mode=va --socket-mem 1024 --in-memory -l 0.

You have socket-mem set to 1024, that means that DPDK will *pre-allocate* 1024MB
on start up.  But you don't have socket-limit option, so the actual memory
consumption is unlimited.  Hence, OVS can allocate as much memory as it wants.

OVS stopped configuring socket-limit equal to socket-mem starting from
release 2.17.  You need to explicitly set the other_config:dpdk-socket-limit
option in order to limit the amount of memory OVS is allowed to allocate.
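
A minimal sketch of that setting (the value 1024 mirrors the socket-mem from the log above and is only an assumption about the desired cap):

```
# Limit DPDK memory consumption per NUMA socket to the pre-allocated amount:
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-limit="1024"
```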

Here is what the release notes in the NEWS file say about 2.17:

 * EAL argument --socket-limit no longer takes on the value of --socket-mem
   by default.  'other_config:dpdk-socket-limit' can be set equal to
   the 'other_config:dpdk-socket-mem' to preserve the legacy memory
   limiting behavior.

Best regards, Ilya Maximets.


Re: [ovs-discuss] OVS-DPDK in Virtual Machine (KVM)

2022-11-20 Thread Lazuardi Nasution via discuss
Hi,

It seems that you can attach an SR-IOV VF of the NIC to the VM. You may use
the suitable PMD for that NIC for this purpose.

Best regards.
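
A minimal sketch of what that can look like inside the guest (the VF's PCI address, the bridge name, and the vfio-pci binding are assumptions; some VF PMDs have their own binding requirements):

```
# Bind the passed-through VF to vfio-pci, then add it to the in-guest OVS-DPDK bridge:
dpdk-devbind.py --bind=vfio-pci 0000:00:05.0
ovs-vsctl add-port br0 dpdk-vf0 -- set Interface dpdk-vf0 type=dpdk \
    options:dpdk-devargs=0000:00:05.0
```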

Date: Fri, 18 Nov 2022 08:51:51 +
> From: Rohan Bose 
> To: "ovs-discuss@openvswitch.org" 
> Subject: [ovs-discuss] OVS-DPDK in Virtual Machine (KVM)
> Message-ID: <81078bbb1ac24a0193483afa8e499...@mailbox.tu-dresden.de>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello all,
>
>
> I need to run OVS-DPDK version inside several virtual machines for testing
> out some scenarios. Is it possible to install OVS-DPDK inside the VM and
> configure one ore more of its interfaces as dpdk ports? If yes what type of
> interfaces do I need to attach to the VM and which pmd drivers can I use
> for that?
>
>
> Thanks and Regards,
>
> Rohan


Re: [ovs-discuss] ovs-dpdk conntrack assert panic

2022-10-25 Thread Plato, Michael via discuss
Hi,
it looks like I ran into the same bug 
(https://mail.openvswitch.org/pipermail/ovs-discuss/2022-September/052065.html).
 Did you find a solution for the problem?

Best regards

Michael



Re: [ovs-discuss] OVS DPDK KNI support

2022-04-04 Thread David Marchand
On Mon, Apr 4, 2022 at 10:06 AM Rahul Shah via discuss
 wrote:
>
> I am testing the OVS-DPDK with KNI port type but have been getting the ‘Not 
> supported’ error. I checked my OVS iface_types and KNI is not listed in the 
> port type.
>
> I am using the latest OVS and DPDK 21.11. The DPDK is built with KNI support 
> and the KNI example in the app runs successfully but when I add a port in the 
> OVS it doesn’t.
>
> ovs-vsctl add-port br0 vEth0 -- set Interface vEth0 type=dpdkkni 
> ofport_request=2
>
> Is the KNI support present in the latest OVS? Or do I need to patch anything 
> in OVS?

I can't tell if it works (I never really used kni, and kni is a legacy
thing even in dpdk).
But from a configuration pov, you should add a 'dpdk' type port and
pass the right devargs.
Something like:

# ovs-vsctl add-port br0 vEth0 -- set Interface vEth0 type=dpdk
ofport_request=2 -- set Interface vEth0 options:dpdk-devargs=net_kni0


-- 
David Marchand



Re: [ovs-discuss] ovs-dpdk and userspace tso

2021-11-06 Thread Satish Patel
I believe I had the same issue, and a patch is available in the 2.16.x version of OVS.
TSO TX offload is not supported in OVS. 

I'm not an expert, so let someone else chime in. 

Sent from my iPhone
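
For reference, the userspace TSO feature discussed in the quoted message below is controlled by a single knob that is read at ovs-vswitchd startup (shown here only as a pointer, not as a fix for the reported problem):

```
# Enable userspace TSO support for vhost-user ports in OVS-DPDK:
ovs-vsctl set Open_vSwitch . other_config:userspace-tso-enable=true
```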

> On Nov 5, 2021, at 2:32 PM, Thilak Raj Surendra Babu 
>  wrote:
> 
> 
> Hello Folks,
> I have a host with a DPDK physical interface attached to br0 and access to 
> this host is through an IP address on the bridge interface. 
> On enabling userspace_tso with the intention of bringing up VM’s with 
> vhost_user.
> Strangely I noticed that after ssh to this host from an external host no 
> packets are being transmitted out of this interface.
> 
> I noticed that flows were intact and counters for the flows were not 
> increasing, and it looked more like the packets were not transmitted out of 
> the interface when tso is enabled.
> Debugging with gdb(rte_eth_tx_burst returned 0) confirmed this theory and 
> ovs_tx_failure_drops was also increasing.
> 
> Enabling CONFIG_RTE_LIBRTE_IXGBE_DEBUG_RX,  
> CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX,CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE,
> CONFIG_RTE_LIBRTE_ETHDEV_DEBUG in the DPDK build showed these logs being 
> emitted,
> 
> 2021-11-05T13:34:43.794Z|02813|netdev_dpdk|WARN|eth2: Output batch contains 
> invalid packets. Only 3/8 are valid: Invalid argument
> 2021-11-05T17:39:21.129Z|15064|netdev_dpdk|WARN|eth2: Output batch contains 
> invalid packets. Only 0/5 are valid: Invalid argument
> 
> I think this suggests that some of the offload flags were not set properly.
> 
> Though by enabling CONFIG_RTE_LIBRTE_ETHDEV_DEBUG tx is not hung as the 
> packet with improper offload flags is dropped.
> However, I am curious to know if this has been seen by other folks and fixed 
> in later versions.
> 
> I am running the below version.
> ovs-vswitchd (Open vSwitch) 2.14.2
> DPDK 19.11.9
> 
> Please let me know if you need additional details.
>  
> Thanks
> Thilak Raj S


Re: [ovs-discuss] ovs-dpdk performance

2021-11-01 Thread Satish Patel
Thank you for the reply, Kevin.

I do have a multi-queue configuration, as you can see below. Let me tell you
what tools I am using for traffic generation. We run VoIP services (pretty
much telco style), so we need low network latency to provide a quality audio
experience for customers. We developed an in-house load-testing tool which
emulates VoIP calls, sends audio traffic, and returns stats like drops/jitter.
In short, the audio RTP protocol is UDP-based with 150-byte packets, and they
hit the CPU pretty hard. When I run the same load test on an SR-IOV VM, I see
very clean audio quality without a single packet drop, and I can load 20k
people on a single 8-vCPU VM (that is my baseline). When I run a similar test
on a VM on the DPDK-based host machine, I see very high latency and choppy
audio quality.

When my PMD hits 70% and above of the processing cycles, I start seeing
packet drops (not during pinging the VM) and the audio quality degrades,
which means I am hitting a limit somewhere.

Note: my VM isn't running any DPDK-aware application, so do you think my VM
is the bottleneck here? I heard that when you are using an OVS-DPDK-based
compute node, your guest VM also needs to be DPDK-aware.

[root@vm0 ~]# ethtool -i eth0
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: :00:03.0
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

[root@vm0~]# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX: 0
TX: 0
Other: 0
Combined: 8
Current hardware settings:
RX: 0
TX: 0
Other: 0
Combined: 8

On Mon, Nov 1, 2021 at 1:57 PM Kevin Traynor  wrote:
>
> On 30/10/2021 06:07, Satish Patel wrote:
> > Folks,
> >
> > I have configured ovs-dpdk to replace sriov deployment for bonding
> > support. everything good but somehow as soon as i start hitting
> > 200kpps rate i start seeing packet drop.
> >
> > I have configured CPU isolation as per documentation to assign a
> > dedicated pmd thread. I have assigned 8 dedicated PMD threads but
> > still performance is very poor.
> >
> > I created an 8vCPU vm on openstack using dpdk and running some
> > workload using an in-house application, during the 200kpps packet rate
> > I noticed my all PMD cpu showing high CPU processing cycles.
> >
>
> Not sure what you class as 'high CPU processing cycles', but if you mean
> at the point of hitting a CPU bottleneck and the stats below are taken
> with constant traffic, none of them would indicate that is the case.
>
> >   $ ovs-vswitchd -V
> > ovs-vswitchd (Open vSwitch) 2.13.3
> > DPDK 19.11.7
> >
> > In the following output what does these queue-id:0 to 8 and why only
> > the first queue is in use but not others, they are always zero. What
> > does this mean?
> >
>
> It means the application is only sending packets to OVS on a single
> queue. It may be that the application does not do multi-queue, or
> because the application sends on the same queue number that it receives
> traffic on and it is only receiving traffic on one queue.
>
> > ovs-appctl dpif-netdev/pmd-rxq-show
> > pmd thread numa_id 0 core_id 2:
> >isolated : false
> >port: vhu1c3bf17a-01queue-id:  0 (enabled)   pmd usage:  0 %
> >port: vhu1c3bf17a-01queue-id:  1 (enabled)   pmd usage:  0 %
> >port: vhu6b7daba9-1aqueue-id:  2 (disabled)  pmd usage:  0 %
> >port: vhu6b7daba9-1aqueue-id:  3 (disabled)  pmd usage:  0 %
> > pmd thread numa_id 1 core_id 3:
> >isolated : false
> > pmd thread numa_id 0 core_id 22:
> >isolated : false
> >port: vhu1c3bf17a-01queue-id:  3 (enabled)   pmd usage:  0 %
> >port: vhu1c3bf17a-01queue-id:  6 (enabled)   pmd usage:  0 %
> >port: vhu6b7daba9-1aqueue-id:  0 (enabled)   pmd usage: 54 %
> >port: vhu6b7daba9-1aqueue-id:  5 (disabled)  pmd usage:  0 %
> > pmd thread numa_id 1 core_id 23:
> >isolated : false
> >port: dpdk1 queue-id:  0 (enabled)   pmd usage:  3 %
> > pmd thread numa_id 0 core_id 26:
> >isolated : false
> >port: vhu1c3bf17a-01queue-id:  2 (enabled)   pmd usage:  0 %
> >port: vhu1c3bf17a-01queue-id:  7 (enabled)   pmd usage:  0 %
> >port: vhu6b7daba9-1aqueue-id:  1 (disabled)  pmd usage:  0 %
> >port: vhu6b7daba9-1aqueue-id:  4 (disabled)  pmd usage:  0 %
> > pmd thread numa_id 1 core_id 27:
> >isolated : false
> > pmd thread numa_id 0 core_id 46:
> >isolated : false
> >port: dpdk0 queue-id:  0 (enabled)   pmd usage:  27 %
> >port: vhu1c3bf17a-01queue-id:  4 (enabled)   pmd usage:  0 %
> >port: vhu1c3bf17a-01queue-id:  5 (enabled)   pmd usage:  0 %
> >port: vhu6b7daba9-1aqueue-id:  6 (disabled)  pmd usage:  0 %
> >port: vhu6b7daba9-1aqueue-id:  7 (disabled)  pmd usage:  0 %
> > pmd thread numa_id 1 core_id 47:
> >isolated : false
> >
> >
> > $ ovs-appctl dpif-netdev/pmd-stats-clear && sleep 10 && ovs-appctl
> > 

Re: [ovs-discuss] ovs-dpdk performance

2021-11-01 Thread Kevin Traynor

On 30/10/2021 06:07, Satish Patel wrote:

Folks,

I have configured ovs-dpdk to replace sriov deployment for bonding
support. everything good but somehow as soon as i start hitting
200kpps rate i start seeing packet drop.

I have configured CPU isolation as per documentation to assign a
dedicated pmd thread. I have assigned 8 dedicated PMD threads but
still performance is very poor.

I created an 8vCPU vm on openstack using dpdk and running some
workload using an in-house application, during the 200kpps packet rate
I noticed my all PMD cpu showing high CPU processing cycles.



Not sure what you class as 'high CPU processing cycles', but if you mean 
at the point of hitting a CPU bottleneck and the stats below are taken 
with constant traffic, none of them would indicate that is the case.



  $ ovs-vswitchd -V
ovs-vswitchd (Open vSwitch) 2.13.3
DPDK 19.11.7

In the following output what does these queue-id:0 to 8 and why only
the first queue is in use but not others, they are always zero. What
does this mean?



It means the application is only sending packets to OVS on a single 
queue. It may be that the application does not do multi-queue, or 
because the application sends on the same queue number that it receives 
traffic on and it is only receiving traffic on one queue.
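
A small sketch of checking this from inside the guest (the interface name is illustrative; OVS can only spread load across the queues the guest actually enables):

```
# Show maximum vs. currently enabled virtio queue pairs, then enable more if supported:
ethtool -l eth0
ethtool -L eth0 combined 8
```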



ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 2:
   isolated : false
   port: vhu1c3bf17a-01queue-id:  0 (enabled)   pmd usage:  0 %
   port: vhu1c3bf17a-01queue-id:  1 (enabled)   pmd usage:  0 %
   port: vhu6b7daba9-1aqueue-id:  2 (disabled)  pmd usage:  0 %
   port: vhu6b7daba9-1aqueue-id:  3 (disabled)  pmd usage:  0 %
pmd thread numa_id 1 core_id 3:
   isolated : false
pmd thread numa_id 0 core_id 22:
   isolated : false
   port: vhu1c3bf17a-01queue-id:  3 (enabled)   pmd usage:  0 %
   port: vhu1c3bf17a-01queue-id:  6 (enabled)   pmd usage:  0 %
   port: vhu6b7daba9-1aqueue-id:  0 (enabled)   pmd usage: 54 %
   port: vhu6b7daba9-1aqueue-id:  5 (disabled)  pmd usage:  0 %
pmd thread numa_id 1 core_id 23:
   isolated : false
   port: dpdk1 queue-id:  0 (enabled)   pmd usage:  3 %
pmd thread numa_id 0 core_id 26:
   isolated : false
   port: vhu1c3bf17a-01queue-id:  2 (enabled)   pmd usage:  0 %
   port: vhu1c3bf17a-01queue-id:  7 (enabled)   pmd usage:  0 %
   port: vhu6b7daba9-1aqueue-id:  1 (disabled)  pmd usage:  0 %
   port: vhu6b7daba9-1aqueue-id:  4 (disabled)  pmd usage:  0 %
pmd thread numa_id 1 core_id 27:
   isolated : false
pmd thread numa_id 0 core_id 46:
   isolated : false
   port: dpdk0 queue-id:  0 (enabled)   pmd usage:  27 %
   port: vhu1c3bf17a-01queue-id:  4 (enabled)   pmd usage:  0 %
   port: vhu1c3bf17a-01queue-id:  5 (enabled)   pmd usage:  0 %
   port: vhu6b7daba9-1aqueue-id:  6 (disabled)  pmd usage:  0 %
   port: vhu6b7daba9-1aqueue-id:  7 (disabled)  pmd usage:  0 %
pmd thread numa_id 1 core_id 47:
   isolated : false


$ ovs-appctl dpif-netdev/pmd-stats-clear && sleep 10 && ovs-appctl
dpif-netdev/pmd-stats-show | grep "processing cycles:"
   processing cycles: 1697952 (0.01%)
   processing cycles: 12726856558 (74.96%)
   processing cycles: 4259431602 (19.40%)
   processing cycles: 512666 (0.00%)
   processing cycles: 6324848608 (37.81%)

Does processing cycles mean my PMD is under stress? but i am only
hitting 200kpps rate?


This is my dpdk0 and dpdk1 port statistics

sudo ovs-vsctl get Interface dpdk0 statistics
{flow_director_filter_add_errors=153605,
flow_director_filter_remove_errors=30829, mac_local_errors=0,
mac_remote_errors=0, ovs_rx_qos_drops=0, ovs_tx_failure_drops=0,
ovs_tx_invalid_hwol_drops=0, ovs_tx_mtu_exceeded_drops=0,
ovs_tx_qos_drops=0, rx_128_to_255_packets=64338613,
rx_1_to_64_packets=367, rx_256_to_511_packets=116298,
rx_512_to_1023_packets=31264, rx_65_to_127_packets=6990079,
rx_broadcast_packets=0, rx_bytes=12124930385, rx_crc_errors=0,
rx_dropped=0, rx_errors=12, rx_fcoe_crc_errors=0, rx_fcoe_dropped=12,
rx_fcoe_mbuf_allocation_errors=0, rx_fragment_errors=367,
rx_illegal_byte_errors=0, rx_jabber_errors=0, rx_length_errors=0,
rx_mac_short_packet_dropped=128, rx_management_dropped=35741,
rx_management_packets=31264, rx_mbuf_allocation_errors=0,
rx_missed_errors=0, rx_oversize_errors=0, rx_packets=71512362,
rx_priority0_dropped=0, rx_priority0_mbuf_allocation_errors=1096,
rx_priority1_dropped=0, rx_priority1_mbuf_allocation_errors=0,
rx_priority2_dropped=0, rx_priority2_mbuf_allocation_errors=0,
rx_priority3_dropped=0, rx_priority3_mbuf_allocation_errors=0,
rx_priority4_dropped=0, rx_priority4_mbuf_allocation_errors=0,
rx_priority5_dropped=0, rx_priority5_mbuf_allocation_errors=0,
rx_priority6_dropped=0, rx_priority6_mbuf_allocation_errors=0,
rx_priority7_dropped=0, rx_priority7_mbuf_allocation_errors=0,
rx_undersize_errors=6990079, tx_128_to_255_packets=64273778,
tx_1_to_64_packets=128, tx_256_to_511_packets=43670294,
tx_512_to_1023_packets=153605, 

Re: [ovs-discuss] ovs-dpdk: can't set n_txq for dpdk interface

2021-03-22 Thread chengt...@qq.com
> It is not a user option for any NIC on the OVS-DPDK datapath afaik. The
> number of requested txqs is derived from the number of pmd threads. It
> is pmd threads +1, to give each of them and the main thread a dedicated
> txq. This is why you see 5 txq with 4 pmds.

For a dpdkvhostuser port, does it mean that the VM can't receive packets
from more than N queues, where N = pmd_num + 1?
If this is the case, VM rx performance could be poor because the VM can
use at most pmd_num + 1 cores to receive packets.
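
One way to see how many rx/tx queues OVS has actually set up per port is 
the datapath port listing (a sketch; the exact fields printed vary a 
little between OVS versions):

  # each dpdk/vhost port is listed with its configured/requested rx and tx queue counts
  $ ovs-appctl dpif/show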



chengt...@qq.com
 
From: Kevin Traynor
Date: 2021-03-09 02:54
To: George Diamantopoulos; ovs-discuss
Subject: Re: [ovs-discuss] ovs-dpdk: can't set n_txq for dpdk interface
On 07/03/2021 03:57, George Diamantopoulos wrote:
> Hello all,
> 
> It appears that setting the n_txq option has no effect for dpdk Interfaces,
> e.g.: "ovs-vsctl set Interface dpdk-eno1 options:n_txq=2".
> 
> n_txq appears to be hardcoded to "5" for my driver (BNX2X PMD), for some
> reason.
> 
 
It is not a user option for any NIC on the OVS-DPDK datapath afaik. The
number of requested txqs is derived from the number of pmd threads. It
is pmd threads +1, to give each of them and the main thread a dedicated
txq. This is why you see 5 txq with 4 pmds.
 
> An additional problem is, the driver won't allow setting n_rxq to a lower
> value than n_txq, and with 5 being hardcoded for txq, it means I can only
> bring the interface up with 5 rxq as well. For 2 ports, that makes 10 PMD
> threads, and I don't want/need to dedicate 10 cores to PMD...
> 
 
This rxq part seems a DPDK PMD driver limitation for this NIC, but it is
not related to num of PMD threads. Num of RxQ and PMD threads are
independent from each other.
 
> I have tried running DPDK's testpmd with this driver, and it successfully
> starts with 1 rxq + 1 txq, so I believe the issue lies with OVS-DPDK.
> 
 
It's more an integration issue. OVS-DPDK sets the txq based on the num of
PMD threads; it is only a problem because this driver rejects that
number based on its limitation, which other NICs don't have. As
mentioned on irc, you could contact the driver maintainers about the
limitation.
 
> Indeed, while there is a call of smap_get_int() in lib/netdev-dpdk.c for
> n_rxq, there doesn't seem to be one for n_txq. I tried a quick hack to fix
> this by replicating dpdk_set_rxq_config() for txq, and calling it
> immediately after dpdk_set_rxq_config() is called in the code (it is called
> only once), but naturally that didn't work. Perhaps
> netdev_dpdk_set_tx_multiq() is involved here, but at that point my
> programming skills are beginning to fail me. Even more frustratingly, I
> can't seem to find where the dreaded number 5 is defined for transmit
> queues in the code...
> 
> Are there any known workarounds to this problem? Is it a bug? Thanks!
> 
 
I suggest setting n_rxq >= (pmd threads + 1) when adding the interface;
this should work around the driver requirements you've mentioned. In the
best case, RSS will actually use each Rxq; in the worst case a queue will
be polled by a PMD thread with no traffic, which won't use too many cycles.
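
For example (a sketch using the port name from this thread), with 4 PMD 
threads that would be:

  $ ovs-vsctl set Interface dpdk-eno1 options:n_rxq=5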
 
> Best regards,
> George
> 
> 


Re: [ovs-discuss] Ovs-dpdk bonding error

2021-03-11 Thread KhacThuan Bk
Thanks for your reply.
This is my profile:

kernel: 3.10.0-1160.el7.x86_64
openvswitch: 2.12.0
dpdk: 18.11.8
NIC: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
driver: i40e
firmware-version: 1.67, 0x8fa8, 19.5.12

I’ll try new version.
How does the DPDK config change when the server enables SR-IOV with
'intel_iommu=on iommu=pt'?
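
Not an answer from this thread, but when the host keeps intel_iommu=on 
iommu=pt, a common approach is to bind the ports to vfio-pci instead of 
igb_uio, since vfio-pci is the driver meant to be used with the IOMMU 
enabled; a sketch only (PCI addresses are placeholders, and whether this 
avoids the eth_i40e_dev_init error seen here is untested):

  # load the vfio driver and rebind both bond members to it before starting OVS
  $ sudo modprobe vfio-pci
  $ sudo dpdk-devbind --bind=vfio-pci <pci-address-of-em1> <pci-address-of-em2>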

On Thu, 11 Mar 2021 at 15:54, Finn, Emma  wrote:

> Hi,
>
>
>
> What specific versions of OvS, DPDK, kernel and i40e driver are you using?
>
> Have you tried moving to a newer release? I tried with OvS 2.13.1 and DPDK
> 19.11.2 and saw no error when I created the bond port listed below.
>
>
>
> Thanks,
>
> Emma Finn
>
>
>
> *From:* discuss  *On Behalf Of *KhacThuan
> Bk
> *Sent:* Tuesday 9 March 2021 16:57
> *To:* ovs-discuss@openvswitch.org
> *Subject:* [ovs-discuss] Ovs-dpdk bonding error
>
>
>
> Dear All,
>
>
>
> I'm using ovs-dpdk with lacp bond_mode=balance-tcp.
>
> VT-x enabled via intel_iommu=on iommu=pt.
>
> Sometimes, when ovs-dpdk starts, it raises an exception like the one below.
>
> I have tried removing 'intel_iommu=on iommu=pt', and it then starts
> successfully.
>
> We are using 'Intel Corporation Ethernet Controller X710 for 10GbE SFP+'
> with ovs 2.12 and dpdk 18.11.
>
> I don't have any info about the error code '|dpdk|ERR|eth_i40e_dev_init():
> Failed to do parameter init: -22'.
>
> Has anyone encountered this problem?
>
>
>
> [root@COMPUTE01 admin]# cat /proc/cmdline
>
> BOOT_IMAGE=/vmlinuz-3.10.0-1160.el7.x86_64 root=/dev/mapper/vg00-lv_root ro
> net.ifnames=1 crashkernel=2048M spectre_v2=retpoline rd.lvm.lv
> =vg00/lv_root
> rd.lvm.lv=vg00/lv_swap rd.lvm.lv=vg00/lv_usr rhgb quiet
> default_hugepagesz=1G hugepagesz=1G hugepages=228 intel_iommu=on iommu=pt
> isolcpus=2-35,38-71
>
>
>
>
>
>
>
> [root@COMPUTE01 admin]# dmesg | grep -e DMAR -e IOMMU
>
> [ 0.00] ACPI: DMAR 6fffd000 00250 (v01 DELL PE_SC3
> 0001 DELL 0001)
>
> [ 0.00] DMAR: IOMMU enabled
>
> [ 1.656143] DMAR: Hardware identity mapping for device :19:00.0
>
> [ 1.656145] DMAR: Hardware identity mapping for device :19:00.1
>
>
>
>
>
> [root@COMPUTE01 admin]# ovs-vsctl add-bond br-vlan bond-vlan em1 em2
> bond_mode=balance-tcp lacp=active other-config:lacp-fallback-ab=true
> other_config:bond-detect-mode=miimon other_config:lacp-time=fast
> other_config:bond-miimon-interval=100 other_config:bond_updelay=1000 -- set
> Interface em1 type=dpdk options:dpdk-devargs=:19:00.0
> other_config:pci_address=:19:00.0 other_config:driver=igb_uio
> other_config:previous_driver=i40e -- set Interface em2 type=dpdk
> options:dpdk-devargs=:19:00.1 other_config:pci_address=:19:00.1
> other_config:driver=igb_uio other_config:previous_driver=i40e
>
>
>
> [root@COMPUTE01 admin]# ovs-vsctl show
>
> 295dd51d-db1d-463a-b8f9-865580d1f1b1
>
> Manager "ptcp:6640:127.0.0.1"
>
> is_connected: true
>
> Bridge br-vlan
>
> Controller "tcp:127.0.0.1:6633"
>
> is_connected: true
>
> fail_mode: secure
>
> datapath_type: netdev
>
> Port bond-vlan
>
> Interface "em1"
>
> type: dpdk
>
> options: {dpdk-devargs=":19:00.0"}
>
> error: "Error attaching device ':19:00.0' to DPDK"
>
> Interface "em2"
>
> type: dpdk
>
> options: {dpdk-devargs=":19:00.1"}
>
> error: "Error attaching device ':19:00.1' to DPDK"
>
>
>
> [root@COMPUTE01 admin]# cat /var/log/ovs-vswitchd.log
>
> 2021-03-08T10:34:01Z|00159|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.12.0
>
> 2021-03-08T10:34:04Z|00160|dpdk|INFO|EAL: PCI device :19:00.0 on NUMA
> socket 0
>
> 2021-03-08T10:34:04Z|00161|dpdk|INFO|EAL: probe driver: 8086:1572
> net_i40e
>
> 2021-03-08T10:34:04Z|00162|dpdk|WARN|i40e_check_write_global_reg(): i40e
> device :19:00.0 changed global register [0x002676fc]. original:
> 0x, new: 0x
>
> 2021-03-08T10:34:04Z|00163|dpdk|WARN|i40e_check_write_global_reg(): i40e
> device :19:00.0 changed global register [0x0026770c]. original:
> 0x, new: 0x
>
> 2021-03-08T10:34:04Z|00164|dpdk|WARN|i40e_check_write_global_reg(): i40e
> device :19:00.0 changed global register [0x00267710]. original:
> 0x, new: 0x
>
> 2021-03-08T10:34:04Z|00165|dpdk|WARN|i40e_check_write_global_reg(): i40e
> device :19:00.0 changed global register [0x00267714]. original:
> 0x, new: 0x
>
> 2021-03-08T10:34:04Z|00166|dpdk|WARN|i40e_check_write_global_reg(): i40e
> device :19:00.0 changed global register [0x0026771c]. original:
> 0x, new: 0x
>
> 2021-03-08T10:34:04Z|00167|dpdk|WARN|i40e_check_write_global_reg(): i40e
> device :19:00.0 changed global register [0x00267724]. original:
> 0x, new: 0x
>
> 2021-03-08T10:34:04Z|00168|dpdk|WARN|i40e_check_write_global_reg(): i40e
> device :19:00.0 changed global register [0x0026774c]. original:
> 0x, new: 0x
>
> 2021-03-08T10:34:04Z|00169|dpdk|WARN|i40e_check_write_global_reg(): i40e
> device :19:00.0 changed global register [0x0026775c]. original:
> 0x, new: 0x
>
> 

Re: [ovs-discuss] Ovs-dpdk bonding error

2021-03-11 Thread Finn, Emma
Hi,

What specific versions of OvS, DPDK, kernel and i40e driver are you using?
Have you tried moving to a newer release? I tried with OvS 2.13.1 and DPDK
19.11.2 and saw no error when I created the bond port listed below.

Thanks,
Emma Finn

From: discuss  On Behalf Of KhacThuan Bk
Sent: Tuesday 9 March 2021 16:57
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] Ovs-dpdk bonding error

Dear All,



I'm using ovs-dpdk with lacp bond_mode=balance-tcp.

VT-x enabled via intel_iommu=on iommu=pt.

Sometimes, when ovs-dpdk starts, it raises an exception like the one below.

I have tried removing 'intel_iommu=on iommu=pt', and it then starts
successfully.

We are using 'Intel Corporation Ethernet Controller X710 for 10GbE SFP+'
with ovs 2.12 and dpdk 18.11.

I don't have any info about the error code '|dpdk|ERR|eth_i40e_dev_init():
Failed to do parameter init: -22'.

Has anyone encountered this problem?



[root@COMPUTE01 admin]# cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.10.0-1160.el7.x86_64 root=/dev/mapper/vg00-lv_root ro
net.ifnames=1 crashkernel=2048M spectre_v2=retpoline 
rd.lvm.lv=vg00/lv_root
rd.lvm.lv=vg00/lv_swap 
rd.lvm.lv=vg00/lv_usr rhgb quiet
default_hugepagesz=1G hugepagesz=1G hugepages=228 intel_iommu=on iommu=pt
isolcpus=2-35,38-71







[root@COMPUTE01 admin]# dmesg | grep -e DMAR -e IOMMU

[ 0.00] ACPI: DMAR 6fffd000 00250 (v01 DELL PE_SC3
0001 DELL 0001)

[ 0.00] DMAR: IOMMU enabled

[ 1.656143] DMAR: Hardware identity mapping for device :19:00.0

[ 1.656145] DMAR: Hardware identity mapping for device :19:00.1





[root@COMPUTE01 admin]# ovs-vsctl add-bond br-vlan bond-vlan em1 em2
bond_mode=balance-tcp lacp=active other-config:lacp-fallback-ab=true
other_config:bond-detect-mode=miimon other_config:lacp-time=fast
other_config:bond-miimon-interval=100 other_config:bond_updelay=1000 -- set
Interface em1 type=dpdk options:dpdk-devargs=:19:00.0
other_config:pci_address=:19:00.0 other_config:driver=igb_uio
other_config:previous_driver=i40e -- set Interface em2 type=dpdk
options:dpdk-devargs=:19:00.1 other_config:pci_address=:19:00.1
other_config:driver=igb_uio other_config:previous_driver=i40e



[root@COMPUTE01 admin]# ovs-vsctl show

295dd51d-db1d-463a-b8f9-865580d1f1b1

Manager "ptcp:6640:127.0.0.1"

is_connected: true

Bridge br-vlan

Controller "tcp:127.0.0.1:6633"

is_connected: true

fail_mode: secure

datapath_type: netdev

Port bond-vlan

Interface "em1"

type: dpdk

options: {dpdk-devargs=":19:00.0"}

error: "Error attaching device ':19:00.0' to DPDK"

Interface "em2"

type: dpdk

options: {dpdk-devargs=":19:00.1"}

error: "Error attaching device ':19:00.1' to DPDK"



[root@COMPUTE01 admin]# cat /var/log/ovs-vswitchd.log

2021-03-08T10:34:01Z|00159|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.12.0

2021-03-08T10:34:04Z|00160|dpdk|INFO|EAL: PCI device :19:00.0 on NUMA
socket 0

2021-03-08T10:34:04Z|00161|dpdk|INFO|EAL: probe driver: 8086:1572
net_i40e

2021-03-08T10:34:04Z|00162|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x002676fc]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00163|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x0026770c]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00164|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x00267710]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00165|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x00267714]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00166|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x0026771c]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00167|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x00267724]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00168|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x0026774c]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00169|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x0026775c]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00170|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x00267760]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00171|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x00267764]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00172|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 changed global register [0x0026776c]. original:
0x, new: 0x

2021-03-08T10:34:04Z|00173|dpdk|WARN|i40e_check_write_global_reg(): i40e
device :19:00.0 

Re: [ovs-discuss] ovs-dpdk: can't set n_txq for dpdk interface

2021-03-08 Thread Kevin Traynor
On 07/03/2021 03:57, George Diamantopoulos wrote:
> Hello all,
> 
> It appears that setting the n_txq option has no effect for dpdk Interfaces,
> e.g.: "ovs-vsctl set Interface dpdk-eno1 options:n_txq=2".
> 
> n_txq appears to be hardcoded to "5" for my driver (BNX2X PMD), for some
> reason.
> 

It is not a user option for any NIC on the OVS-DPDK datapath afaik. The
number of requested txqs is derived from the number of pmd threads. It
is pmd threads +1, to give each of them and the main thread a dedicated
txq. This is why you see 5 txq with 4 pmds.
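
As a rough illustration of that relationship (a sketch, not taken from 
this thread): a 4-bit pmd-cpu-mask gives 4 PMD threads, so each DPDK port 
will request 4 + 1 = 5 txqs.

  $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xf
  # count the PMD threads that were actually created
  $ ovs-appctl dpif-netdev/pmd-stats-show | grep "pmd thread"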

> An additional problem is, the driver won't allow setting n_rxq to a lower
> value than n_txq, and with 5 being hardcoded for txq, it means I can only
> bring the interface up with 5 rxq as well. For 2 ports, that makes 10 PMD
> threads, and I don't want/need to dedicate 10 cores to PMD...
> 

This rxq part seems a DPDK PMD driver limitation for this NIC, but it is
not related to num of PMD threads. Num of RxQ and PMD threads are
independent from each other.

> I have tried running DPDK's testpmd with this driver, and it successfully
> starts with 1 rxq + 1 txq, so I believe the issue lies with OVS-DPDK.
> 

It's more an integration issue. OVS-DPDK sets the txq based on the num of
PMD threads; it is only a problem because this driver rejects that
number based on its limitation, which other NICs don't have. As
mentioned on irc, you could contact the driver maintainers about the
limitation.

> Indeed, while there is a call of smap_get_int() in lib/netdev-dpdk.c for
> n_rxq, there doesn't seem to be one for n_txq. I tried a quick hack to fix
> this by replicating dpdk_set_rxq_config() for txq, and calling it
> immediately after dpdk_set_rxq_config() is called in the code (it is called
> only once), but naturally that didn't work. Perhaps
> netdev_dpdk_set_tx_multiq() is involved here, but at that point my
> programming skills are beginning to fail me. Even more frustratingly, I
> can't seem to find where the dreaded number 5 is defined for transmit
> queues in the code...
> 
> Are there any known workarounds to this problem? Is it a bug? Thanks!
> 

I suggest setting n_rxq >= (pmd threads + 1) when adding the interface;
this should work around the driver requirements you've mentioned. In the
best case, RSS will actually use each Rxq; in the worst case a queue will
be polled by a PMD thread with no traffic, which won't use too many cycles.

> Best regards,
> George
> 
> 


Re: [ovs-discuss] ovs-dpdk: can't set n_txq for dpdk interface

2021-03-07 Thread George Diamantopoulos
Hello again,

To better show the effects of the issue described in my original message,
here's a pastebin link with some information following a clean installation
of debian testing: https://pastebin.com/d8vpvRZr

As you can see, ovs-vsctl does set the n_txq=6 option, but appctl doesn't
register the value in "requested_tx_queues", which seems to be hardcoded to
5 tx queues.
Setting n_txq and n_rxq to low values, such as 2, is not included in the
pastebin above, but as stated in my original message the result is a
failure to bring up the dpdk interfaces (due to rxq=2 being lower than the
hardcoded txq=5, since n_txq is always ignored).

On Sun, 7 Mar 2021 at 05:57, George Diamantopoulos 
wrote:

> Hello all,
>
> It appears that setting the n_txq option has no effect for dpdk
> Interfaces, e.g.: "ovs-vsctl set Interface dpdk-eno1 options:n_txq=2".
>
> n_txq appears to be hardcoded to "5" for my driver (BNX2X PMD), for some
> reason.
>
> An additional problem is, the driver won't allow setting n_rxq to a lower
> value than n_txq, and with 5 being hardcoded for txq, it means I can only
> bring the interface up with 5 rxq as well. For 2 ports, that makes 10 PMD
> threads, and I don't want/need to dedicate 10 cores to PMD...
>
> I have tried running DPDK's testpmd with this driver, and it successfully
> starts with 1 rxq + 1 txq, so I believe the issue lies with OVS-DPDK.
>
> Indeed, while there is a call of smap_get_int() in lib/netdev-dpdk.c for
> n_rxq, there doesn't seem to be one for n_txq. I tried a quick hack to fix
> this by replicating dpdk_set_rxq_config() for txq, and calling it
> immediately after dpdk_set_rxq_config() is called in the code (it is called
> only once), but naturally that didn't work. Perhaps
> netdev_dpdk_set_tx_multiq() is involved here, but at that point my
> programming skills are beginning to fail me. Even more frustratingly, I
> can't seem to find where the dreaded number 5 is defined for transmit
> queues in the code...
>
> Are there any known workarounds to this problem? Is it a bug? Thanks!
>
> Best regards,
> George
>


Re: [ovs-discuss] OVS DPDK performance with SELECT group

2019-11-26 Thread Gregory Rose


On 11/26/2019 7:41 AM, Rami Neiman wrote:


Hello,

I am using OVS DPDK 2.9.2 with the TRex traffic generator to simply 
forward the received traffic back to the traffic generator (i.e. 
ingress0->egress0, egress0->ingress0) over a 2-port 10G NIC.


The OVS throughput with this setup matches the traffic generator (all 
packets sent by TG are received). And we are getting around 2.5Mpps of 
traffic forwarded fine (we can probably go even higher, so that’s not 
a limit).


Our next goal is to have the TG traffic also mirrored over additional 
2 10G ports to a monitoring device and we use SELECT group to achieve 
load balancing of mirrored traffic. When we add the group as follows:




Putting all that on a single NIC might be overwhelming the PCIE 
bandwidth.  Something to check.


- Greg
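
If you want to rule that out (a sketch; replace the address with your 
NIC's PCI address), the negotiated PCIe link speed and width are visible 
with lspci:

  # compare LnkSta (negotiated) against LnkCap (what the card supports)
  $ sudo lspci -vvv -s <nic-pci-address> | grep -E 'LnkCap|LnkSta'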

ovs-ofctl -O OpenFlow13 add-group br0 
group_id=5,type=select,bucket=output:mirror0,bucket=output:mirror1


ovs-ofctl -O OpenFlow13 add-flow br0 "table=5, 
metadata=0,in_port=egress0,actions=group:5,output:ingress0"


ovs-ofctl -O OpenFlow13 add-flow br0 "table=5, 
metadata=0,in_port=ingress0,actions=group:5,output:egress0"


mirror0 and mirror1 being our mirror ports. The mirroring works as 
expected, however the OVS throughput drops to less than 500 Kpps (as 
reported by the traffic generator).


The ingress0 and egress0 (i.e. ports that receive traffic) show 
packets being dropped in large numbers. Adding more pmd cores and 
distributing Rx queues among them has no effect. Changing the hash 
fields of the SELECT group has no effect either.


My question is: is there a way to give more cores/memory or otherwise 
influence the hash calculation and SELECT group action to make it more 
performant? Less than 500Kpps seems like a very low number.


Just in case, here’s the output of the most important statistics commands:

ovs-vsctl --column statistics list interface egress0

statistics : {flow_director_filter_add_errors=0, 
flow_director_filter_remove_errors=0, mac_local_errors=17, 
mac_remote_errors=1, "rx_128_to_255_packets"=3936120, 
"rx_1_to_64_packets"=14561687, "rx_256_to_511_packets"=1624884, 
"rx_512_to_1023_packets"=2180436, "rx_65_to_127_packets"=21519189, 
rx_broadcast_packets=17, rx_bytes=23487692367, rx_crc_errors=0, 
rx_dropped=23759559, rx_errors=0, rx_fcoe_crc_errors=0, 
rx_fcoe_dropped=0, rx_fcoe_mbuf_allocation_errors=0, 
rx_fragment_errors=0, rx_illegal_byte_errors=0, rx_jabber_errors=0, 
rx_length_errors=0, rx_mac_short_packet_dropped=0, 
rx_management_dropped=0, rx_management_packets=0, 
rx_mbuf_allocation_errors=0, rx_missed_errors=23759559, 
rx_oversize_errors=0, rx_packets=39363905, 
"rx_priority0_dropped"=23759559, 
"rx_priority0_mbuf_allocation_errors"=0, "rx_priority1_dropped"=0, 
"rx_priority1_mbuf_allocation_errors"=0, "rx_priority2_dropped"=0, 
"rx_priority2_mbuf_allocation_errors"=0, "rx_priority3_dropped"=0, 
"rx_priority3_mbuf_allocation_errors"=0, "rx_priority4_dropped"=0, 
"rx_priority4_mbuf_allocation_errors"=0, "rx_priority5_dropped"=0, 
"rx_priority5_mbuf_allocation_errors"=0, "rx_priority6_dropped"=0, 
"rx_priority6_mbuf_allocation_errors"=0, "rx_priority7_dropped"=0, 
"rx_priority7_mbuf_allocation_errors"=0, rx_undersize_errors=0, 
"tx_128_to_255_packets"=1549647, "tx_1_to_64_packets"=10995089, 
"tx_256_to_511_packets"=7309468, "tx_512_to_1023_packets"=739062, 
"tx_65_to_127_packets"=7837579, tx_broadcast_packets=6, 
tx_bytes=28481732482, tx_dropped=0, tx_errors=0, 
tx_management_packets=0, tx_multicast_packets=0, tx_packets=43936201}


ovs-vsctl --column statistics list interface ingress0

statistics : {flow_director_filter_add_errors=0, 
flow_director_filter_remove_errors=0, mac_local_errors=37, 
mac_remote_errors=1, "rx_128_to_255_packets"=2778420, 
"rx_1_to_64_packets"=18198197, "rx_256_to_511_packets"=13168041, 
"rx_512_to_1023_packets"=886524, "rx_65_to_127_packets"=14853438, 
rx_broadcast_packets=17, rx_bytes=28481734408, rx_crc_errors=0, 
rx_dropped=22718779, rx_errors=0, rx_fcoe_crc_errors=0, 
rx_fcoe_dropped=0, rx_fcoe_mbuf_allocation_errors=0, 
rx_fragment_errors=0, rx_illegal_byte_errors=0, rx_jabber_errors=0, 
rx_length_errors=0, rx_mac_short_packet_dropped=0, 
rx_management_dropped=0, rx_management_packets=0, 
rx_mbuf_allocation_errors=0, rx_missed_errors=22718779, 
rx_oversize_errors=0, rx_packets=43936225, 
"rx_priority0_dropped"=22718779, 
"rx_priority0_mbuf_allocation_errors"=0, "rx_priority1_dropped"=0, 
"rx_priority1_mbuf_allocation_errors"=0, "rx_priority2_dropped"=0, 
"rx_priority2_mbuf_allocation_errors"=0, "rx_priority3_dropped"=0, 
"rx_priority3_mbuf_allocation_errors"=0, "rx_priority4_dropped"=0, 
"rx_priority4_mbuf_allocation_errors"=0, "rx_priority5_dropped"=0, 
"rx_priority5_mbuf_allocation_errors"=0, "rx_priority6_dropped"=0, 
"rx_priority6_mbuf_allocation_errors"=0, "rx_priority7_dropped"=0, 
"rx_priority7_mbuf_allocation_errors"=0, rx_undersize_errors=0, 
"tx_128_to_255_packets"=1793095, "tx_1_to_64_packets"=7027091, 
"tx_256_to_511_packets"=783763, 

Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-13 Thread Flavio Leitner
On Mon, 11 Nov 2019 14:45:13 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hi Flavio,
> 
> to follow up on this: I have just upgraded DPDK to 18.11 and OVS to
> 2.11 and I don't see this issue anymore. Also, I don't observe any
> "ring error" messages although the MTU is still at 9216 and OvS only
> has 1Gb of memory. Do you have an idea which change in DPDK/OvS might
> have resolved it?

There are lots and lots of changes in OVS and DPDK between those
versions and 2.11 is considered stable, so I am glad that you could
update and that it works for you.

fbl


Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-11 Thread Tobias Hofmann (tohofman) via discuss
Hi Flavio,

to follow up on this: I have just upgraded DPDK to 18.11 and OVS to 2.11 and I 
don't see this issue anymore. Also, I don't observe any "ring error" messages 
although the MTU is still at 9216 and OvS only has 1Gb of memory.
Do you have an idea which change in DPDK/OvS might have resolved it?

Thanks
Tobias

On 06.11.19, 14:44, "Tobias Hofmann (tohofman)"  wrote:

Hi Flavio,

the only error I saw in 'ovs-vsctl show' was related to the dpdk port. The 
other ports all came up fine.

Regarding the "ring error", I'm fine with having it, as long as DPDK is 
able to reserve the minimum amount of memory (which, after restarting OvS 
process is always the case).

Regards
Tobias

On 05.11.19, 21:07, "Flavio Leitner"  wrote:

On Tue, 5 Nov 2019 18:47:09 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hi Flavio,
> 
> thanks for the insights! Unfortunately, I don't know about the pdump
> and its relation to the ring.

pdump dumps packets from dpdk ports into rings/mempools, so that you
can inspect/use the traffic:
https://doc.dpdk.org/guides/howto/packet_capture_framework.html

But I looked at the dpdk sources now and I don't see it allocating any
memory when the library is initialized, so this is likely a red herring.

> Can you please specify where I can see that the port is not ready
> yet? Is that these three lines:
> 
> 2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged
> device (:08:0b.2)

The above shows the device is not ready/bound yet.


> 2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching
> device ':08:0b.2' to DPDK
> 2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set
> configuration (Invalid argument)
> 
> As far as I know, the ring allocation failure that you mentioned
> isn't necessarily a bad thing since it just indicates that DPDK
> reduces something internally (I can't remember what exactly it was)
> to support a high MTU with only 1GB of memory.

True for the memory allocated for DPDK ports. However, there is a
minimum which if it's not there, the mempool allocation will fail.

> I'm wondering now if it might help to change the timing of when
> openvswitch is started after a system reboot to prevent this problem
> as it only occurs after reboot. Do you think that this approach might
> fix the problem?

It will help to get the i40e port working, but that "ring error"
will continue as you see after restarting anyways.

I don't know the other interface types, maybe there is another
interface failing which is not in the log. Do you see any error
reported in 'ovs-vsctl show' after the restart?

fbl






Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-06 Thread Tobias Hofmann (tohofman) via discuss
Hi Flavio,

the only error I saw in 'ovs-vsctl show' was related to the dpdk port. The 
other ports all came up fine.

Regarding the "ring error", I'm fine with having it, as long as DPDK is able to 
reserve the minimum amount of memory (which, after restarting OvS process is 
always the case).

Regards
Tobias

On 05.11.19, 21:07, "Flavio Leitner"  wrote:

On Tue, 5 Nov 2019 18:47:09 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hi Flavio,
> 
> thanks for the insights! Unfortunately, I don't know about the pdump
> and its relation to the ring.

pdump dumps packets from dpdk ports into rings/mempools, so that you
can inspect/use the traffic:
https://doc.dpdk.org/guides/howto/packet_capture_framework.html

But I looked at the dpdk sources now and I don't see it allocating any
memory when the library is initialized, so this is likely a red herring.

> Can you please specify where I can see that the port is not ready
> yet? Is that these three lines:
> 
> 2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged
> device (:08:0b.2)

The above shows the device is not ready/bound yet.


> 2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching
> device ':08:0b.2' to DPDK
> 2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set
> configuration (Invalid argument)
> 
> As far as I know, the ring allocation failure that you mentioned
> isn't necessarily a bad thing since it just indicates that DPDK
> reduces something internally (I can't remember what exactly it was)
> to support a high MTU with only 1GB of memory.

True for the memory allocated for DPDK ports. However, there is a
minimum which if it's not there, the mempool allocation will fail.

> I'm wondering now if it might help to change the timing of when
> openvswitch is started after a system reboot to prevent this problem
> as it only occurs after reboot. Do you think that this approach might
> fix the problem?

It will help to get the i40e port working, but that "ring error"
will continue as you see after restarting anyways.

I don't know the other interface types, maybe there is another
interface failing which is not in the log. Do you see any error
reported in 'ovs-vsctl show' after the restart?

fbl




Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-05 Thread Flavio Leitner
On Tue, 5 Nov 2019 18:47:09 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hi Flavio,
> 
> thanks for the insights! Unfortunately, I don't know about the pdump
> and its relation to the ring.

pdump dumps packets from dpdk ports into rings/mempools, so that you
can inspect/use the traffic:
https://doc.dpdk.org/guides/howto/packet_capture_framework.html

But I looked at the dpdk sources now and I don't see it allocating any
memory when the library is initialized, so this is likely a red herring.

> Can you please specify where I can see that the port is not ready
> yet? Is that these three lines:
> 
> 2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged
> device (:08:0b.2)

The above shows the device is not ready/bound yet.


> 2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching
> device ':08:0b.2' to DPDK
> 2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set
> configuration (Invalid argument)
> 
> As far as I know, the ring allocation failure that you mentioned
> isn't necessarily a bad thing since it just indicates that DPDK
> reduces something internally (I can't remember what exactly it was)
> to support a high MTU with only 1GB of memory.

True for the memory allocated for DPDK ports. However, there is a
minimum which if it's not there, the mempool allocation will fail.

> I'm wondering now if it might help to change the timing of when
> openvswitch is started after a system reboot to prevent this problem
> as it only occurs after reboot. Do you think that this approach might
> fix the problem?

It will help to get the i40e port working, but that "ring error"
will continue as you see after restarting anyways.

I don't know the other interface types, maybe there is another
interface failing which is not in the log. Do you see any error
reported in 'ovs-vsctl show' after the restart?

fbl
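
If delaying the OVS start until the NIC is bound does turn out to help, 
one way to express that ordering is a systemd drop-in; a sketch only, 
assuming the binding is done by a hypothetical dpdk-bind.service and that 
ovs-vswitchd.service is the unit name on this system:

  # /etc/systemd/system/ovs-vswitchd.service.d/override.conf  (hypothetical unit names)
  [Unit]
  After=dpdk-bind.service
  Wants=dpdk-bind.service

  # then reload and restart:
  $ sudo systemctl daemon-reload && sudo systemctl restart ovs-vswitchd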


Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-05 Thread Tobias Hofmann (tohofman) via discuss
Hi Flavio,

thanks for the insights! Unfortunately, I don't know about the pdump and its 
relation to the ring.

Can you please specify where I can see that the port is not ready yet? Is that 
these three lines:

2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged device 
(:08:0b.2)
2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching device 
':08:0b.2' to DPDK
2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set configuration 
(Invalid argument)

As far as I know, the ring allocation failure that you mentioned isn't 
necessarily a bad thing since it just indicates that DPDK reduces something 
internally (I can't remember what exactly it was) to support a high MTU with 
only 1GB of memory.

I'm wondering now if it might help to change the timing of when openvswitch is 
started after a system reboot to prevent this problem as it only occurs after 
reboot. Do you think that this approach might fix the problem?

Thanks for your help
Tobias

On 05.11.19, 14:08, "Flavio Leitner"  wrote:

On Mon, 4 Nov 2019 19:12:36 +
"Tobias Hofmann (tohofman)"  wrote:

> Hi Flavio,
> 
> thanks for reaching out.
> 
> The DPDK options used in OvS are:
> 
> other_config:pmd-cpu-mask=0x202
> other_config:dpdk-socket-mem=1024
> other_config:dpdk-init=true
> 
> 
> For the dpdk port, we set:
> 
> type=dpdk
> options:dpdk-devargs=:08:0b.2
> external_ids:unused-drv=i40evf 
> mtu_request=9216

Looks good to me, though the CPU has changed compared to the log:
2019-11-02T14:51:26.940Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd
--socket-mem 1024 -c 0x0001

What I see from the logs is that OvS is trying to add a port, but the
port is not ready yet, so it continues with other things which
also consume memory. Unfortunately, by the time the i40e port is
ready there is no memory left.

When you restart, the i40e port is ready and the memory can be allocated.
However, the ring allocation fails due to lack of memory:

2019-11-02T14:51:27.808Z|00136|dpdk|ERR|RING: Cannot reserve memory
2019-11-02T14:51:27.974Z|00137|dpdk|ERR|RING: Cannot reserve memory

If you reduce the MTU, then the minimum amount of memory required for
the DPDK port reduces drastically, which explains why it works.

Also increasing the total memory to 2G helps because then the minimum
amount for 9216 MTU and the ring seems to be sufficient.

The ring seems to be related to pdump, is that the case?
I don't know off the top of my head.

In summary, looks like 1G is not enough for large MTU and pdump.
HTH,
fbl

> 
> 
> Please let me know if this is what you asked for.
> 
> Thanks
> Tobias
>   
> On 04.11.19, 15:50, "Flavio Leitner"  wrote:
> 
> 
> It would be nice if you share the DPDK options used in OvS.
> 
> On Sat, 2 Nov 2019 15:43:18 +
> "Tobias Hofmann \(tohofman\) via discuss"
>  wrote:
> 
> > Hello community,
> > 
> > My team and I observe a strange behavior on our system with the
> > creation of dpdk ports in OVS. We have a CentOS 7 system with
> > OpenvSwitch and only one single port of type ‘dpdk’ attached to
> > a bridge. The MTU size of the DPDK port is 9216 and the reserved
> > HugePages for OVS are 512 x 2MB-HugePages, e.g. 1GB of total
> > HugePage memory.
> > 
> > Setting everything up works fine, however after I reboot my
> > box, the dpdk port is in error state and I can observe this
> > line in the logs (full logs attached to the mail):
> > 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
> > memory pool for netdev dpdk-p0, with MTU 9216 on socket 0:
> > Invalid argument
> > 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
> > interface dpdk-p0 new configuration
> > 
> > I figured out that by restarting the openvswitch process, the
> > issue with the port is resolved and it is back in a working
> > state. However, as soon as I reboot the system a second time,
> > the port comes up in error state again. Now, we have also
> > observed a couple of other workarounds that I can’t really
> > explain why they help:
> > 
> >   *   When there is also a VM deployed on the system that is
> > using ports of type ‘dpdkvhostuserclient’, we never see any
> > issues like that. (MTU size of the VM ports is 9216 by the way)
> >   *   When we increase the HugePage memory for OVS to 2GB, we
> > also don’t see any issues.
> >   *   Lowering the MTU size of the ‘dpdk’ type port to 1500 also
> > helps to prevent this issue.
> > 
> > Can anyone explain 

Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-05 Thread Flavio Leitner
On Mon, 4 Nov 2019 19:12:36 +
"Tobias Hofmann (tohofman)"  wrote:

> Hi Flavio,
> 
> thanks for reaching out.
> 
> The DPDK options used in OvS are:
> 
> other_config:pmd-cpu-mask=0x202
> other_config:dpdk-socket-mem=1024
> other_config:dpdk-init=true
> 
> 
> For the dpdk port, we set:
> 
> type=dpdk
> options:dpdk-devargs=:08:0b.2
> external_ids:unused-drv=i40evf 
> mtu_request=9216

Looks good to me, though the CPU has changed compared to the log:
2019-11-02T14:51:26.940Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd
--socket-mem 1024 -c 0x0001

What I see from the logs is that OvS is trying to add a port, but the
port is not ready yet, so it continues with other things which
also consume memory. Unfortunately, by the time the i40e port is
ready there is no memory left.

When you restart, the i40e port is ready and the memory can be allocated.
However, the ring allocation fails due to lack of memory:

2019-11-02T14:51:27.808Z|00136|dpdk|ERR|RING: Cannot reserve memory
2019-11-02T14:51:27.974Z|00137|dpdk|ERR|RING: Cannot reserve memory

If you reduce the MTU, then the minimum amount of memory required for
the DPDK port reduces drastically, which explains why it works.

Also increasing the total memory to 2G helps because then the minimum
amount for 9216 MTU and the ring seems to be sufficient.

The ring seems to be related to pdump, is that the case?
I don't know off the top of my head.

In summary, looks like 1G is not enough for large MTU and pdump.
HTH,
fbl
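
For reference, the two knobs involved are the host hugepage reservation 
and the per-socket memory OVS hands to DPDK; a sketch only (2G is simply 
the value reported to work in this thread, and the exact minimum for MTU 
9216 plus the pdump ring depends on the number of ports and queues):

  # reserve 1024 x 2MB hugepages = 2GB on the host
  $ sudo sysctl -w vm.nr_hugepages=1024
  # and let OVS/DPDK use 2GB on socket 0
  $ ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=2048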

> 
> 
> Please let me know if this is what you asked for.
> 
> Thanks
> Tobias
>   
> On 04.11.19, 15:50, "Flavio Leitner"  wrote:
> 
> 
> It would be nice if you share the DPDK options used in OvS.
> 
> On Sat, 2 Nov 2019 15:43:18 +
> "Tobias Hofmann \(tohofman\) via discuss"
>  wrote:
> 
> > Hello community,
> > 
> > My team and I observe a strange behavior on our system with the
> > creation of dpdk ports in OVS. We have a CentOS 7 system with
> > OpenvSwitch and only one single port of type ‘dpdk’ attached to
> > a bridge. The MTU size of the DPDK port is 9216 and the reserved
> > HugePages for OVS are 512 x 2MB-HugePages, e.g. 1GB of total
> > HugePage memory.
> > 
> > Setting everything up works fine, however after I reboot my
> > box, the dpdk port is in error state and I can observe this
> > line in the logs (full logs attached to the mail):
> > 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
> > memory pool for netdev dpdk-p0, with MTU 9216 on socket 0:
> > Invalid argument
> > 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
> > interface dpdk-p0 new configuration
> > 
> > I figured out that by restarting the openvswitch process, the
> > issue with the port is resolved and it is back in a working
> > state. However, as soon as I reboot the system a second time,
> > the port comes up in error state again. Now, we have also
> > observed a couple of other workarounds that I can’t really
> > explain why they help:
> > 
> >   *   When there is also a VM deployed on the system that is
> > using ports of type ‘dpdkvhostuserclient’, we never see any
> > issues like that. (MTU size of the VM ports is 9216 by the way)
> >   *   When we increase the HugePage memory for OVS to 2GB, we
> > also don’t see any issues.
> >   *   Lowering the MTU size of the ‘dpdk’ type port to 1500 also
> > helps to prevent this issue.
> > 
> > Can anyone explain this?
> > 
> > We’re using the following versions:
> > Openvswitch: 2.9.3
> > DPDK: 17.11.5
> > 
> > Appreciate any help!
> > Tobias  
> 
> 
> 



Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-04 Thread Tobias Hofmann (tohofman) via discuss
Hi Flavio,

thanks for reaching out.

The DPDK options used in OvS are:

other_config:pmd-cpu-mask=0x202
other_config:dpdk-socket-mem=1024
other_config:dpdk-init=true


For the dpdk port, we set:

type=dpdk
options:dpdk-devargs=:08:0b.2
external_ids:unused-drv=i40evf 
mtu_request=9216


Please let me know if this is what you asked for.

Thanks
Tobias

On 04.11.19, 15:50, "Flavio Leitner"  wrote:


It would be nice if you share the DPDK options used in OvS.

On Sat, 2 Nov 2019 15:43:18 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hello community,
> 
> My team and I observe a strange behavior on our system with the
> creation of dpdk ports in OVS. We have a CentOS 7 system with
> OpenvSwitch and only one single port of type ‘dpdk’ attached to a
> bridge. The MTU size of the DPDK port is 9216 and the reserved
> HugePages for OVS are 512 x 2MB-HugePages, e.g. 1GB of total HugePage
> memory.
> 
> Setting everything up works fine, however after I reboot my box, the
> dpdk port is in error state and I can observe this line in the logs
> (full logs attached to the mail):
> 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
> memory pool for netdev dpdk-p0, with MTU 9216 on socket 0: Invalid
> argument 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
> interface dpdk-p0 new configuration
> 
> I figured out that by restarting the openvswitch process, the issue
> with the port is resolved and it is back in a working state. However,
> as soon as I reboot the system a second time, the port comes up in
> error state again. Now, we have also observed a couple of other
> workarounds that I can’t really explain why they help:
> 
>   *   When there is also a VM deployed on the system that is using
> ports of type ‘dpdkvhostuserclient’, we never see any issues like
> that. (MTU size of the VM ports is 9216 by the way)
>   *   When we increase the HugePage memory for OVS to 2GB, we also
> don’t see any issues.
>   *   Lowering the MTU size of the ‘dpdk’ type port to 1500 also
> helps to prevent this issue.
> 
> Can anyone explain this?
> 
> We’re using the following versions:
> Openvswitch: 2.9.3
> DPDK: 17.11.5
> 
> Appreciate any help!
> Tobias





Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-04 Thread Flavio Leitner

It would be nice if you share the DPDK options used in OvS.

On Sat, 2 Nov 2019 15:43:18 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hello community,
> 
> My team and I observe a strange behavior on our system with the
> creation of dpdk ports in OVS. We have a CentOS 7 system with
> OpenvSwitch and only one single port of type ‘dpdk’ attached to a
> bridge. The MTU size of the DPDK port is 9216 and the reserved
> HugePages for OVS are 512 x 2MB-HugePages, e.g. 1GB of total HugePage
> memory.
> 
> Setting everything up works fine, however after I reboot my box, the
> dpdk port is in error state and I can observe this line in the logs
> (full logs attached to the mail):
> 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
> memory pool for netdev dpdk-p0, with MTU 9216 on socket 0: Invalid
> argument 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
> interface dpdk-p0 new configuration
> 
> I figured out that by restarting the openvswitch process, the issue
> with the port is resolved and it is back in a working state. However,
> as soon as I reboot the system a second time, the port comes up in
> error state again. Now, we have also observed a couple of other
> workarounds that I can’t really explain why they help:
> 
>   *   When there is also a VM deployed on the system that is using
> ports of type ‘dpdkvhostuserclient’, we never see any issues like
> that. (MTU size of the VM ports is 9216 by the way)
>   *   When we increase the HugePage memory for OVS to 2GB, we also
> don’t see any issues.
>   *   Lowering the MTU size of the ‘dpdk’ type port to 1500 also
> helps to prevent this issue.
> 
> Can anyone explain this?
> 
> We’re using the following versions:
> Openvswitch: 2.9.3
> DPDK: 17.11.5
> 
> Appreciate any help!
> Tobias



Re: [ovs-discuss] OVS+DPDK on CentOS 7

2019-07-12 Thread Nicolas Vazquez
Thanks Ben and Alon for your help.

I want to share the solution in case someone else hits the same issue:
the dpdkvhostuser type requires the bridge to be created with
datapath_type=netdev, as stated in the documentation:
http://docs.openvswitch.org/en/latest/howto/dpdk/
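
In other words (a sketch using the bridge and port names from this thread):

  # switch the bridge to the userspace (netdev) datapath, then re-add the vhost-user port
  $ ovs-vsctl set bridge cloudbr0 datapath_type=netdev
  $ ovs-vsctl add-port cloudbr0 test -- set Interface test type=dpdkvhostuser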

Regards,
Nicolas Vazquez

El jue., 11 de julio de 2019 17:39, Ben Pfaff  escribió:

> I forgot to finish my thought.
>
> For the most recent version of OVS, if DPDK is supported, so is
> dpdkvhostuser (and dpdkvhostuserclient, which is not deprecated).  Maybe
> dpdkvhostuser was optional in 2.9.
>
> On Thu, Jul 11, 2019 at 01:22:20PM -0700, Ben Pfaff wrote:
> > It looks like dpdkvhostuser ports are deprecated, although they should
> > still work.
> >
> > On Thu, Jul 11, 2019 at 12:35:53AM -0300, Nicolas Vazquez wrote:
> > > My mistake :)
> > >
> > > Been checking the /var/run/openvswitch/ovs-vswitchd.log and found this
> is
> > > the error:
> > >
> > > 2019-07-11T03:11:31.789Z|00226|netdev_dpdk|INFO|Socket
> > > /var/run/openvswitch/test created for vhost-user port test
> > > 2019-07-11T03:11:31.790Z|00227|dpif_netlink|WARN|system@ovs-system:
> cannot
> > > create port `test' because it has unsupported type `dpdkvhostuser'
> > >
> > > I thought maybe DPDK was not being initialized properly but
> surprisingly
> > > noticed these earlier in the logs:
> > >
> > > 2019-07-10T18:56:02.183Z|00039|dpdk|INFO|DPDK Enabled - initializing...
> > > 2019-07-10T18:56:02.183Z|00040|dpdk|INFO|No vhost-sock-dir provided -
> > > defaulting to /var/run/openvswitch
> > > 2019-07-10T18:56:02.183Z|00041|dpdk|INFO|EAL ARGS: ovs-vswitchd
> > > --socket-mem 1024,0 -c 0x0001
> > > 2019-07-10T18:56:03.846Z|00042|dpdk|INFO|DPDK pdump packet capture
> enabled
> > > 2019-07-10T18:56:03.851Z|00043|dpdk|INFO|DPDK Enabled - initialized
> > >
> > > El mié., 10 jul. 2019 a las 16:38, Ben Pfaff () escribió:
> > >
> > > > On Wed, Jul 10, 2019 at 04:16:14PM -0300, Nicolas Vazquez wrote:
> > > > > Thank you both.
> > > > >
> > > > > I tried replacing the OVS packages. Have bind NIC to DPDK support:
> > > > > # dpdk-devbind --status
> > > > >
> > > > > Network devices using DPDK-compatible driver
> > > > > 
> > > > > :02:04.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
> > > > > drv=igb_uio unused=
> > > > >
> > > > > However whenever I try to add a DPDK vHost user port it fails:
> > > > > # ovs-vsctl add-port cloudbr0 test -- set Interface test
> > > > type=dpdkvhostuser
> > > > > ovs-vsctl: Error detected while setting up 'test': could not add
> network
> > > > > device test to ofproto (Invalid argument).  See ovs-vswitchd log
> for
> > > > > details.
> > > > > ovs-vsctl: The default log directory is "/var/log/openvswitch".
> > > >
> > > > Did you look in the log?
> > > >
> > > > The above does *not* mean to run "ovs-vswitchd log", which is not
> going
> > > > to be helpful.
> > > >
> > > > > # ovs-vswitchd log
> > > > > 2019-07-10T19:11:29Z|1|ovs_numa|INFO|Discovered 3 CPU cores on
> NUMA
> > > > > node 0
> > > > > 2019-07-10T19:11:29Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes
> and 3
> > > > CPU
> > > > > cores
> > > > > 2019-07-10T19:11:29Z|3|reconnect|INFO|log: connecting...
> > > > > 2019-07-10T19:11:29Z|4|reconnect|INFO|log: connection attempt
> failed
> > > > > (Address family not supported by protocol)
> > > > > 2019-07-10T19:11:29Z|5|reconnect|INFO|log: waiting 1 seconds
> before
> > > > > reconnect
> > > > > 2019-07-10T19:11:30Z|6|reconnect|INFO|log: connecting...
> > > > > 2019-07-10T19:11:30Z|7|reconnect|INFO|log: connection attempt
> failed
> > > > > (Address family not supported by protocol)
> > > > > 2019-07-10T19:11:30Z|8|reconnect|INFO|log: waiting 2 seconds
> before
> > > > > reconnect
> > > >
> > > >
>


Re: [ovs-discuss] OVS+DPDK on CentOS 7

2019-07-11 Thread Ben Pfaff
I forgot to finish my thought.

For the most recent version of OVS, if DPDK is supported, so is
dpdkvhostuser (and dpdkvhostuserclient, which is not deprecated).  Maybe
dpdkvhostuser was optional in 2.9.
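
For anyone moving off the deprecated type, a client-mode port just needs a
server socket path that the VM side (e.g. QEMU) will create; a minimal
sketch with placeholder bridge, port and socket names:

  # OVS connects as a client to the socket created by the VM/QEMU side
  $ ovs-vsctl add-port br0 vhuclient0 -- set Interface vhuclient0 \
        type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhuclient0.sock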

On Thu, Jul 11, 2019 at 01:22:20PM -0700, Ben Pfaff wrote:
> It looks like dpdkvhostuser ports are deprecated, although they should
> still work.
> 
> On Thu, Jul 11, 2019 at 12:35:53AM -0300, Nicolas Vazquez wrote:
> > My mistake :)
> > 
> > Been checking the /var/run/openvswitch/ovs-vswitchd.log and found this is
> > the error:
> > 
> > 2019-07-11T03:11:31.789Z|00226|netdev_dpdk|INFO|Socket
> > /var/run/openvswitch/test created for vhost-user port test
> > 2019-07-11T03:11:31.790Z|00227|dpif_netlink|WARN|system@ovs-system: cannot
> > create port `test' because it has unsupported type `dpdkvhostuser'
> > 
> > I thought maybe DPDK was not being initialized properly but surprisingly
> > noticed these earlier in the logs:
> > 
> > 2019-07-10T18:56:02.183Z|00039|dpdk|INFO|DPDK Enabled - initializing...
> > 2019-07-10T18:56:02.183Z|00040|dpdk|INFO|No vhost-sock-dir provided -
> > defaulting to /var/run/openvswitch
> > 2019-07-10T18:56:02.183Z|00041|dpdk|INFO|EAL ARGS: ovs-vswitchd
> > --socket-mem 1024,0 -c 0x0001
> > 2019-07-10T18:56:03.846Z|00042|dpdk|INFO|DPDK pdump packet capture enabled
> > 2019-07-10T18:56:03.851Z|00043|dpdk|INFO|DPDK Enabled - initialized
> > 
> > El mié., 10 jul. 2019 a las 16:38, Ben Pfaff () escribió:
> > 
> > > On Wed, Jul 10, 2019 at 04:16:14PM -0300, Nicolas Vazquez wrote:
> > > > Thank you both.
> > > >
> > > > I tried replacing the OVS packages. Have bind NIC to DPDK support:
> > > > # dpdk-devbind --status
> > > >
> > > > Network devices using DPDK-compatible driver
> > > > 
> > > > :02:04.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
> > > > drv=igb_uio unused=
> > > >
> > > > However whenever I try to add a DPDK vHost user port it fails:
> > > > # ovs-vsctl add-port cloudbr0 test -- set Interface test
> > > type=dpdkvhostuser
> > > > ovs-vsctl: Error detected while setting up 'test': could not add network
> > > > device test to ofproto (Invalid argument).  See ovs-vswitchd log for
> > > > details.
> > > > ovs-vsctl: The default log directory is "/var/log/openvswitch".
> > >
> > > Did you look in the log?
> > >
> > > The above does *not* mean to run "ovs-vswitchd log", which is not going
> > > to be helpful.
> > >
> > > > # ovs-vswitchd log
> > > > 2019-07-10T19:11:29Z|1|ovs_numa|INFO|Discovered 3 CPU cores on NUMA
> > > > node 0
> > > > 2019-07-10T19:11:29Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes and 3
> > > CPU
> > > > cores
> > > > 2019-07-10T19:11:29Z|3|reconnect|INFO|log: connecting...
> > > > 2019-07-10T19:11:29Z|4|reconnect|INFO|log: connection attempt failed
> > > > (Address family not supported by protocol)
> > > > 2019-07-10T19:11:29Z|5|reconnect|INFO|log: waiting 1 seconds before
> > > > reconnect
> > > > 2019-07-10T19:11:30Z|6|reconnect|INFO|log: connecting...
> > > > 2019-07-10T19:11:30Z|7|reconnect|INFO|log: connection attempt failed
> > > > (Address family not supported by protocol)
> > > > 2019-07-10T19:11:30Z|8|reconnect|INFO|log: waiting 2 seconds before
> > > > reconnect
> > >
> > >


Re: [ovs-discuss] OVS+DPDK on CentOS 7

2019-07-11 Thread Ben Pfaff
It looks like dpdkvhostuser ports are deprecated, although they should
still work.

On Thu, Jul 11, 2019 at 12:35:53AM -0300, Nicolas Vazquez wrote:
> My mistake :)
> 
> Been checking the /var/run/openvswitch/ovs-vswitchd.log and found this is
> the error:
> 
> 2019-07-11T03:11:31.789Z|00226|netdev_dpdk|INFO|Socket
> /var/run/openvswitch/test created for vhost-user port test
> 2019-07-11T03:11:31.790Z|00227|dpif_netlink|WARN|system@ovs-system: cannot
> create port `test' because it has unsupported type `dpdkvhostuser'
> 
> I thought maybe DPDK was not being initialized properly but surprisingly
> noticed these earlier in the logs:
> 
> 2019-07-10T18:56:02.183Z|00039|dpdk|INFO|DPDK Enabled - initializing...
> 2019-07-10T18:56:02.183Z|00040|dpdk|INFO|No vhost-sock-dir provided -
> defaulting to /var/run/openvswitch
> 2019-07-10T18:56:02.183Z|00041|dpdk|INFO|EAL ARGS: ovs-vswitchd
> --socket-mem 1024,0 -c 0x0001
> 2019-07-10T18:56:03.846Z|00042|dpdk|INFO|DPDK pdump packet capture enabled
> 2019-07-10T18:56:03.851Z|00043|dpdk|INFO|DPDK Enabled - initialized
> 
> El mié., 10 jul. 2019 a las 16:38, Ben Pfaff () escribió:
> 
> > On Wed, Jul 10, 2019 at 04:16:14PM -0300, Nicolas Vazquez wrote:
> > > Thank you both.
> > >
> > > I tried replacing the OVS packages. Have bind NIC to DPDK support:
> > > # dpdk-devbind --status
> > >
> > > Network devices using DPDK-compatible driver
> > > 
> > > :02:04.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
> > > drv=igb_uio unused=
> > >
> > > However whenever I try to add a DPDK vHost user port it fails:
> > > # ovs-vsctl add-port cloudbr0 test -- set Interface test
> > type=dpdkvhostuser
> > > ovs-vsctl: Error detected while setting up 'test': could not add network
> > > device test to ofproto (Invalid argument).  See ovs-vswitchd log for
> > > details.
> > > ovs-vsctl: The default log directory is "/var/log/openvswitch".
> >
> > Did you look in the log?
> >
> > The above does *not* mean to run "ovs-vswitchd log", which is not going
> > to be helpful.
> >
> > > # ovs-vswitchd log
> > > 2019-07-10T19:11:29Z|1|ovs_numa|INFO|Discovered 3 CPU cores on NUMA
> > > node 0
> > > 2019-07-10T19:11:29Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes and 3
> > CPU
> > > cores
> > > 2019-07-10T19:11:29Z|3|reconnect|INFO|log: connecting...
> > > 2019-07-10T19:11:29Z|4|reconnect|INFO|log: connection attempt failed
> > > (Address family not supported by protocol)
> > > 2019-07-10T19:11:29Z|5|reconnect|INFO|log: waiting 1 seconds before
> > > reconnect
> > > 2019-07-10T19:11:30Z|6|reconnect|INFO|log: connecting...
> > > 2019-07-10T19:11:30Z|7|reconnect|INFO|log: connection attempt failed
> > > (Address family not supported by protocol)
> > > 2019-07-10T19:11:30Z|8|reconnect|INFO|log: waiting 2 seconds before
> > > reconnect
> >
> >


Re: [ovs-discuss] OVS+DPDK on CentOS 7

2019-07-10 Thread Nicolas Vazquez
My mistake :)

Been checking the /var/run/openvswitch/ovs-vswitchd.log and found this is
the error:

2019-07-11T03:11:31.789Z|00226|netdev_dpdk|INFO|Socket
/var/run/openvswitch/test created for vhost-user port test
2019-07-11T03:11:31.790Z|00227|dpif_netlink|WARN|system@ovs-system: cannot
create port `test' because it has unsupported type `dpdkvhostuser'

I thought maybe DPDK was not being initialized properly but surprisingly
noticed these earlier in the logs:

2019-07-10T18:56:02.183Z|00039|dpdk|INFO|DPDK Enabled - initializing...
2019-07-10T18:56:02.183Z|00040|dpdk|INFO|No vhost-sock-dir provided -
defaulting to /var/run/openvswitch
2019-07-10T18:56:02.183Z|00041|dpdk|INFO|EAL ARGS: ovs-vswitchd
--socket-mem 1024,0 -c 0x0001
2019-07-10T18:56:03.846Z|00042|dpdk|INFO|DPDK pdump packet capture enabled
2019-07-10T18:56:03.851Z|00043|dpdk|INFO|DPDK Enabled - initialized

El mié., 10 jul. 2019 a las 16:38, Ben Pfaff () escribió:

> On Wed, Jul 10, 2019 at 04:16:14PM -0300, Nicolas Vazquez wrote:
> > Thank you both.
> >
> > I tried replacing the OVS packages. Have bind NIC to DPDK support:
> > # dpdk-devbind --status
> >
> > Network devices using DPDK-compatible driver
> > 
> > :02:04.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
> > drv=igb_uio unused=
> >
> > However whenever I try to add a DPDK vHost user port it fails:
> > # ovs-vsctl add-port cloudbr0 test -- set Interface test
> type=dpdkvhostuser
> > ovs-vsctl: Error detected while setting up 'test': could not add network
> > device test to ofproto (Invalid argument).  See ovs-vswitchd log for
> > details.
> > ovs-vsctl: The default log directory is "/var/log/openvswitch".
>
> Did you look in the log?
>
> The above does *not* mean to run "ovs-vswitchd log", which is not going
> to be helpful.
>
> > # ovs-vswitchd log
> > 2019-07-10T19:11:29Z|1|ovs_numa|INFO|Discovered 3 CPU cores on NUMA
> > node 0
> > 2019-07-10T19:11:29Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes and 3
> CPU
> > cores
> > 2019-07-10T19:11:29Z|3|reconnect|INFO|log: connecting...
> > 2019-07-10T19:11:29Z|4|reconnect|INFO|log: connection attempt failed
> > (Address family not supported by protocol)
> > 2019-07-10T19:11:29Z|5|reconnect|INFO|log: waiting 1 seconds before
> > reconnect
> > 2019-07-10T19:11:30Z|6|reconnect|INFO|log: connecting...
> > 2019-07-10T19:11:30Z|7|reconnect|INFO|log: connection attempt failed
> > (Address family not supported by protocol)
> > 2019-07-10T19:11:30Z|8|reconnect|INFO|log: waiting 2 seconds before
> > reconnect
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK on CentOS 7

2019-07-10 Thread Ben Pfaff
On Wed, Jul 10, 2019 at 04:16:14PM -0300, Nicolas Vazquez wrote:
> Thank you both.
> 
> I tried replacing the OVS packages. Have bind NIC to DPDK support:
> # dpdk-devbind --status
> 
> Network devices using DPDK-compatible driver
> 
> :02:04.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
> drv=igb_uio unused=
> 
> However whenever I try to add a DPDK vHost user port it fails:
> # ovs-vsctl add-port cloudbr0 test -- set Interface test type=dpdkvhostuser
> ovs-vsctl: Error detected while setting up 'test': could not add network
> device test to ofproto (Invalid argument).  See ovs-vswitchd log for
> details.
> ovs-vsctl: The default log directory is "/var/log/openvswitch".

Did you look in the log?

The above does *not* mean to run "ovs-vswitchd log", which is not going
to be helpful.

> # ovs-vswitchd log
> 2019-07-10T19:11:29Z|1|ovs_numa|INFO|Discovered 3 CPU cores on NUMA
> node 0
> 2019-07-10T19:11:29Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes and 3 CPU
> cores
> 2019-07-10T19:11:29Z|3|reconnect|INFO|log: connecting...
> 2019-07-10T19:11:29Z|4|reconnect|INFO|log: connection attempt failed
> (Address family not supported by protocol)
> 2019-07-10T19:11:29Z|5|reconnect|INFO|log: waiting 1 seconds before
> reconnect
> 2019-07-10T19:11:30Z|6|reconnect|INFO|log: connecting...
> 2019-07-10T19:11:30Z|7|reconnect|INFO|log: connection attempt failed
> (Address family not supported by protocol)
> 2019-07-10T19:11:30Z|8|reconnect|INFO|log: waiting 2 seconds before
> reconnect

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK on CentOS 7

2019-07-10 Thread Nicolas Vazquez
Thank you both.

I tried replacing the OVS packages and have bound the NIC to the DPDK-compatible driver:
# dpdk-devbind --status

Network devices using DPDK-compatible driver

:02:04.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
drv=igb_uio unused=

However, whenever I try to add a DPDK vhost-user port, it fails:
# ovs-vsctl add-port cloudbr0 test -- set Interface test type=dpdkvhostuser
ovs-vsctl: Error detected while setting up 'test': could not add network
device test to ofproto (Invalid argument).  See ovs-vswitchd log for
details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".

# ovs-vswitchd log
2019-07-10T19:11:29Z|1|ovs_numa|INFO|Discovered 3 CPU cores on NUMA
node 0
2019-07-10T19:11:29Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes and 3 CPU
cores
2019-07-10T19:11:29Z|3|reconnect|INFO|log: connecting...
2019-07-10T19:11:29Z|4|reconnect|INFO|log: connection attempt failed
(Address family not supported by protocol)
2019-07-10T19:11:29Z|5|reconnect|INFO|log: waiting 1 seconds before
reconnect
2019-07-10T19:11:30Z|6|reconnect|INFO|log: connecting...
2019-07-10T19:11:30Z|7|reconnect|INFO|log: connection attempt failed
(Address family not supported by protocol)
2019-07-10T19:11:30Z|8|reconnect|INFO|log: waiting 2 seconds before
reconnect

El mié., 10 jul. 2019 a las 12:55, Ben Pfaff () escribió:

> On Wed, Jul 10, 2019 at 10:30:12AM -0300, Nicolas Vazquez wrote:
> > I've been trying to extend an existing OVS installation on CentOS 7 to
> > support DPDK, but could not find a way without a clean install. Is it
> > possible to configure OVS with DPDK without reinstalling it? [1]
> >
> > # ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> > # ovs-vsctl get Open_vSwitch . dpdk_initialized
> > ovs-vsctl: Open_vSwitch does not contain a column whose name matches
> > "dpdk_initialized"
>
> It sounds like you might need to upgrade the database.  The OVS
> installation guide has instructions.
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK on CentOS 7

2019-07-10 Thread Ben Pfaff
On Wed, Jul 10, 2019 at 10:30:12AM -0300, Nicolas Vazquez wrote:
> I've been trying to extend an existing OVS installation on CentOS 7 to
> support DPDK, but could not find a way without a clean install. Is it
> possible to configure OVS with DPDK without reinstalling it? [1]
> 
> # ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> # ovs-vsctl get Open_vSwitch . dpdk_initialized
> ovs-vsctl: Open_vSwitch does not contain a column whose name matches
> "dpdk_initialized"

It sounds like you might need to upgrade the database.  The OVS
installation guide has instructions.
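
(A minimal sketch of that upgrade on a packaged install, with paths assumed and worth checking against your distribution, is:

# systemctl stop openvswitch
# ovsdb-tool convert /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
# systemctl start openvswitch

On many packaged installs simply restarting the openvswitch service after the package upgrade runs the same schema upgrade via ovs-ctl.)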
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK on CentOS 7

2019-07-10 Thread Alon Dotan
I don't think that is a DPDK-enabled package...
you need to get OVS from the OpenStack repo...
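
(On CentOS 7, one way to pull a DPDK-enabled build, assuming the RDO/CentOS OpenStack repos are acceptable in your environment and with the release name here only as an example, is roughly:

# yum install -y centos-release-openstack-stein
# yum install -y openvswitch

The RDO openvswitch builds have typically been compiled with DPDK support.)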

From: Nicolas Vazquez 
Sent: Wednesday, July 10, 2019 5:05 PM
To: Alon Dotan
Cc: ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] OVS+DPDK on CentOS 7

Version is 2.9.2 and package is 'openvswitch-2.9.2-1.el7' from CentOS repo

El mié., 10 jul. 2019 a las 10:54, Alon Dotan 
(mailto:alon.do...@bullguard.com>>) escribió:
which version of ovs you are using?
from which repo? there is issue with EL7 packaging and ovs dpdk

From: 
ovs-discuss-boun...@openvswitch.org<mailto:ovs-discuss-boun...@openvswitch.org> 
mailto:ovs-discuss-boun...@openvswitch.org>>
 on behalf of Nicolas Vazquez 
mailto:nicovazque...@gmail.com>>
Sent: Wednesday, July 10, 2019 4:30 PM
To: ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>
Subject: [ovs-discuss] OVS+DPDK on CentOS 7

Hi all,

I've been trying to extend an existing OVS installation on CentOS 7 to support 
DPDK, but could not find a way without a clean install. Is it possible to 
configure OVS with DPDK without reinstalling it? [1]

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
# ovs-vsctl get Open_vSwitch . dpdk_initialized
ovs-vsctl: Open_vSwitch does not contain a column whose name matches 
"dpdk_initialized"

[1] http://docs.openvswitch.org/en/latest/intro/install/dpdk/#install-ovs

Regards,
Nicolas Vazquez


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK on CentOS 7

2019-07-10 Thread Nicolas Vazquez
Version is 2.9.2 and package is 'openvswitch-2.9.2-1.el7' from CentOS repo

El mié., 10 jul. 2019 a las 10:54, Alon Dotan ()
escribió:

> which version of ovs you are using?
> from which repo? there is issue with EL7 packaging and ovs dpdk
> --
> *From:* ovs-discuss-boun...@openvswitch.org <
> ovs-discuss-boun...@openvswitch.org> on behalf of Nicolas Vazquez <
> nicovazque...@gmail.com>
> *Sent:* Wednesday, July 10, 2019 4:30 PM
> *To:* ovs-discuss@openvswitch.org
> *Subject:* [ovs-discuss] OVS+DPDK on CentOS 7
>
> Hi all,
>
> I've been trying to extend an existing OVS installation on CentOS 7 to
> support DPDK, but could not find a way without a clean install. Is it
> possible to configure OVS with DPDK without reinstalling it? [1]
>
> # ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> # ovs-vsctl get Open_vSwitch . dpdk_initialized
> ovs-vsctl: Open_vSwitch does not contain a column whose name matches
> "dpdk_initialized"
>
> [1] http://docs.openvswitch.org/en/latest/intro/install/dpdk/#install-ovs
>
> Regards,
> Nicolas Vazquez
>
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK on CentOS 7

2019-07-10 Thread Alon Dotan
Which version of OVS are you using, and from which repo? There is an issue
with the EL7 packaging and OVS-DPDK.

From: ovs-discuss-boun...@openvswitch.org  
on behalf of Nicolas Vazquez 
Sent: Wednesday, July 10, 2019 4:30 PM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] OVS+DPDK on CentOS 7

Hi all,

I've been trying to extend an existing OVS installation on CentOS 7 to support 
DPDK, but could not find a way without a clean install. Is it possible to 
configure OVS with DPDK without reinstalling it? [1]

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
# ovs-vsctl get Open_vSwitch . dpdk_initialized
ovs-vsctl: Open_vSwitch does not contain a column whose name matches 
"dpdk_initialized"

[1] http://docs.openvswitch.org/en/latest/intro/install/dpdk/#install-ovs

Regards,
Nicolas Vazquez


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK giving lower throughput then Native OVS

2019-05-07 Thread Harsh Gondaliya
So is there any way to have TSO work with OVS-DPDK? Are there any patches
that can be applied? I ask because I followed this Intel page, and the author
was able to get 2.5x higher throughput for OVS-DPDK compared to native OVS:
https://software.intel.com/en-us/articles/set-up-open-vswitch-with-dpdk-on-ubuntu-server
In fact, this topic has been discussed quite a lot in the past and many
patches have been uploaded. Are these patches already applied to OVS 2.11,
or do we need to apply them separately?

Being a student and a beginner with Linux itself, I do not know how these
patches work or how to apply them.
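
(For what it's worth: the TSO patches discussed on the list were not part of OVS 2.11. Later releases, OVS 2.13 and newer, added experimental userspace TSO for vhost-user ports; on those versions it is switched on roughly like this, assuming the NIC and PMD support the required offloads:

# ovs-vsctl set Open_vSwitch . other_config:userspace-tso-enable=true

followed by a restart of ovs-vswitchd.)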

I think the reason of lower throughput in the scenario of OVS-DPDK is that
>> TSO(GSO)& GRO are not supported in OVS-DPDK. So the packets between the VMs
>> are limited to the MTU of the vhostuser ports.
>>
>
> And the kernel based OVS supports TSO(GSO), the TCP packets can be up
> to 64KB, so the throughput of iperf between two VMs is much higher.
>
>
>
>
>
> 徐斌斌 xubinbin
>
>
> 软件开发工程师 Software Development Engineer
> 虚拟化南京四部/无线研究院/无线产品经营部 NIV Nanjing Dept. IV/Wireless Product R&D
> Institute/Wireless Product Operation
>
>
>
> 南京市雨花台区花神大道6号中兴通讯
> 4/F, R Building, No.6 Huashen Road,
> Yuhuatai District, Nanjing, P.R. China,
> M: +86 13851437610
> E: xu.binb...@zte.com.cn
> www.zte.com.cn
> Original mail
> *From:* HarshGondaliya 
> *To:* ovs-discuss ;
> *Date:* 2019-04-12 15:34
> *Subject:* *[ovs-discuss] OVS-DPDK giving lower throughput then Native OVS*
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
> I had connected two VMs to native OVS bridge and I got iperf test result
> of around *35-37Gbps*.
> Now when I am performing similar tests with two VMs connected to OVS-DPDK
> bridge using vhostuser ports I am getting the iperf test results as around 
> *6-6.5
> Gbps.*
> I am unable to understand the reason for such low throughput in case of
> OVS-DPDK. I am using OVS version 2.11.0
>
> I have 4 physical cores on my CPU (i.e. 8 logical cores) and have 16 GB
> system. I have allocated 6GB for the hugepages pool. 2GB of it was given to
> OVS socket mem option and the remaining 4GB was given to Virtual machines
> for memory backing (2Gb per VM). These are some of the configurations of
> my OVS-DPDK bridge:
>
> root@dpdk-OptiPlex-5040:/home/dpdk# ovs-vswitchd
> unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
> 2019-04-12T07:01:00Z|1|ovs_numa|INFO|Discovered 8 CPU cores on NUMA
> node 0
> 2019-04-12T07:01:00Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes and 8 CPU
> cores
> 2019-04-12T07:01:00Z|3|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
> connecting...
> 2019-04-12T07:01:00Z|4|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
> connected
> 2019-04-12T07:01:00Z|5|dpdk|INFO|Using DPDK 18.11.0
> 2019-04-12T07:01:00Z|6|dpdk|INFO|DPDK Enabled - initializing...
> 2019-04-12T07:01:00Z|7|dpdk|INFO|No vhost-sock-dir provided -
> defaulting to /usr/local/var/run/openvswitch
> 2019-04-12T07:01:00Z|8|dpdk|INFO|IOMMU support for vhost-user-client
> disabled.
> 2019-04-12T07:01:00Z|9|dpdk|INFO|Per port memory for DPDK devices
> disabled.
> 2019-04-12T07:01:00Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0xA
> --socket-mem 2048 --socket-limit 2048.
> 2019-04-12T07:01:00Z|00011|dpdk|INFO|EAL: Detected 8 lcore(s)
> 2019-04-12T07:01:00Z|00012|dpdk|INFO|EAL: Detected 1 NUMA nodes
> 2019-04-12T07:01:00Z|00013|dpdk|INFO|EAL: Multi-process socket
> /var/run/dpdk/rte/mp_socket
> 2019-04-12T07:01:00Z|00014|dpdk|INFO|EAL: Probing VFIO support...
> 2019-04-12T07:01:00Z|00015|dpdk|INFO|EAL: PCI device :00:1f.6 on NUMA
> socket -1
> 2019-04-12T07:01:00Z|00016|dpdk|WARN|EAL:   Invalid NUMA socket, default
> to 0
> 2019-04-12T07:01:00Z|00017|dpdk|INFO|EAL:   probe driver: 8086:15b8
> net_e1000_em
> 2019-04-12T07:01:00Z|00018|dpdk|INFO|DPDK Enabled - initialized
> 2019-04-12T07:01:00Z|00019|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports recirculation
> 2019-04-12T07:01:00Z|00020|ofproto_dpif|INFO|netdev@ovs-netdev: VLAN
> header stack length probed as 1
> 2019-04-12T07:01:00Z|00021|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS
> label stack length probed as 3
> 2019-04-12T07:01:00Z|00022|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports truncate action
> 2019-04-12T07:01:00Z|00023|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports unique flow ids
> 2019-04-12T07:01:00Z|00024|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports clone action
> 2019-04-12T07:01:00Z|00025|ofproto_dpif|INFO|netdev@ovs-netdev: Max
> sample nesting level probed as 10
> 2019-04-12T07:01:00Z|00026|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports eventmask in conntrack action
> 2019-04-12T07:01:00Z|00027|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
> supports ct_clear action
> 2019-04-12T07:01:00Z|00028|ofproto_dpif|INFO|netdev@ovs-netdev: Max
> dp_hash 

Re: [ovs-discuss] OVS-DPDK giving lower throughput then Native OVS

2019-04-14 Thread xu.binbin1
I think the reason for the lower throughput in the OVS-DPDK scenario is that
TSO (GSO) and GRO are not supported in OVS-DPDK, so the packets between the
VMs are limited to the MTU of the vhostuser ports.

And the kernel-based OVS supports TSO (GSO), so the TCP packets can be up to
64KB and the throughput of iperf between two VMs is much higher.

徐斌斌 xubinbin
软件开发工程师 Software Development Engineer
虚拟化南京四部/无线研究院/无线产品经营部 NIV Nanjing Dept. IV/Wireless Product R&D Institute/Wireless Product Operation
南京市雨花台区花神大道6号中兴通讯
4/F, R Building, No.6 Huashen Road, Yuhuatai District, Nanjing, P.R. China
M: +86 13851437610
E: xu.binb...@zte.com.cn
www.zte.com.cn

Original mail
From: HarshGondaliya
To: ovs-discuss ;
Date: 2019-04-12 15:34
Subject: [ovs-discuss] OVS-DPDK giving lower throughput then Native OVS


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss



I had connected two VMs to native OVS bridge and I got iperf test result of 
around 35-37Gbps.Now when I am performing similar tests with two VMs connected 
to OVS-DPDK bridge using vhostuser ports I am getting the iperf test results as 
around 6-6.5 Gbps.
I am unable to understand the reason for such low throughput in case of 
OVS-DPDK. I am using OVS version 2.11.0


I have 4 physical cores on my CPU (i.e. 8 logical cores) and have 16 GB system. 
I have allocated 6GB for the hugepages pool. 2GB of it was given to OVS socket 
mem option and the remaining 4GB was given to Virtual machines for memory 
backing (2Gb per VM). These are some of the configurations of my OVS-DPDK 
bridge:


root@dpdk-OptiPlex-5040:/home/dpdk# ovs-vswitchd 
unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
2019-04-12T07:01:00Z|1|ovs_numa|INFO|Discovered 8 CPU cores on NUMA node 0
2019-04-12T07:01:00Z|2|ovs_numa|INFO|Discovered 1 NUMA nodes and 8 CPU cores
2019-04-12T07:01:00Z|3|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
 connecting...
2019-04-12T07:01:00Z|4|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock:
 connected
2019-04-12T07:01:00Z|5|dpdk|INFO|Using DPDK 18.11.0
2019-04-12T07:01:00Z|6|dpdk|INFO|DPDK Enabled - initializing...
2019-04-12T07:01:00Z|7|dpdk|INFO|No vhost-sock-dir provided - defaulting to 
/usr/local/var/run/openvswitch
2019-04-12T07:01:00Z|8|dpdk|INFO|IOMMU support for vhost-user-client 
disabled.
2019-04-12T07:01:00Z|9|dpdk|INFO|Per port memory for DPDK devices disabled.
2019-04-12T07:01:00Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0xA --socket-mem 
2048 --socket-limit 2048.
2019-04-12T07:01:00Z|00011|dpdk|INFO|EAL: Detected 8 lcore(s)
2019-04-12T07:01:00Z|00012|dpdk|INFO|EAL: Detected 1 NUMA nodes
2019-04-12T07:01:00Z|00013|dpdk|INFO|EAL: Multi-process socket 
/var/run/dpdk/rte/mp_socket
2019-04-12T07:01:00Z|00014|dpdk|INFO|EAL: Probing VFIO support...
2019-04-12T07:01:00Z|00015|dpdk|INFO|EAL: PCI device :00:1f.6 on NUMA 
socket -1
2019-04-12T07:01:00Z|00016|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
2019-04-12T07:01:00Z|00017|dpdk|INFO|EAL:   probe driver: 8086:15b8 net_e1000_em
2019-04-12T07:01:00Z|00018|dpdk|INFO|DPDK Enabled - initialized
2019-04-12T07:01:00Z|00019|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports recirculation
2019-04-12T07:01:00Z|00020|ofproto_dpif|INFO|netdev@ovs-netdev: VLAN header 
stack length probed as 1
2019-04-12T07:01:00Z|00021|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label 
stack length probed as 3
2019-04-12T07:01:00Z|00022|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports truncate action
2019-04-12T07:01:00Z|00023|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports unique flow ids
2019-04-12T07:01:00Z|00024|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports clone action
2019-04-12T07:01:00Z|00025|ofproto_dpif|INFO|netdev@ovs-netdev: Max sample 
nesting level probed as 10
2019-04-12T07:01:00Z|00026|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports eventmask in conntrack action
2019-04-12T07:01:00Z|00027|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_clear action
2019-04-12T07:01:00Z|00028|ofproto_dpif|INFO|netdev@ovs-netdev: Max dp_hash 
algorithm probed to be 1
2019-04-12T07:01:00Z|00029|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_state
2019-04-12T07:01:00Z|00030|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_zone
2019-04-12T07:01:00Z|00031|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_mark
2019-04-12T07:01:00Z|00032|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_label
2019-04-12T07:01:00Z|00033|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_state_nat
2019-04-12T07:01:00Z|00034|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_orig_tuple
2019-04-12T07:01:00Z|00035|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports ct_orig_tuple6
2019-04-12T07:01:00Z|00036|dpdk|INFO|VHOST_CONFIG: vhost-user 

Re: [ovs-discuss] OVS-DPDK fails after clearing buffer

2019-04-08 Thread Burakov, Anatoly
> -Original Message-
> From: Tobias Hofmann -T (tohofman - AAP3 INC at Cisco)
> [mailto:tohof...@cisco.com]
> Sent: Friday, April 5, 2019 9:39 PM
> To: b...@openvswitch.org; Burakov, Anatoly 
> Cc: Shriroop Joshi (shrirjos) ; Stokes, Ian
> 
> Subject: Re: [ovs-discuss] OVS-DPDK fails after clearing buffer
> 
> Hi Anatoly,
> 
> I just wanted to follow up on the issue reported below. (It's already been 2
> weeks ago)
> 
> I don’t really understand the first solution you suggested: use IOVA as VA
> mode Does that mean I shall load vfio-pci driver before I set dpdk-init to
> true? So, doing a 'modprobe vfio-pci'? Actually I use vfio-pci but I wait with
> loading the vfio-pci until I actually bind an interface to it.

Hi Tobias,

As far as I can remember, in 18.08, IOVA as VA mode will be enabled if

0) modprobe vfio-pci, enable IOMMU in the BIOS, etc.
1) you have *at least one physical device* (otherwise EAL defaults to IOVA as 
PA mode)
2) *all* of your *physical* devices are bound to vfio-pci

Provided all of this is true, DPDK should run in IOVA as VA mode.

Alternatively, DPDK 17.11 and 18.11 will have --iova-mode command-line switch 
which will allow forcing IOVA as VA mode if possible, but I'm not sure if 18.08 
has it.
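
A minimal sketch of satisfying condition 2) above, with the PCI address only an example, would be:

# modprobe vfio-pci
# dpdk-devbind.py --bind=vfio-pci 0000:5e:00.0
# dpdk-devbind.py --status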

> 
> Also, to answer your last question: Transparent HugePages are enabled. I've
> just disabled them and was still able to reproduce the issue.

Unfortunately, I can't be of much help here as I did not look into how vmcaches 
work on Linux, let alone what happens when hugepages end up in said cache. I 
obviously don't know specifics of your use case and whether it's really 
necessary to drop caches, however a cursory Google search indicates that the 
general sentiment seems to be that you shouldn't drop caches in the first 
place, and that it is not a good practice in general.

> 
> Regards
> Toby
> 
> 
> On 3/21/19, 12:19 PM, "Ian Stokes"  wrote:
> 
> On 3/20/2019 10:37 PM, Tobias Hofmann -T (tohofman - AAP3 INC at Cisco)
> via discuss wrote:
> > Hello,
> >
> 
> Hi,
> 
> I wasnt sure at first glance what was happening so discussed with
> Anatoly (Cc'd) who has worked a considerable amount with DPDK memory
> models. Please see response below to what the suspected issue is.
> Anatoly, thanks for you input on this.
> 
> > I want to use Open vSwitch with DPDK enabled. For this purpose, I first
> > allocate 512 HugePages of size 2MB to have a total of 1GB of HugePage
> > memory available for OVS-DPDK. (I don’t set any value for
> > */dpdk-socket-mem/ *so the default value of 1GB is taken). Then I set
> > */dpdk-init=true/*. This normally works fine.
> >
> > However, I have realized that I can’t allocate HugePages from memory
> > that is inside the buff/cache (visible through */free -h/*). To solve
> > this issue, I decided to clear the cache/buffer in Linux before
> > allocating HugePages by running */echo 1 >
> /proc/sys/vm/drop_caches/*.
> >
> > After that, allocation of the HugePages still works fine. However, when
> > I then run */ovs-vsctl set open_vswitch other_config:dpdk-init=true/*
> > the process crashes and inside the ovs-vswitchd.log I observe the
> following:
> >
> > *ovs-vswitchd log output:*
> >
> > 2019-03-18T13:32:41.112Z|00015|dpdk|ERR|EAL: Can only reserve 270
> pages
> > from 512 requested
> >
> > Current CONFIG_RTE_MAX_MEMSEG=256 is not enough
> 
> After you drop the cache, from the above log it is clear that, as a
> result, hugepages’ physical addresses get fragmented, as DPDK cannot
> concatenate pages into segments any more (which results in
> 1-page-per-segment type situation which causes you to run out of
> memseg
> structures, of which there are only 256). We have no control over what
> addresses we get from the OS, so there’s really no way to “unfragment”
> the pages.
> 
> So, the above only happens when
> 
> 1) you’re running in IOVA as PA mode (so, using real physical addresses).
> 2) your hugepages are heavily fragmented.
> 
> Possible solutions for this are:
> 
> 1. Use IOVA as VA mode (so, use VFIO, not igb_uio), this way, the pages
> will still be fragmented, but the IOMMU will remap them to be contiguous
> – this is the recommended option, with VFIO being available it is the
> better choice than igb_uio.
> 
> 2. Use bigger page sizes. Strictly speaking, this isn’t a solution as
> memory would be fragmented too, but a 1GB-long standalone segment is
> way
> more useful than a standalone 2MB-long segment.
>
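
(For reference, re-reserving the 2 MB pool as in option 3 above looks roughly like this, assuming the default 2048 kB page size and the 512-page reservation mentioned earlier:

# echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# grep Huge /proc/meminfo
)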

Re: [ovs-discuss] OVS-DPDK fails after clearing buffer

2019-04-05 Thread Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) via discuss
Hi Anatoly,

I just wanted to follow up on the issue reported below. (It's already been two
weeks.)

I don't really understand the first solution you suggested: use IOVA as VA mode.
Does that mean I should load the vfio-pci driver before I set dpdk-init to true?
So, doing a 'modprobe vfio-pci'? Actually I use vfio-pci, but I wait to load
vfio-pci until I actually bind an interface to it.

Also, to answer your last question: Transparent HugePages are enabled. I've 
just disabled them and was still able to reproduce the issue.

Regards
Toby


On 3/21/19, 12:19 PM, "Ian Stokes"  wrote:

On 3/20/2019 10:37 PM, Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) 
via discuss wrote:
> Hello,
> 

Hi,

I wasnt sure at first glance what was happening so discussed with 
Anatoly (Cc'd) who has worked a considerable amount with DPDK memory 
models. Please see response below to what the suspected issue is. 
Anatoly, thanks for you input on this.

> I want to use Open vSwitch with DPDK enabled. For this purpose, I first 
> allocate 512 HugePages of size 2MB to have a total of 1GB of HugePage 
> memory available for OVS-DPDK. (I don’t set any value for 
> */dpdk-socket-mem/ *so the default value of 1GB is taken). Then I set 
> */dpdk-init=true/*. This normally works fine.
> 
> However, I have realized that I can’t allocate HugePages from memory 
> that is inside the buff/cache (visible through */free -h/*). To solve 
> this issue, I decided to clear the cache/buffer in Linux before 
> allocating HugePages by running */echo 1 > /proc/sys/vm/drop_caches/*.
> 
> After that, allocation of the HugePages still works fine. However, when 
> I then run */ovs-vsctl set open_vswitch other_config:dpdk-init=true/* 
> the process crashes and inside the ovs-vswitchd.log I observe the 
following:
> 
> *ovs-vswitchd log output:*
> 
> 2019-03-18T13:32:41.112Z|00015|dpdk|ERR|EAL: Can only reserve 270 pages 
> from 512 requested
> 
> Current CONFIG_RTE_MAX_MEMSEG=256 is not enough

After you drop the cache, from the above log it is clear that, as a 
result, hugepages’ physical addresses get fragmented, as DPDK cannot 
concatenate pages into segments any more (which results in 
1-page-per-segment type situation which causes you to run out of memseg 
structures, of which there are only 256). We have no control over what 
addresses we get from the OS, so there’s really no way to “unfragment” 
the pages.

So, the above only happens when

1) you’re running in IOVA as PA mode (so, using real physical addresses).
2) your hugepages are heavily fragmented.

Possible solutions for this are:

1. Use IOVA as VA mode (so, use VFIO, not igb_uio), this way, the pages 
will still be fragmented, but the IOMMU will remap them to be contiguous 
– this is the recommended option, with VFIO being available it is the 
better choice than igb_uio.

2. Use bigger page sizes. Strictly speaking, this isn’t a solution as 
memory would be fragmented too, but a 1GB-long standalone segment is way 
more useful than a standalone 2MB-long segment.

3. Reboot (as you have done), maybe try re-reserving all pages? E.g.
i. Clean your hugetlbfs contents to free any leftover pages
ii. echo 0 > /sys/kernel/mm/hugepages/hugepage-/nr_hugepages
iii. echo 512 > /sys/kernel/mm/hugepages/hugepage-/nr_hugepages

Alternatively if you upgrade to OVs 2.11 it will use DPDK 18.11. This 
would make a difference as since DPDK 18.05+ we don’t require 
PA-contiguous segments any more

I would also question why these pages are in the regular page cache in 
the first place. Are transparent hugepages enabled?

HTL
Ian

> 
> Please either increase it or request less amount of memory.
> 
> 2019-03-18T13:32:41.112Z|00016|dpdk|ERR|EAL: Cannot init memory
> 
> 2019-03-18T13:32:41.128Z|2|daemon_unix|ERR|fork child died before 
> signaling startup (killed (Aborted))
> 
> 2019-03-18T13:32:41.128Z|3|daemon_unix|EMER|could not detach from 
> foreground session
> 
> *Tech Details:*
> 
>   * Open vSwitch version: 2.9.2
>   * DPDK version: 17.11
>   * System has only a single NUMA node.
> 
> This problem is consistently reproducible when having a relatively high 
> amount of memory in the buffer/cache (usually around 5GB) and clearing 
> the buffer afterwards with the command outlined above.
> 
> On the Internet, I found some posts saying that this is due to memory 
> fragmentation but normally I’m not even able to allocate HugePages in 
> the first place when my memory is already fragmented. In this scenario 
> however the allocation of HugePages works totally fine after clearing 
> the buffer so why would 

Re: [ovs-discuss] OVS/DPDK Build Failing with MLX5 Adapter Enabled

2018-12-17 Thread David Christensen

But DPDK builds successfully by itself.  Any suggestions where the build
is breaking down?


I sent a patch for this problem
(https://patchwork.ozlabs.org/patch/1013466/).

Can you try (and maybe ack) it?


Confirmed, the patch fixes the problem. Thanks.

Dave

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS/DPDK Build Failing with MLX5 Adapter Enabled

2018-12-16 Thread Timothy Redaelli
On Thu, 13 Dec 2018 16:53:55 -0800
David Christensen  wrote:

> Attempting to use DPDK 18.11 with Monday's OVS commit that supports DPDK 
> 18.11 (commit 03f3f9c0faf838a8506c3b5ce6199af401d13cb3).  When building 
> OVS with DPDK support I'm receiving a build error related to libmnl not 
> being found while compiling the Mellanox driver as follows:
> 
> ...
> gcc -std=gnu99 -DHAVE_CONFIG_H -I.-I ./include -I ./include -I ./lib 
> -I ./lib-Wstrict-prototypes -Wall -Wextra -Wno-sign-compare 
> -Wpointer-arith -Wformat -Wformat-security -Wswitch-enum 
> -Wunused-parameter -Wbad-function-cast -Wcast-align -Wstrict-prototypes 
> -Wold-style-definition -Wmissing-prototypes -Wmissing-field-initializers 
> -fno-strict-aliasing -Wshadow 
> -I/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/include 
> -D_FILE_OFFSET_BITS=64  -g -O2 -MT vswitchd/xenserver.o -MD -MP -MF 
> $depbase.Tpo -c -o vswitchd/xenserver.o vswitchd/xenserver.c &&\
> mv -f $depbase.Tpo $depbase.Po
> /bin/sh ./libtool  --tag=CC   --mode=link gcc -std=gnu99 
> -Wstrict-prototypes -Wall -Wextra -Wno-sign-compare -Wpointer-arith 
> -Wformat -Wformat-security -Wswitch-enum -Wunused-parameter 
> -Wbad-function-cast -Wcast-align -Wstrict-prototypes 
> -Wold-style-definition -Wmissing-prototypes -Wmissing-field-initializers 
> -fno-strict-aliasing -Wshadow 
> -I/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/include 
> -D_FILE_OFFSET_BITS=64  -g -O2 
> -L/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/lib 
> -Wl,--whole-archive,-ldpdk,--no-whole-archive  -o vswitchd/ovs-vswitchd 
> vswitchd/bridge.o vswitchd/ovs-vswitchd.o vswitchd/system-stats.o 
> vswitchd/xenserver.o ofproto/libofproto.la lib/libsflow.la 
> lib/libopenvswitch.la -ldpdk -ldl -lnuma -latomic -lpthread -lrt -lm  -lnuma
> libtool: link: gcc -std=gnu99 -Wstrict-prototypes -Wall -Wextra 
> -Wno-sign-compare -Wpointer-arith -Wformat -Wformat-security 
> -Wswitch-enum -Wunused-parameter -Wbad-function-cast -Wcast-align 
> -Wstrict-prototypes -Wold-style-definition -Wmissing-prototypes 
> -Wmissing-field-initializers -fno-strict-aliasing -Wshadow 
> -I/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/include 
> -D_FILE_OFFSET_BITS=64 -g -O2 -Wl,--whole-archive -Wl,-ldpdk 
> -Wl,--no-whole-archive -o vswitchd/ovs-vswitchd vswitchd/bridge.o 
> vswitchd/ovs-vswitchd.o vswitchd/system-stats.o vswitchd/xenserver.o 
> -L/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/lib 
> ofproto/.libs/libofproto.a 
> /home/davec/src/p9-dpdk-perf/ovs/lib/.libs/libsflow.a 
> lib/.libs/libsflow.a lib/.libs/libopenvswitch.a -ldpdk -ldl -latomic 
> -lpthread -lrt -lm -lnuma
> /home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/lib/librte_pmd_mlx5.a(mlx5_flow_tcf.o):
>  
> In function `flow_tcf_nl_ack':
> /home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3753: 
> undefined reference to `mnl_socket_get_portid'
> /home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3765: 
> undefined reference to `mnl_socket_sendto'
> /home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3777: 
> undefined reference to `mnl_socket_recvfrom'
> /home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3790: 
> undefined reference to `mnl_cb_run'
> /home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3777: 
> undefined reference to `mnl_socket_recvfrom'
> ...
> 
> Both the OVS and DPDK builds work individually but I receive the error 
> after running "./configure --with-dpdk=; make" to 
> build OVS with DPDK.  I ran across this post on the DPDK list regarding 
> libmnl, indicating there is a dependency issue:
> 
> http://mails.dpdk.org/archives/dev/2018-July/108573.html
> 
> But DPDK builds successfully by itself.  Any suggestions where the build 
> is breaking down?

Hi,
I sent a patch for this problem
(https://patchwork.ozlabs.org/patch/1013466/).

Can you try (and maybe ack) it?

NOTE: This problem is only present if you link with DPDK as a static
library, since with a shared library the linker doesn't need to know the
full list of indirect dependencies.

Thank you
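
(If applying the patch is not an option, a possible stop-gap, assuming libmnl-devel is installed and with $DPDK_BUILD standing in for your DPDK build directory, is to pass the missing library explicitly when configuring OVS:

./configure --with-dpdk=$DPDK_BUILD LIBS="-lmnl"

so that the static link picks up the indirect dependency.)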

> Dave
> 
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS/DPDK Build Failing with MLX5 Adapter Enabled

2018-12-15 Thread Olga Shern
> But DPDK builds successfully by itself.  Any suggestions where the build is 
> breaking down?

What do you mean?

The question is whether the Mellanox PMD is compiled. If it is compiled, then
libmnl is needed.

Thanks,
Olga


-Original Message-
From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of David Christensen
Sent: Friday, December 14, 2018 2:54 AM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] OVS/DPDK Build Failing with MLX5 Adapter Enabled

Attempting to use DPDK 18.11 with Monday's OVS commit that supports DPDK
18.11 (commit 03f3f9c0faf838a8506c3b5ce6199af401d13cb3).  When building OVS 
with DPDK support I'm receiving a build error related to libmnl not being found 
while compiling the Mellanox driver as follows:

...
gcc -std=gnu99 -DHAVE_CONFIG_H -I.-I ./include -I ./include -I ./lib 
-I ./lib-Wstrict-prototypes -Wall -Wextra -Wno-sign-compare 
-Wpointer-arith -Wformat -Wformat-security -Wswitch-enum -Wunused-parameter 
-Wbad-function-cast -Wcast-align -Wstrict-prototypes -Wold-style-definition 
-Wmissing-prototypes -Wmissing-field-initializers -fno-strict-aliasing -Wshadow 
-I/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/include
-D_FILE_OFFSET_BITS=64  -g -O2 -MT vswitchd/xenserver.o -MD -MP -MF 
$depbase.Tpo -c -o vswitchd/xenserver.o vswitchd/xenserver.c &&\ mv -f 
$depbase.Tpo $depbase.Po
/bin/sh ./libtool  --tag=CC   --mode=link gcc -std=gnu99 
-Wstrict-prototypes -Wall -Wextra -Wno-sign-compare -Wpointer-arith -Wformat 
-Wformat-security -Wswitch-enum -Wunused-parameter -Wbad-function-cast 
-Wcast-align -Wstrict-prototypes -Wold-style-definition -Wmissing-prototypes 
-Wmissing-field-initializers -fno-strict-aliasing -Wshadow 
-I/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/include
-D_FILE_OFFSET_BITS=64  -g -O2
-L/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/lib
-Wl,--whole-archive,-ldpdk,--no-whole-archive  -o vswitchd/ovs-vswitchd 
vswitchd/bridge.o vswitchd/ovs-vswitchd.o vswitchd/system-stats.o 
vswitchd/xenserver.o ofproto/libofproto.la lib/libsflow.la 
lib/libopenvswitch.la -ldpdk -ldl -lnuma -latomic -lpthread -lrt -lm  -lnuma
libtool: link: gcc -std=gnu99 -Wstrict-prototypes -Wall -Wextra 
-Wno-sign-compare -Wpointer-arith -Wformat -Wformat-security -Wswitch-enum 
-Wunused-parameter -Wbad-function-cast -Wcast-align -Wstrict-prototypes 
-Wold-style-definition -Wmissing-prototypes -Wmissing-field-initializers 
-fno-strict-aliasing -Wshadow 
-I/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/include
-D_FILE_OFFSET_BITS=64 -g -O2 -Wl,--whole-archive -Wl,-ldpdk 
-Wl,--no-whole-archive -o vswitchd/ovs-vswitchd vswitchd/bridge.o 
vswitchd/ovs-vswitchd.o vswitchd/system-stats.o vswitchd/xenserver.o 
-L/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/lib
ofproto/.libs/libofproto.a
/home/davec/src/p9-dpdk-perf/ovs/lib/.libs/libsflow.a
lib/.libs/libsflow.a lib/.libs/libopenvswitch.a -ldpdk -ldl -latomic -lpthread 
-lrt -lm -lnuma
/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/lib/librte_pmd_mlx5.a(mlx5_flow_tcf.o):
 
In function `flow_tcf_nl_ack':
/home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3753: 
undefined reference to `mnl_socket_get_portid'
/home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3765: 
undefined reference to `mnl_socket_sendto'
/home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3777: 
undefined reference to `mnl_socket_recvfrom'
/home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3790: 
undefined reference to `mnl_cb_run'
/home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3777: 
undefined reference to `mnl_socket_recvfrom'
...

Both the OVS and DPDK builds work individually but I receive the error after 
running "./configure --with-dpdk=; make" to build OVS with 
DPDK.  I ran across this post on the DPDK list regarding libmnl, indicating 
there is a dependency issue:

http://mails.dpdk.org/archives/dev/2018-July/108573.html

But DPDK builds successfully by itself.  Any suggestions where the build is 
breaking down?

Dave

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovs-dpdk] bandwidth issue of vhostuserclient virtio ovs-dpdk

2018-11-29 Thread Lam, Tiago
On 29/11/2018 08:24, LIU Yulong wrote:
> Hi,
> 
> We recently tested ovs-dpdk, but we met some bandwidth issue. The bandwidth
> from VM to VM was not close to the physical NIC, it's about 4.3Gbps on a
> 10Gbps NIC. For no dpdk (virtio-net) VMs, the iperf3 test can easily
> reach 9.3Gbps. We enabled the virtio multiqueue for all guest VMs. In the
> dpdk vhostuser guest, we noticed that the interrupts are centralized to
> only one queue. But for no dpdk VM, interrupts can hash to all queues.
> For those dpdk vhostuser VMs, we also noticed that the PMD usages were
> also centralized to one no matter server(tx) or client(rx). And no matter
> one PMD or multiple PMDs, this behavior always exists.
> 
> Furthuremore, my colleague add some systemtap hook on the openvswitch
> function, he found something interesting. The function
> __netdev_dpdk_vhost_send will send all the packets to one virtionet-queue.
> Seems that there are some algorithm/hash table/logic does not do the hash
> very well. 
> 

Hi,

When you say "no dpdk VMs", you mean that within your VM you're relying
on the Kernel to get the packets, using virtio-net. And when you say
"dpdk vhostuser guest", you mean you're using DPDK inside the VM to get
the packets. Is this correct?

If so, could you also tell us which DPDK app you're using inside of
those VMs? Is it testpmd? If so, how are you setting the `--rxq` and
`--txq` args? Otherwise, how are you setting those in your app when
initializing DPDK?

The information below is useful in telling us how you're setting your
configurations in OvS, but we are still missing the configurations
inside the VM.

This should help us in getting more information,

Tiago.
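
(For comparison, a typical way to run testpmd inside the guest with multiple queues, with all values here purely illustrative, is:

# testpmd -l 0-2 -n 4 --socket-mem 1024 -- --rxq=8 --txq=8 --forward-mode=mac --auto-start

so that the number of queues used by the guest matches the 8 queues configured on the vhost-user port.)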

> So I'd like to find some help from the community. Maybe I'm missing some
> configrations.
> 
> Thanks.
> 
> 
> Here is the list of the environment and some configrations:
> # uname -r
> 3.10.0-862.11.6.el7.x86_64
> # rpm -qa|grep dpdk
> dpdk-17.11-11.el7.x86_64
> # rpm -qa|grep openvswitch
> openvswitch-2.9.0-3.el7.x86_64
> # ovs-vsctl list open_vswitch
> _uuid               : a6a3d9eb-28a8-4bf0-a8b4-94577b5ffe5e
> bridges             : [531e4bea-ce12-402a-8a07-7074c31b978e,
> 5c1675e2-5408-4c1f-88bc-6d9c9b932d47]
> cur_cfg             : 1305
> datapath_types      : [netdev, system]
> db_version          : "7.15.1"
> external_ids        : {hostname="cq01-compute-10e112e5e140",
> rundir="/var/run/openvswitch",
> system-id="e2cc84fe-a3c8-455f-8c64-260741c141ee"}
> iface_types         : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient,
> geneve, gre, internal, lisp, patch, stt, system, tap, vxlan]
> manager_options     : [43803994-272b-49cb-accc-ab672d1eefc8]
> next_cfg            : 1305
> other_config        : {dpdk-init="true", dpdk-lcore-mask="0x1",
> dpdk-socket-mem="1024,1024", pmd-cpu-mask="0x10",
> vhost-iommu-support="true"}
> ovs_version         : "2.9.0"
> ssl                 : []
> statistics          : {}
> system_type         : centos
> system_version      : "7"
> # lsmod |grep vfio
> vfio_pci               41312  2 
> vfio_iommu_type1       22300  1 
> vfio                   32695  7 vfio_iommu_type1,vfio_pci
> irqbypass              13503  23 kvm,vfio_pci
> 
> # ovs-appctl dpif/show
> netdev@ovs-netdev: hit:759366335 missed:754283
> br-ex:
> bond1108 4/6: (tap)
> br-ex 65534/3: (tap)
> nic-10G-1 5/4: (dpdk: configured_rx_queues=8,
> configured_rxq_descriptors=2048, configured_tx_queues=2,
> configured_txq_descriptors=2048, mtu=1500, requested_rx_queues=8,
> requested_rxq_descriptors=2048, requested_tx_queues=2,
> requested_txq_descriptors=2048, rx_csum_offload=true)
> nic-10G-2 6/5: (dpdk: configured_rx_queues=8,
> configured_rxq_descriptors=2048, configured_tx_queues=2,
> configured_txq_descriptors=2048, mtu=1500, requested_rx_queues=8,
> requested_rxq_descriptors=2048, requested_tx_queues=2,
> requested_txq_descriptors=2048, rx_csum_offload=true)
> phy-br-ex 3/none: (patch: peer=int-br-ex)
> br-int:
> br-int 65534/2: (tap)
> int-br-ex 1/none: (patch: peer=phy-br-ex)
> vhu76f9a623-9f 2/1: (dpdkvhostuserclient: configured_rx_queues=8,
> configured_tx_queues=8, mtu=1500, requested_rx_queues=8,
> requested_tx_queues=8)
> 
> # ovs-appctl dpctl/show -s
> netdev@ovs-netdev:
> lookups: hit:759366335 missed:754283 lost:72
> flows: 186
> port 0: ovs-netdev (tap)
> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> TX packets:0 errors:0 dropped:0 aborted:0 carrier:0
> collisions:0
> RX bytes:0  TX bytes:0
> port 1: vhu76f9a623-9f (dpdkvhostuserclient: configured_rx_queues=8,
> configured_tx_queues=8, mtu=1500, requested_rx_queues=8,
> requested_tx_queues=8)
> RX packets:718391758 errors:0 dropped:0 overruns:? frame:?
> TX packets:30372410 errors:? dropped:719200 aborted:? carrier:?
> collisions:?
> RX bytes:1086995317051 (1012.3 GiB)  TX bytes:2024893540 (1.9 GiB)
> port 2: br-int (tap)
> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> TX packets:1393992 errors:0 dropped:4 aborted:0 carrier:0
> collisions:0
> RX bytes:0  TX 

Re: [ovs-discuss] OVS DPDK performance for TCP traffic versus UDP

2018-11-27 Thread Onkar Pednekar
Hi,

I managed to solve this performance issue. I got improved performance after
turning off the mrg_rxbuf and increasing the rx and tx queue sizes to 1024.

Thanks,
Onkar
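
(For anyone hitting the same thing, the two knobs referred to above look roughly like this; the interface and netdev names are only examples, and whether the OVS descriptor options or QEMU's rx_queue_size/tx_queue_size properties apply depends on which side the queues sit:

# ovs-vsctl set Interface dpdk0 options:n_rxq_desc=1024 options:n_txq_desc=1024

and, on the QEMU command line:

 -device virtio-net-pci,netdev=net0,mrg_rxbuf=off
)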

On Thu, Nov 8, 2018 at 2:57 PM Onkar Pednekar  wrote:

> Hi,
>
> We figured out that the packet processing appliance within VM (which reads
> from raw socket on the dpdk vhost user interface) requires more packets per
> second to give higher throughput. Else its cpu utilization is idle most of
> the times.
>
> We increased the "tx-flush-interval" from default 0 to 500 and the
> throughput increased from 300 mbps to 600 mbps (but we expect 1G). Also, we
> saw that the PPS on the VM RX interface increased from 35 kpps to 68 kpps.
> Higher values of "tx-flush-interval" doesn't help.
>
> Also disabling mgr_rxbuf seems to give better performance, i.e.
> virtio-net-pci.mgr_rx_buf=off in qemu. But still the pps are around 65 k on
> the VM dpdk vhostuser interface RX and the throughput below 700 mbps.
>
> *Are there any other parameters that can be tuned to increase the amount
> of packets per second forwarded from phy dpdk interface to the dpdk
> vhostuser interface inside the VM?*
>
> Thanks,
> Onkar
>
> On Fri, Oct 5, 2018 at 1:45 PM Onkar Pednekar  wrote:
>
>> Hi Tiago,
>>
>> Sure. I'll try that.
>>
>> Thanks,
>> Onkar
>>
>> On Fri, Oct 5, 2018 at 9:06 AM Lam, Tiago  wrote:
>>
>>> Hi Onkar,
>>>
>>> Thanks for shedding some light.
>>>
>>> I don't think your difference in performance will have to do your
>>> OvS-DPDK setup. If you're taking the measurements directly from the
>>> iperf server side you'd be going through the "Internet". Assuming you
>>> don't have a dedicated connection there, things like your connection's
>>> bandwidth, the RTT from end to end start to matter considerably,
>>> specially for TCP.
>>>
>>> To get to the bottom of it I'd advise you to take the iperf server and
>>> connect it directly to the first machine (Machine 1). You would be
>>> excluding any "Internet" interference and be able to get the performance
>>> of a pvp scenario first.
>>>
>>> Assuming you're using kernel forwarding inside the VMs, if you want to
>>> squeeze in the extra performance it is probably wise to use DPDK testpmd
>>> to forward the traffic inside of the VMs as well, as explained here:
>>>
>>> http://docs.openvswitch.org/en/latest/howto/dpdk/#phy-vm-phy-vhost-loopback
>>>
>>> Regards,
>>> Tiago.
>>>
>>> On 04/10/2018 21:06, Onkar Pednekar wrote:
>>> > Hi Tiago,
>>> >
>>> > Thanks for your reply.
>>> >
>>> > Below are the answers to your questions in-line.
>>> >
>>> >
>>> > On Thu, Oct 4, 2018 at 4:07 AM Lam, Tiago >> > > wrote:
>>> >
>>> > Hi Onkar,
>>> >
>>> > Thanks for your email. Your setup isn't very clear to me, so a few
>>> > queries in-line.
>>> >
>>> > On 04/10/2018 06:06, Onkar Pednekar wrote:
>>> > > Hi,
>>> > >
>>> > > I have been experimenting with OVS DPDK on 1G interfaces. The
>>> > system has
>>> > > 8 cores (hyperthreading enabled) mix of dpdk and non-dpdk capable
>>> > ports,
>>> > > but the data traffic runs only on dpdk ports.
>>> > >
>>> > > DPDK ports are backed by vhost user netdev and I have configured
>>> the
>>> > > system so that hugepages are enabled, CPU cores isolated with PMD
>>> > > threads allocated to them and also pinning the VCPUs.>
>>> > > When I run UDP traffic, I see ~ 1G throughput on dpdk interfaces
>>> > with <
>>> > > 1% packet loss. However, with tcp traffic, I see around 300Mbps
>>> > > thoughput. I see that setting generic receive offload to off
>>> > helps, but
>>> > > still the TCP thpt is very less compared to the nic capabilities.
>>> > I know
>>> > > that there will be some performance degradation for TCP as
>>> against UDP
>>> > > but this is way below expected.
>>> > >
>>> >
>>> > When transmitting traffic between the DPDK ports, what are the
>>> flows you
>>> > have setup? Does it follow a p2p or pvp setup? In other words,
>>> does the
>>> > traffic flow between the VM and the physical ports, or only between
>>> > physical ports?
>>> >
>>> >
>>> >  The traffic is between the VM and the physical ports.
>>> >
>>> >
>>> > > I don't see any packets dropped for tcp on the internal VM
>>> (virtual)
>>> > > interfaces.
>>> > >
>>> > > I would like to know if there is an settings (offloads) for the
>>> > > interfaces or any other config I might be missing.
>>> >
>>> > What is the MTU set on the DPDK ports? Both physical and
>>> vhost-user?
>>> >
>>> > $ ovs-vsctl get Interface [dpdk0|vhostuserclient0] mtu
>>> >
>>> >
>>> > MTU set on physical ports = 2000
>>> > MTU set on vhostuser ports = 1500
>>> >
>>> >
>>> > This will help to clarify some doubts around your setup first.
>>> >
>>> > Tiago.
>>> >
>>> > >
>>> > > Thanks,
>>> > > Onkar
>>> > >
>>> > >
>>> > > 

Re: [ovs-discuss] OVS DPDK performance for TCP traffic versus UDP

2018-11-08 Thread Onkar Pednekar
Hi,

We figured out that the packet processing appliance within the VM (which reads
from a raw socket on the dpdk vhost-user interface) requires more packets per
second to give higher throughput. Otherwise its CPU utilization is mostly idle.

We increased the "tx-flush-interval" from the default 0 to 500 and the
throughput increased from 300 mbps to 600 mbps (but we expect 1G). Also, we
saw that the PPS on the VM RX interface increased from 35 kpps to 68 kpps.
Higher values of "tx-flush-interval" don't help.

Also, disabling mrg_rxbuf seems to give better performance, i.e.
virtio-net-pci.mrg_rxbuf=off in qemu. But the pps are still around 65 k on
the VM dpdk vhostuser interface RX and the throughput below 700 mbps.

*Are there any other parameters that can be tuned to increase the number of
packets per second forwarded from the phy dpdk interface to the dpdk vhostuser
interface inside the VM?*

Thanks,
Onkar
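
(The batching knob mentioned above is a global other_config setting, for example:

# ovs-vsctl set Open_vSwitch . other_config:tx-flush-interval=500

which lets a PMD hold output packets for up to that many microseconds before flushing them to the vhost tx queue.)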

On Fri, Oct 5, 2018 at 1:45 PM Onkar Pednekar  wrote:

> Hi Tiago,
>
> Sure. I'll try that.
>
> Thanks,
> Onkar
>
> On Fri, Oct 5, 2018 at 9:06 AM Lam, Tiago  wrote:
>
>> Hi Onkar,
>>
>> Thanks for shedding some light.
>>
>> I don't think your difference in performance will have to do your
>> OvS-DPDK setup. If you're taking the measurements directly from the
>> iperf server side you'd be going through the "Internet". Assuming you
>> don't have a dedicated connection there, things like your connection's
>> bandwidth, the RTT from end to end start to matter considerably,
>> specially for TCP.
>>
>> To get to the bottom of it I'd advise you to take the iperf server and
>> connect it directly to the first machine (Machine 1). You would be
>> excluding any "Internet" interference and be able to get the performance
>> of a pvp scenario first.
>>
>> Assuming you're using kernel forwarding inside the VMs, if you want to
>> squeeze in the extra performance it is probably wise to use DPDK testpmd
>> to forward the traffic inside of the VMs as well, as explained here:
>>
>> http://docs.openvswitch.org/en/latest/howto/dpdk/#phy-vm-phy-vhost-loopback
>>
>> Regards,
>> Tiago.
>>
>> On 04/10/2018 21:06, Onkar Pednekar wrote:
>> > Hi Tiago,
>> >
>> > Thanks for your reply.
>> >
>> > Below are the answers to your questions in-line.
>> >
>> >
>> > On Thu, Oct 4, 2018 at 4:07 AM Lam, Tiago > > > wrote:
>> >
>> > Hi Onkar,
>> >
>> > Thanks for your email. Your setup isn't very clear to me, so a few
>> > queries in-line.
>> >
>> > On 04/10/2018 06:06, Onkar Pednekar wrote:
>> > > Hi,
>> > >
>> > > I have been experimenting with OVS DPDK on 1G interfaces. The
>> > system has
>> > > 8 cores (hyperthreading enabled) mix of dpdk and non-dpdk capable
>> > ports,
>> > > but the data traffic runs only on dpdk ports.
>> > >
>> > > DPDK ports are backed by vhost user netdev and I have configured
>> the
>> > > system so that hugepages are enabled, CPU cores isolated with PMD
>> > > threads allocated to them and also pinning the VCPUs.>
>> > > When I run UDP traffic, I see ~ 1G throughput on dpdk interfaces
>> > with <
>> > > 1% packet loss. However, with tcp traffic, I see around 300Mbps
>> > > thoughput. I see that setting generic receive offload to off
>> > helps, but
>> > > still the TCP thpt is very less compared to the nic capabilities.
>> > I know
>> > > that there will be some performance degradation for TCP as
>> against UDP
>> > > but this is way below expected.
>> > >
>> >
>> > When transmitting traffic between the DPDK ports, what are the
>> flows you
>> > have setup? Does it follow a p2p or pvp setup? In other words, does
>> the
>> > traffic flow between the VM and the physical ports, or only between
>> > physical ports?
>> >
>> >
>> >  The traffic is between the VM and the physical ports.
>> >
>> >
>> > > I don't see any packets dropped for tcp on the internal VM
>> (virtual)
>> > > interfaces.
>> > >
>> > > I would like to know if there is an settings (offloads) for the
>> > > interfaces or any other config I might be missing.
>> >
>> > What is the MTU set on the DPDK ports? Both physical and vhost-user?
>> >
>> > $ ovs-vsctl get Interface [dpdk0|vhostuserclient0] mtu
>> >
>> >
>> > MTU set on physical ports = 2000
>> > MTU set on vhostuser ports = 1500
>> >
>> >
>> > This will help to clarify some doubts around your setup first.
>> >
>> > Tiago.
>> >
>> > >
>> > > Thanks,
>> > > Onkar
>> > >
>> > >
>> > > ___
>> > > discuss mailing list
>> > > disc...@openvswitch.org 
>> > > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>> > >
>> >
>>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK + DPDK-APP-with SRIOV

2018-10-25 Thread Ian Stokes

On 10/25/2018 3:10 AM, pratik maru wrote:

Hello,

I have a compute node installed with OVS+DPDK, and I am running a WRL
Guest VM which also has a DPDK app running inside it.

This Guest VM has two interfaces - 1 VirtIO (mgmt) + 1 SR-IOV (used by the
DPDK app). I am observing that as soon as my DPDK app starts, my mgmt
interface goes down.

Any idea what could be the problem? Also, is it a known issue?



I haven't come across the issue myself. Can you provide more information
regarding the OVS version, the DPDK version (both host and guest), and what
type of device the SR-IOV interface is attached to in the host?

Are there any errors or warnings in the OVS or DPDK logs? (Again, for both
OVS-DPDK on the host and the DPDK app in the guest.)


Ian

Thanks
Pratik


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss





Re: [ovs-discuss] OVS DPDK performance for TCP traffic versus UDP

2018-10-05 Thread Onkar Pednekar
Hi Tiago,

Sure. I'll try that.

Thanks,
Onkar

On Fri, Oct 5, 2018 at 9:06 AM Lam, Tiago  wrote:

> Hi Onkar,
>
> Thanks for shedding some light.
>
> I don't think your difference in performance will have to do your
> OvS-DPDK setup. If you're taking the measurements directly from the
> iperf server side you'd be going through the "Internet". Assuming you
> don't have a dedicated connection there, things like your connection's
> bandwidth, the RTT from end to end start to matter considerably,
> specially for TCP.
>
> To get to the bottom of it I'd advise you to take the iperf server and
> connect it directly to the first machine (Machine 1). You would be
> excluding any "Internet" interference and be able to get the performance
> of a pvp scenario first.
>
> Assuming you're using kernel forwarding inside the VMs, if you want to
> squeeze in the extra performance it is probably wise to use DPDK testpmd
> to forward the traffic inside of the VMs as well, as explained here:
> http://docs.openvswitch.org/en/latest/howto/dpdk/#phy-vm-phy-vhost-loopback
>
> Regards,
> Tiago.
>
> On 04/10/2018 21:06, Onkar Pednekar wrote:
> > Hi Tiago,
> >
> > Thanks for your reply.
> >
> > Below are the answers to your questions in-line.
> >
> >
> > On Thu, Oct 4, 2018 at 4:07 AM Lam, Tiago  > > wrote:
> >
> > Hi Onkar,
> >
> > Thanks for your email. Your setup isn't very clear to me, so a few
> > queries in-line.
> >
> > On 04/10/2018 06:06, Onkar Pednekar wrote:
> > > Hi,
> > >
> > > I have been experimenting with OVS DPDK on 1G interfaces. The
> > system has
> > > 8 cores (hyperthreading enabled) mix of dpdk and non-dpdk capable
> > ports,
> > > but the data traffic runs only on dpdk ports.
> > >
> > > DPDK ports are backed by vhost user netdev and I have configured
> the
> > > system so that hugepages are enabled, CPU cores isolated with PMD
> > > threads allocated to them and also pinning the VCPUs.>
> > > When I run UDP traffic, I see ~ 1G throughput on dpdk interfaces
> > with <
> > > 1% packet loss. However, with tcp traffic, I see around 300Mbps
> > > thoughput. I see that setting generic receive offload to off
> > helps, but
> > > still the TCP thpt is very less compared to the nic capabilities.
> > I know
> > > that there will be some performance degradation for TCP as against
> UDP
> > > but this is way below expected.
> > >
> >
> > When transmitting traffic between the DPDK ports, what are the flows
> you
> > have setup? Does it follow a p2p or pvp setup? In other words, does
> the
> > traffic flow between the VM and the physical ports, or only between
> > physical ports?
> >
> >
> >  The traffic is between the VM and the physical ports.
> >
> >
> > > I don't see any packets dropped for tcp on the internal VM
> (virtual)
> > > interfaces.
> > >
> > > I would like to know if there is an settings (offloads) for the
> > > interfaces or any other config I might be missing.
> >
> > What is the MTU set on the DPDK ports? Both physical and vhost-user?
> >
> > $ ovs-vsctl get Interface [dpdk0|vhostuserclient0] mtu
> >
> >
> > MTU set on physical ports = 2000
> > MTU set on vhostuser ports = 1500
> >
> >
> > This will help to clarify some doubts around your setup first.
> >
> > Tiago.
> >
> > >
> > > Thanks,
> > > Onkar
> > >
> > >
> > > ___
> > > discuss mailing list
> > > disc...@openvswitch.org 
> > > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> > >
> >
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS DPDK performance for TCP traffic versus UDP

2018-10-05 Thread Lam, Tiago
Hi Onkar,

Thanks for shedding some light.

I don't think your difference in performance will have to do with your
OvS-DPDK setup. If you're taking the measurements directly from the
iperf server side you'd be going through the "Internet". Assuming you
don't have a dedicated connection there, things like your connection's
bandwidth, the RTT from end to end start to matter considerably,
especially for TCP.

To get to the bottom of it I'd advise you to take the iperf server and
connect it directly to the first machine (Machine 1). You would be
excluding any "Internet" interference and be able to get the performance
of a pvp scenario first.

Assuming you're using kernel forwarding inside the VMs, if you want to
squeeze in the extra performance it is probably wise to use DPDK testpmd
to forward the traffic inside of the VMs as well, as explained here:
http://docs.openvswitch.org/en/latest/howto/dpdk/#phy-vm-phy-vhost-loopback
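
As a rough sketch, and assuming hugepages are mounted in the guest and the
virtio devices there are bound to a DPDK-compatible driver, the in-guest
forwarding could look something like:

  # run testpmd inside the guest and bounce frames back out
  testpmd -l 0,1 --socket-mem 512 -- --forward-mode=macswap --auto-start

The exact EAL options will depend on the guest's hugepage and device setup.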

Regards,
Tiago.

On 04/10/2018 21:06, Onkar Pednekar wrote:
> Hi Tiago,
> 
> Thanks for your reply.
> 
> Below are the answers to your questions in-line.
> 
> 
> On Thu, Oct 4, 2018 at 4:07 AM Lam, Tiago  > wrote:
> 
> Hi Onkar,
> 
> Thanks for your email. Your setup isn't very clear to me, so a few
> queries in-line.
> 
> On 04/10/2018 06:06, Onkar Pednekar wrote:
> > Hi,
> >
> > I have been experimenting with OVS DPDK on 1G interfaces. The
> system has
> > 8 cores (hyperthreading enabled) mix of dpdk and non-dpdk capable
> ports,
> > but the data traffic runs only on dpdk ports.
> >
> > DPDK ports are backed by vhost user netdev and I have configured the
> > system so that hugepages are enabled, CPU cores isolated with PMD
> > threads allocated to them and also pinning the VCPUs.>
> > When I run UDP traffic, I see ~ 1G throughput on dpdk interfaces
> with <
> > 1% packet loss. However, with tcp traffic, I see around 300Mbps
> > thoughput. I see that setting generic receive offload to off
> helps, but
> > still the TCP thpt is very less compared to the nic capabilities.
> I know
> > that there will be some performance degradation for TCP as against UDP
> > but this is way below expected.
> >
> 
> When transmitting traffic between the DPDK ports, what are the flows you
> have setup? Does it follow a p2p or pvp setup? In other words, does the
> traffic flow between the VM and the physical ports, or only between
> physical ports?
> 
>  
>  The traffic is between the VM and the physical ports.
> 
> 
> > I don't see any packets dropped for tcp on the internal VM (virtual)
> > interfaces.
> >
> > I would like to know if there is an settings (offloads) for the
> > interfaces or any other config I might be missing.
> 
> What is the MTU set on the DPDK ports? Both physical and vhost-user?
> 
> $ ovs-vsctl get Interface [dpdk0|vhostuserclient0] mtu
> 
> 
> MTU set on physical ports = 2000 
> MTU set on vhostuser ports = 1500
> 
> 
> This will help to clarify some doubts around your setup first.
> 
> Tiago.
> 
> >
> > Thanks,
> > Onkar
> >
> >
> > ___
> > discuss mailing list
> > disc...@openvswitch.org 
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >
> 
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS DPDK performance for TCP traffic versus UDP

2018-10-04 Thread Onkar Pednekar
Hi Michael,

Thanks for your reply. Below are the answers to your questions inline.

On Thu, Oct 4, 2018 at 8:01 AM Michael Richardson  wrote:

>
> Onkar Pednekar  wrote:
> > I have been experimenting with OVS DPDK on 1G interfaces. The system
> > has 8 cores (hyperthreading enabled) mix of dpdk and non-dpdk capable
> > ports, but the data traffic runs only on dpdk ports.
>
> > DPDK ports are backed by vhost user netdev and I have configured the
> > system so that hugepages are enabled, CPU cores isolated with PMD
> > threads allocated to them and also pinning the VCPUs.
>
> > When I run UDP traffic, I see ~ 1G throughput on dpdk interfaces
> with <
> > 1% packet loss. However, with tcp traffic, I see around 300Mbps
>
> What size packet?
>
UDP packet size = 1300 bytes
TCP packet size = the default packet size used by iperf; the physical dpdk
port MTU is 2000, the dpdk vhost-user port MTU is 1500, and the client
running iperf also has a 1500-MTU interface, so I guess the packet size
would be 1500 for TCP.

> What's your real pps?
UDP: dpdk and dpdkvhost user interfaces show ~ 803Kpps
TCP: dpdk and dpdkvhost user interfaces show ~ 225Kpps

> What do you do for test traffic?
Client and server machines run iperf to generate TCP and UDP traffic,
with 6 client threads running in parallel (-P 6 option with iperf).
Client Commands:
TCP: iperf -c  -P 6 -t 90
UDP: iperf -c  -u -l1300 -b 180M -P 6 -t 90

> What is your latency?  Are there queues full?
How can I check this?


> Are you layer-2 switching or layer-3 routing, or something exotic?
>
OVS contains a mix of L2 and L3 flows, but the (TCP/UDP) traffic path uses
L2 switching.

>
> > thoughput. I see that setting generic receive offload to off helps,
> but
> > still the TCP thpt is very less compared to the nic capabilities.  I
> > know that there will be some performance degradation for TCP as
> against
> > UDP but this is way below expected.
>
> Receive offload should only help if you are terminating the TCP flows.
> I could well see that it would affect a switching situation significantly.
> What are you using for TCP flow generation?  Are you running real TCP
> stacks with window calculations and back-off, etc?  Is your offered load
> actually going up?
>
I am using iperf to generate traffic between client and server with stable
workload.

>
> > I don't see any packets dropped for tcp on the internal VM (virtual)
> > interfaces.
>
> ?virtual?
> I don't understand: do you have senders/receivers on the machine under
> test?
>
By virtual I meant the dpdk vhost-user interfaces. iperf is running on
client and server machines external to the machine (with OVS) under test.

Topology:
[image: image.png]


>
> --
> ]   Never tell me the odds! | ipv6 mesh
> networks [
> ]   Michael Richardson, Sandelman Software Works| network
> architect  [
> ] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on
> rails[
>
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS DPDK performance for TCP traffic versus UDP

2018-10-04 Thread Onkar Pednekar
Hi Tiago,

Thanks for your reply.

Below are the answers to your questions in-line.


On Thu, Oct 4, 2018 at 4:07 AM Lam, Tiago  wrote:

> Hi Onkar,
>
> Thanks for your email. Your setup isn't very clear to me, so a few
> queries in-line.
>
> On 04/10/2018 06:06, Onkar Pednekar wrote:
> > Hi,
> >
> > I have been experimenting with OVS DPDK on 1G interfaces. The system has
> > 8 cores (hyperthreading enabled) mix of dpdk and non-dpdk capable ports,
> > but the data traffic runs only on dpdk ports.
> >
> > DPDK ports are backed by vhost user netdev and I have configured the
> > system so that hugepages are enabled, CPU cores isolated with PMD
> > threads allocated to them and also pinning the VCPUs.>
> > When I run UDP traffic, I see ~ 1G throughput on dpdk interfaces with <
> > 1% packet loss. However, with tcp traffic, I see around 300Mbps
> > thoughput. I see that setting generic receive offload to off helps, but
> > still the TCP thpt is very less compared to the nic capabilities. I know
> > that there will be some performance degradation for TCP as against UDP
> > but this is way below expected.
> >
>
> When transmitting traffic between the DPDK ports, what are the flows you
> have setup? Does it follow a p2p or pvp setup? In other words, does the
> traffic flow between the VM and the physical ports, or only between
> physical ports?
>

 The traffic is between the VM and the physical ports.


> > I don't see any packets dropped for tcp on the internal VM (virtual)
> > interfaces.
> >
> > I would like to know if there is an settings (offloads) for the
> > interfaces or any other config I might be missing.
>
> What is the MTU set on the DPDK ports? Both physical and vhost-user?
>
> $ ovs-vsctl get Interface [dpdk0|vhostuserclient0] mtu
>

MTU set on physical ports = 2000
MTU set on vhostuser ports = 1500


> This will help to clarify some doubts around your setup first.
>
> Tiago.
>
> >
> > Thanks,
> > Onkar
> >
> >
> > ___
> > discuss mailing list
> > disc...@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS DPDK performance for TCP traffic versus UDP

2018-10-04 Thread Michael Richardson

Onkar Pednekar  wrote:
> I have been experimenting with OVS DPDK on 1G interfaces. The system
> has 8 cores (hyperthreading enabled) mix of dpdk and non-dpdk capable
> ports, but the data traffic runs only on dpdk ports.

> DPDK ports are backed by vhost user netdev and I have configured the
> system so that hugepages are enabled, CPU cores isolated with PMD
> threads allocated to them and also pinning the VCPUs.

> When I run UDP traffic, I see ~ 1G throughput on dpdk interfaces with <
> 1% packet loss. However, with tcp traffic, I see around 300Mbps

What size packet?
What's your real pps?
What do you do for test traffic?
What is your latency?  Are there queues full?
Are you layer-2 switching or layer-3 routing, or something exotic?
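
For the queue question, a rough first look on the OVS side (assuming the
standard ovs-appctl/ovs-vsctl tooling, with dpdk0 as a placeholder port
name) would be:

  ovs-appctl dpif-netdev/pmd-stats-show     # per-PMD cycle/processing stats
  ovs-vsctl get Interface dpdk0 statistics  # rx/tx and drop counters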

> thoughput. I see that setting generic receive offload to off helps, but
> still the TCP thpt is very less compared to the nic capabilities.  I
> know that there will be some performance degradation for TCP as against
> UDP but this is way below expected.

Receive offload should only help if you are terminating the TCP flows.
I could well see that it would affect a switching situation significantly.
What are you using for TCP flow generation?  Are you running real TCP
stacks with window calculations and back-off, etc?  Is your offered load
actually going up?

> I don't see any packets dropped for tcp on the internal VM (virtual)
> interfaces.

?virtual?
I don't understand: do you have senders/receivers on the machine under test?

--
]   Never tell me the odds! | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works| network architect  [
] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails[




___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS DPDK performance for TCP traffic versus UDP

2018-10-04 Thread Lam, Tiago
Hi Onkar,

Thanks for your email. Your setup isn't very clear to me, so a few
queries in-line.

On 04/10/2018 06:06, Onkar Pednekar wrote:
> Hi,
> 
> I have been experimenting with OVS DPDK on 1G interfaces. The system has
> 8 cores (hyperthreading enabled) mix of dpdk and non-dpdk capable ports,
> but the data traffic runs only on dpdk ports.
> 
> DPDK ports are backed by vhost user netdev and I have configured the
> system so that hugepages are enabled, CPU cores isolated with PMD
> threads allocated to them and also pinning the VCPUs.
> When I run UDP traffic, I see ~ 1G throughput on dpdk interfaces with <
> 1% packet loss. However, with TCP traffic, I see around 300Mbps
> throughput. I see that setting generic receive offload to off helps, but
> still the TCP throughput is much lower than the NIC's capabilities. I know
> that there will be some performance degradation for TCP compared to UDP,
> but this is way below what is expected.
> 

When transmitting traffic between the DPDK ports, what are the flows you
have setup? Does it follow a p2p or pvp setup? In other words, does the
traffic flow between the VM and the physical ports, or only between
physical ports?

> I don't see any packets dropped for tcp on the internal VM (virtual)
> interfaces.
> 
> I would like to know if there are any settings (offloads) for the
> interfaces or any other config I might be missing.

What is the MTU set on the DPDK ports? Both physical and vhost-user?

$ ovs-vsctl get Interface [dpdk0|vhostuserclient0] mtu
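
(And if they turn out to be mismatched, mtu_request can be used to align
them; the port names below are just placeholders:)

$ ovs-vsctl set Interface dpdk0 mtu_request=2000
$ ovs-vsctl set Interface vhostuserclient0 mtu_request=2000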

This will help to clarify some doubts around your setup first.

Tiago.

> 
> Thanks,
> Onkar
> 
> 
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> 
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk TSO offload feature

2018-09-12 Thread Ian Stokes

On 9/12/2018 7:02 AM, multi_task wrote:

Hi,

What is the current state of the ovs-dpdk TSO offload feature? Is
this planned any time soon?




There is ongoing work to enable TSO for OVS with DPDK.

There are patches available that are under review currently but have not 
been upstreamed to master yet.


If you are interested in testing, you will need to first apply the
multi-segment patch set, available at the link below.


https://patchwork.ozlabs.org/project/openvswitch/list/?series=62384

This is a requirement for enabling TSO.

After applying the multi-segment series you should then apply the TSO
patches available at the link below (note this patch series is RFC, so it is
still under review and testing by the community).


https://patchwork.ozlabs.org/project/openvswitch/list/?series=59913
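
As a sketch, assuming each series is downloaded as an mbox bundle from the
patchwork links above (the file names below are hypothetical), applying them
on top of master would look roughly like:

  git checkout master
  git am multi-segment-series.mbox   # apply the multi segment series first
  git am tso-rfc-series.mbox         # then the RFC TSO series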

Any feedback is welcome.

Thanks
Ian


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker

2018-08-21 Thread 张广明
size=size@entry=6272,
> align=align@entry=0) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/rte_malloc.c:135
> #5  0x006bec48 in vhost_new_device () at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/vhost.c:311
> #6  0x006bd685 in vhost_user_add_connection (fd=fd@entry=66,
> vsocket=vsocket@entry=0x1197560) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/socket.c:224
> #7  0x006bdbf6 in vhost_user_server_new_connection (fd=66, 
> fd@entry=54,
> dat=dat@entry=0x1197560, remove=remove@entry=0x7fbbafffe9dc) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/socket.c:284
> #8  0x006bc48c in fdset_event_dispatch (arg=0xc1ace0
> ) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/fd_man.c:308
> #9  0x7fbc450fee25 in start_thread () from /usr/lib64/libpthread.so.0
> #10 0x7fbc446e134d in clone () from /usr/lib64/libc.so.6
> (gdb) fr 0
> #0  0x00443c9c in find_suitable_element (bound=0, align=64,
> flags=0, size=6272, heap=0x7fbc461f2a1c) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/malloc_heap.c:134
> 134 if (check_hugepage_sz(flags, elem->ms->hugepage_sz))
> (gdb) p elem->ms
> $1 = (const struct rte_memseg *) 0x7fa4f3ebb01c
> (gdb) p *elem->ms
> Cannot access memory at address 0x7fa4f3ebb01c
> (gdb) p *elem
> $2 = {heap = 0x7fa4f3eeda1c, prev = 0x0, free_list = {le_next = 0x0,
> le_prev = 0x7fa4f3eeda7c}, ms = 0x7fa4f3ebb01c, state = ELEM_FREE, pad = 0,
> size = 1073439232}
> (gdb)  disassemble 0x00443c9c
> Dump of assembler code for function malloc_heap_alloc:
> => 0x00443c9c <+156>: mov0x18(%rax),%rax
>0x00443ca0 <+160>: test   %r15d,%r15d
>0x00443ca3 <+163>: je 0x443d7c 
>0x00443ca9 <+169>: cmp$0x1000,%rax
>0x00443caf <+175>: je 0x443d25 
> ---Type  to continue, or q  to quit---q
> Quit
> (gdb) info reg rax
> rax0x7fa4f3ebb01c 140346443673628
>
Is the dpdk-socket-mem too small?
>
> Thanks
>
>
>
> O Mahony, Billy  于2018年8月21日周二 下午4:17写道:
>
>> Hi,
>>
>>
>>
>> One thing to look out for with DPDK < 18.05 is that you need to used 1GB
>> huge pages (and no more than eight of them) to use virtio. I’m not sure if
>> that is the issue you have as I think it I don’t remember it causing a seg
>> fault. But is certainly worth checking.
>>
>>
>>
>> If that does not work please send the info Ciara refers to as well as the
>> ovs-vsctl interface config for the ovs vhost backend.
>>
>>
>>
>> Thanks,
>>
>> Billy
>>
>>
>>
>> *From:* ovs-discuss-boun...@openvswitch.org [mailto:
>> ovs-discuss-boun...@openvswitch.org] *On Behalf Of *Loftus, Ciara
>> *Sent:* Tuesday, August 21, 2018 9:06 AM
>> *To:* gmzhan...@gmail.com; ovs-discuss@openvswitch.org
>> *Cc:* us...@dpdk.org
>> *Subject:* Re: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker
>>
>>
>>
>> Hi,
>>
>>
>>
>> I am cc-ing the DPDK users’ list as the SEGV originates in the DPDK vHost
>> code and somebody there might be able to help too.
>>
>> Could you provide more information about your environment please? eg. OVS
>> & DPDK versions, hugepage configuration, etc.
>>
>>
>>
>> Thanks,
>>
>> Ciara
>>
>>
>>
>> *From:* ovs-discuss-boun...@openvswitch.org [
>> mailto:ovs-discuss-boun...@openvswitch.org
>> ] *On Behalf Of *???
>> *Sent:* Monday, August 20, 2018 12:06 PM
>> *To:* ovs-discuss@openvswitch.org
>> *Subject:* [ovs-discuss] ovs-dpdk crash when use vhost-user in docker
>>
>>
>>
>> Hi,
>>
>>
>>
>>I used ovs-dpdk  as bridge  and l2fwd  as container. When l2fwd was
>> runned ,the ovs-dpdk was crashed.
>>
>>
>>
>> My command is :
>>
>>
>>
>> docker run -it --privileged --name=dpdk-docker  -v
>> /dev/hugepages:/mnt/huge -v
>> /usr/local/var/run/openvswitch:/var/run/openvswitch dpdk-docker
>>
>> ./l2fwd -c 0x06 -n 4  --socket-mem=1024  --no-pci
>> --vdev=net_virtio_user0,mac=00:00:00:00:00:05,path=/var/run/openvswitch/vhost-user0
>>  
>> --vdev=net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1
>> -- -p 0x3
>>
>>
>>
>> The crash log
>>
>>
>>
>> Program terminated with signal 11, Segmentation fault.
>>
>> #0  0x00445828 in malloc_elem_alloc ()
>>
>> Missing separate debuginfos, use: debuginfo-install
>> glibc-2.17-196.el7_4.2.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
>> krb5-libs-1.15.1-8.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64
>> libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7_4.1.x86_64
>> libpcap-1.5.3-9.el7.x86_64 libselinux-2.5-12.el7.x86_64
>> numactl-libs-2.0.9-6.el7_2.x86_64 openssl-libs-1.0.2k-8.el7.x86_64
>> pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64
>>
>> (gdb) bt
>>
>> #0  0x00445828 in malloc_elem_alloc ()
>>
>> #1  0x00445e5d in malloc_heap_alloc ()
>>
>> #2  0x00444c74 in rte_zmalloc ()
>>
>> #3  0x006c16bf in vhost_new_device ()
>>
>> #4  0x006bfaf4 in vhost_user_add_connection ()
>>
>> #5  0x006beb88 in fdset_event_dispatch ()
>>
>> #6  0x7f613b288e25 in start_thread () from /usr/lib64/libpthread.so.0
>>
>> #7  0x7f613a86b34d in clone () from /usr/lib64/libc.so.6
>>
>>
>>
>> My OVS  version is 2.9.1 , DPDK version is 17.11.3
>>
>>
>>
>>
>>
>> Thanks
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker

2018-08-21 Thread 张广明
mzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/fd_man.c:308
#9  0x7fbc450fee25 in start_thread () from /usr/lib64/libpthread.so.0
#10 0x7fbc446e134d in clone () from /usr/lib64/libc.so.6
(gdb) fr 0
#0  0x00443c9c in find_suitable_element (bound=0, align=64,
flags=0, size=6272, heap=0x7fbc461f2a1c) at
/home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/malloc_heap.c:134
134 if (check_hugepage_sz(flags, elem->ms->hugepage_sz))
(gdb) p elem->ms
$1 = (const struct rte_memseg *) 0x7fa4f3ebb01c
(gdb) p *elem->ms
Cannot access memory at address 0x7fa4f3ebb01c
(gdb) p *elem
$2 = {heap = 0x7fa4f3eeda1c, prev = 0x0, free_list = {le_next = 0x0,
le_prev = 0x7fa4f3eeda7c}, ms = 0x7fa4f3ebb01c, state = ELEM_FREE, pad = 0,
size = 1073439232}
(gdb)  disassemble 0x00443c9c
Dump of assembler code for function malloc_heap_alloc:
=> 0x00443c9c <+156>: mov0x18(%rax),%rax
   0x00443ca0 <+160>: test   %r15d,%r15d
   0x00443ca3 <+163>: je 0x443d7c 
   0x00443ca9 <+169>: cmp$0x1000,%rax
   0x00443caf <+175>: je 0x443d25 
---Type  to continue, or q  to quit---q
Quit
(gdb) info reg rax
rax0x7fa4f3ebb01c 140346443673628

Is the dpdk-socket-mem too small?

Thanks



O Mahony, Billy  于2018年8月21日周二 下午4:17写道:

> Hi,
>
>
>
> One thing to look out for with DPDK < 18.05 is that you need to used 1GB
> huge pages (and no more than eight of them) to use virtio. I’m not sure if
> that is the issue you have as I think it I don’t remember it causing a seg
> fault. But is certainly worth checking.
>
>
>
> If that does not work please send the info Ciara refers to as well as the
> ovs-vsctl interface config for the ovs vhost backend.
>
>
>
> Thanks,
>
> Billy
>
>
>
> *From:* ovs-discuss-boun...@openvswitch.org [mailto:
> ovs-discuss-boun...@openvswitch.org] *On Behalf Of *Loftus, Ciara
> *Sent:* Tuesday, August 21, 2018 9:06 AM
> *To:* gmzhan...@gmail.com; ovs-discuss@openvswitch.org
> *Cc:* us...@dpdk.org
> *Subject:* Re: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker
>
>
>
> Hi,
>
>
>
> I am cc-ing the DPDK users’ list as the SEGV originates in the DPDK vHost
> code and somebody there might be able to help too.
>
> Could you provide more information about your environment please? eg. OVS
> & DPDK versions, hugepage configuration, etc.
>
>
>
> Thanks,
>
> Ciara
>
>
>
> *From:* ovs-discuss-boun...@openvswitch.org [
> mailto:ovs-discuss-boun...@openvswitch.org
> ] *On Behalf Of *???
> *Sent:* Monday, August 20, 2018 12:06 PM
> *To:* ovs-discuss@openvswitch.org
> *Subject:* [ovs-discuss] ovs-dpdk crash when use vhost-user in docker
>
>
>
> Hi,
>
>
>
>I used ovs-dpdk  as bridge  and l2fwd  as container. When l2fwd was
> runned ,the ovs-dpdk was crashed.
>
>
>
> My command is :
>
>
>
> docker run -it --privileged --name=dpdk-docker  -v
> /dev/hugepages:/mnt/huge -v
> /usr/local/var/run/openvswitch:/var/run/openvswitch dpdk-docker
>
> ./l2fwd -c 0x06 -n 4  --socket-mem=1024  --no-pci
> --vdev=net_virtio_user0,mac=00:00:00:00:00:05,path=/var/run/openvswitch/vhost-user0
>  
> --vdev=net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1
> -- -p 0x3
>
>
>
> The crash log
>
>
>
> Program terminated with signal 11, Segmentation fault.
>
> #0  0x00445828 in malloc_elem_alloc ()
>
> Missing separate debuginfos, use: debuginfo-install
> glibc-2.17-196.el7_4.2.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
> krb5-libs-1.15.1-8.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64
> libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7_4.1.x86_64
> libpcap-1.5.3-9.el7.x86_64 libselinux-2.5-12.el7.x86_64
> numactl-libs-2.0.9-6.el7_2.x86_64 openssl-libs-1.0.2k-8.el7.x86_64
> pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64
>
> (gdb) bt
>
> #0  0x00445828 in malloc_elem_alloc ()
>
> #1  0x00445e5d in malloc_heap_alloc ()
>
> #2  0x00444c74 in rte_zmalloc ()
>
> #3  0x006c16bf in vhost_new_device ()
>
> #4  0x006bfaf4 in vhost_user_add_connection ()
>
> #5  0x006beb88 in fdset_event_dispatch ()
>
> #6  0x7f613b288e25 in start_thread () from /usr/lib64/libpthread.so.0
>
> #7  0x7f613a86b34d in clone () from /usr/lib64/libc.so.6
>
>
>
> My OVS  version is 2.9.1 , DPDK version is 17.11.3
>
>
>
>
>
> Thanks
>
>
>
>
>
>
>
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker

2018-08-21 Thread O Mahony, Billy
Hi,

One thing to look out for with DPDK < 18.05 is that you need to use 1GB huge
pages (and no more than eight of them) to use virtio. I’m not sure if that is
the issue you have, as I don’t remember it causing a seg fault. But it
is certainly worth checking.
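
For reference, a 1GB hugepage setup along those lines usually involves the
kernel boot parameters plus the OVS socket-mem setting, e.g. (values are
only an example):

  # kernel command line (e.g. via GRUB), then reboot
  default_hugepagesz=1G hugepagesz=1G hugepages=8

  # amount of hugepage memory OVS-DPDK grabs per NUMA socket
  ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024"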

If that does not work please send the info Ciara refers to as well as the 
ovs-vsctl interface config for the ovs vhost backend.

Thanks,
Billy

From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Loftus, Ciara
Sent: Tuesday, August 21, 2018 9:06 AM
To: gmzhan...@gmail.com; ovs-discuss@openvswitch.org
Cc: us...@dpdk.org
Subject: Re: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker

Hi,

I am cc-ing the DPDK users’ list as the SEGV originates in the DPDK vHost code 
and somebody there might be able to help too.
Could you provide more information about your environment please? eg. OVS & 
DPDK versions, hugepage configuration, etc.

Thanks,
Ciara

From: 
ovs-discuss-boun...@openvswitch.org<mailto:ovs-discuss-boun...@openvswitch.org> 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of ???
Sent: Monday, August 20, 2018 12:06 PM
To: ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>
Subject: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker

Hi,

   I used ovs-dpdk as the bridge and l2fwd as the container. When l2fwd was
run, ovs-dpdk crashed.

My command is :

docker run -it --privileged --name=dpdk-docker  -v /dev/hugepages:/mnt/huge 
-v /usr/local/var/run/openvswitch:/var/run/openvswitch dpdk-docker

./l2fwd -c 0x06 -n 4  --socket-mem=1024  --no-pci 
--vdev=net_virtio_user0,mac=00:00:00:00:00:05,path=/var/run/openvswitch/vhost-user0
  
--vdev=net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1
 -- -p 0x3



The crash log



Program terminated with signal 11, Segmentation fault.

#0  0x00445828 in malloc_elem_alloc ()

Missing separate debuginfos, use: debuginfo-install 
glibc-2.17-196.el7_4.2.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 
krb5-libs-1.15.1-8.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64 
libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7_4.1.x86_64 
libpcap-1.5.3-9.el7.x86_64 libselinux-2.5-12.el7.x86_64 
numactl-libs-2.0.9-6.el7_2.x86_64 openssl-libs-1.0.2k-8.el7.x86_64 
pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64

(gdb) bt

#0  0x00445828 in malloc_elem_alloc ()

#1  0x00445e5d in malloc_heap_alloc ()

#2  0x00444c74 in rte_zmalloc ()

#3  0x006c16bf in vhost_new_device ()

#4  0x006bfaf4 in vhost_user_add_connection ()

#5  0x006beb88 in fdset_event_dispatch ()

#6  0x7f613b288e25 in start_thread () from /usr/lib64/libpthread.so.0

#7  0x7f613a86b34d in clone () from /usr/lib64/libc.so.6



My OVS  version is 2.9.1 , DPDK version is 17.11.3





Thanks






___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK

2018-06-04 Thread Pedro Serrano
Flavio,

Thanks.  That's what I suspected.   I guess if too many packets that are
meant to be managed by DPDK get exposed to the kernel space via a TAP
interface then the speed gains would be lost.

I've also found that solutions to manage Layer 3 for DPDK devices are not
part of OVS.  Either OVN or some Layer 3 NOS compatible with OVS must be
used.

Regards

Pedro

Evolution - “go from nothing to something, by chance...” - Dr Raymond
Damadian, Inventor of the MRI
Full quote:
“If you want to insist that you know how creation came about without the
existence of our maker you have to explain how to violate all the laws of
physics and go from nothing to something, by chance.”

Pedro Serrano (send all replies to k...@gmx.us)

On Mon, May 28, 2018 at 9:55 AM, Flavio Leitner  wrote:

> On Fri, May 25, 2018 at 05:18:08PM -0400, Pedro Serrano wrote:
> > Greetings,
> >
> > My computer is setup using Ubuntu 18.04 with OVS 2.90 and dpdk 17.11.2-1.
> >  I'm attempting to create a few virtual ports (type dkdkvhostuserclient)
> on
> > my OVS bridge (datapath_type=netdev) and assign an IP addresses to them.
> >
> > ovs-vsctl add-br ovdkibr1 -- set bridge ovdkibr1 datapath_type=netdev
> > ovs-vsctl set Bridge ovdkibr1 stp_enable=false
> > ovs-vsctl add-port ovdkibr1 ovdkibr1p1 -- set Interface ovdkibr1p1
> > type=dpdkvhostuserclient mtu_request=8996
> > ovs-vsctl add-port ovdkibr1 ovdkibr1p2 -- set Interface ovdkibr1p2
> > type=dpdkvhostuserclient mtu_request=8996
> >
> > I've noticed that those virtual devices are not visible in the kernel
> space
> > (unlike type=tap or type=dpdk).   Since they are not visible to the
> kernel
> > they can't be configured using the "ip address add" command.
> >
> > Since the iproute2 tools can't see these devices, is there any way to
> > assign IP addresses, VLANs, etc to these devices?
>
> The bridge port is a tap device visible to the kernel and you could
> add more ports of type=internal which will be the same thing.
>
> However, we try to not mix the datapaths because it will slow down
> both if packets are crossing from one to another.
>
> --
> Flavio
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK

2018-05-28 Thread Flavio Leitner
On Fri, May 25, 2018 at 05:18:08PM -0400, Pedro Serrano wrote:
> Greetings,
> 
> My computer is set up using Ubuntu 18.04 with OVS 2.90 and dpdk 17.11.2-1.
>  I'm attempting to create a few virtual ports (type dpdkvhostuserclient) on
> my OVS bridge (datapath_type=netdev) and assign IP addresses to them.
> 
> ovs-vsctl add-br ovdkibr1 -- set bridge ovdkibr1 datapath_type=netdev
> ovs-vsctl set Bridge ovdkibr1 stp_enable=false
> ovs-vsctl add-port ovdkibr1 ovdkibr1p1 -- set Interface ovdkibr1p1
> type=dpdkvhostuserclient mtu_request=8996
> ovs-vsctl add-port ovdkibr1 ovdkibr1p2 -- set Interface ovdkibr1p2
> type=dpdkvhostuserclient mtu_request=8996
> 
> I've noticed that those virtual devices are not visible in the kernel space
> (unlike type=tap or type=dpdk).   Since they are not visible to the kernel
> they can't be configured using the "ip address add" command.
> 
> Since the iproute2 tools can't see these devices, is there any way to
> assign IP addresses, VLANs, etc to these devices?

The bridge port is a tap device visible to the kernel and you could
add more ports of type=internal which will be the same thing.

However, we try to not mix the datapaths because it will slow down
both if packets are crossing from one to another.
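
As a sketch, adding such an internal port and configuring it from the
kernel side would look something like (port name and address are only
examples):

  ovs-vsctl add-port ovdkibr1 int0 -- set Interface int0 type=internal
  ip addr add 192.168.10.1/24 dev int0
  ip link set int0 up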

-- 
Flavio

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK vm-vm performance

2018-05-22 Thread Krish
Pradeep

I have tried a few tests, trying to turn on TSO, but according to my
observation, ovs-dpdk doesn't support TSO offloading.

You can use this test plan as a reference:
http://dpdk-test-plans.readthedocs.io/en/latest/vhost_tso_test_plan.html
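
For what it's worth, the offload state inside the guest can be checked and
toggled with ethtool, e.g. (the interface name is a placeholder):

  ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-receive-offload'
  ethtool -K eth0 gro off   # example: turn generic receive offload off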

Thanks

On Tue, May 22, 2018 at 1:39 AM, Pradeep K.S 
wrote:

> Hi,
>
> I switched from OVS to OVS-DPDK and changed the VNFs to use the vhost-user
> backend. I used iperf to compare the performance of the vhost-net and
> vhost-user backends. Despite tuning on all fronts [more queues,
> more pmd threads, chaining affinities, socket memory, increased cpus] I get
> the results below,
>
> 1) VM-VM: lower throughput than vhost-net, a lot less (1/4 of vhost-net).
> After googling I found a few links which pointed to TSO offload being needed
> in OVS-DPDK, but
> couldn't find an option in OVS-DPDK to change that.
>
> 2) VM->PNIC->VM: Slightly better performance, not much.
>
> I can look further on tuning, Anyone faced the same issue and any pointers
> would be helpful.
>
>
>
> --
> Thanks and Regards,
> Pradeep.K.S.
>
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk performance not stable

2018-04-23 Thread michael me
Hi Ian and everyone,

Thank you for clarifying, I was just trying to understand :)

My bad about the 1 queue, though; I changed it to two queues and still the
performance was poor, around 60mpps.
My findings are:
1. I changed the PMD to a core where no other services were running.
2. I added many queues (around 64).
3. After a few tests I could see that I would lose traffic; I would reset
the VM and then again I would be able to get double the throughput for about
a test or two (each test is 3 min).

Thank you for answering,
Michael



On Fri, Apr 20, 2018 at 12:25 PM, Mooney, Sean K <sean.k.moo...@intel.com>
wrote:

>
>
>
>
> *From:* Stokes, Ian
> *Sent:* Thursday, April 19, 2018 9:51 PM
> *To:* michael me <1michaelmesgu...@gmail.com>
> *Cc:* ovs-discuss@openvswitch.org; Mooney, Sean K <sean.k.moo...@intel.com
> >
> *Subject:* RE: [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Michael,
>
>
>
> “It will be split between queues based on des tip so it’s important that
> test traffic varies if you want traffic to be dispersed evenly among the
> queues at this level."
>
>
>
> “Des tip” should be destination IP (apologies, I replied before having a
> morning coffee ☺).
>
>
>
> By varying the traffic I mean changing the destination IP, if using the
> same IP I believe the rss hash will evaluate to the same queue  on the NIC.
>
>
>
> I’m not an expert on Openstack so I’m not too sure how to enable multi
> queue for vhost interfaces in that case.
>
>
>
> @ Sean (cc’d): Is there a specific way to enable vhost multi queue for
> open stack?
>
> *[Mooney, Sean K] yes to enable vhost multi queue in openstack you need to
> set an image metadata key to request it. that will result in 1 queue per
> vCPU of the guest.*
>
> *The key should be defiend here
> https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json
> <https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json>
> but its missing the key you want to add*
>
> *hw_vif_mutliqueue_enabled its is documented here
> https://docs.openstack.org/neutron/pike/admin/config-ovs-dpdk.html
> <https://docs.openstack.org/neutron/pike/admin/config-ovs-dpdk.html>. I
> should probably open a bug to add it to the glance*
>
> *default metadata refs.*
>
>
>
> I haven’t run MQ with a single PMD, so I’m not sure why you have better
> performance. Leave this with me to investigate further. I suspect as you
> have multiple queues more traffic is enqueued at the NIC leval
>
> *[Mooney, Sean K] for kernel virtio-net in the guest I belive there is a
> performance improvement due to reduction in intenal contentiuon from locks
> in the guset kernel networking stack but with dpdk in the guest I think the*
>
> *Perfromace would normally be the same however if the bottleneck you are
> hitting is on vswitch tx to the guest then perhaps that will also benefit
> form multiqueu howver unless you guest has more queues/cores then host *
>
> *pmds you would still have to use spin locks in the vhost pmd as you
> clould not setup a 1:1 pmd mapping to allow lockless enqueue in to the
> guest.*
>
>
>
> The problem with only 1 queue for the VM is that is creates a bottleneck
> in terms of transmitting traffic from the host to the VM (in your case 8
> queues trying to enqueue to 1 queue).
>
>
>
> How are you isolating core 0? Are you using isolcpus? Normally I would
> suggest isolating core 2 (i.e. the pmd core) with isolcpu.
>
>
>
> When you say you set txq =1 , why is that?
>
>
>
> Typically txq is set automatically, it will be number of PMDs +1 (in your
> case 2 txqs in total). The +1 is to account for traffic from kernel space.
>
>
>
> Thanks
>
> Ian
>
>
>
> *From:* michael me [mailto:1michaelmesgu...@gmail.com
> <1michaelmesgu...@gmail.com>]
> *Sent:* Thursday, April 19, 2018 7:12 PM
> *To:* Stokes, Ian <ian.sto...@intel.com>
> *Cc:* ovs-discuss@openvswitch.org
> *Subject:* Re: [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Ian,
>
>
>
> Thank you for you answers!
>
>
>
> it is correct that i am using ovs-vsctl set Interface dpdk0
> options:n_rxq=8 commands for the queues.
>
> Could you please expand on the sentence "  It will be split between
> queues based on des tip so it’s important that test traffic varies if you
> want traffic to be dispersed evenly among the queues at this level."
>
> It might be a typo, or i might just not know what you mean by "des tip",
> could you please clarify for me?
>
> Additionally, what do you mean by varying the traffic? do you mean to
> somehow

Re: [ovs-discuss] ovs-dpdk performance not stable

2018-04-20 Thread Mooney, Sean K


From: Stokes, Ian
Sent: Thursday, April 19, 2018 9:51 PM
To: michael me <1michaelmesgu...@gmail.com>
Cc: ovs-discuss@openvswitch.org; Mooney, Sean K <sean.k.moo...@intel.com>
Subject: RE: [ovs-discuss] ovs-dpdk performance not stable

Hi Michael,

“It will be split between queues based on des tip so it’s important that test 
traffic varies if you want traffic to be dispersed evenly among the queues at 
this level."

“Des tip” should be destination IP (apologies, I replied before having a 
morning coffee ☺).

By varying the traffic I mean changing the destination IP, if using the same IP 
I believe the rss hash will evaluate to the same queue  on the NIC.

I’m not an expert on Openstack so I’m not too sure how to enable multi queue 
for vhost interfaces in that case.

@ Sean (cc’d): Is there a specific way to enable vhost multi queue for open 
stack?
[Mooney, Sean K] Yes, to enable vhost multi-queue in OpenStack you need to set
an image metadata key to request it. That will result in 1 queue per vCPU of
the guest.
The key should be defined here
https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json
but it is missing the key you want to add,
hw_vif_multiqueue_enabled. It is documented here
https://docs.openstack.org/neutron/pike/admin/config-ovs-dpdk.html. I should
probably open a bug to add it to the glance
default metadata refs.
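
As a sketch, with the openstack client the property would be set on the
image along these lines ('my-image' is a placeholder image name):

  openstack image set --property hw_vif_multiqueue_enabled=true my-image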

I haven’t run MQ with a single PMD, so I’m not sure why you have better 
performance. Leave this with me to investigate further. I suspect as you have 
multiple queues more traffic is enqueued at the NIC leval
[Mooney, Sean K] For kernel virtio-net in the guest I believe there is a
performance improvement due to a reduction in internal contention from locks in
the guest kernel networking stack, but with dpdk in the guest I think the
performance would normally be the same. However, if the bottleneck you are
hitting is on vswitch tx to the guest, then perhaps that will also benefit from
multiqueue. However, unless your guest has more queues/cores than host
pmds, you would still have to use spin locks in the vhost pmd, as you could not
set up a 1:1 pmd mapping to allow lockless enqueue into the guest.

The problem with only 1 queue for the VM is that is creates a bottleneck in 
terms of transmitting traffic from the host to the VM (in your case 8 queues 
trying to enqueue to 1 queue).

How are you isolating core 0? Are you using isolcpus? Normally I would suggest 
isolating core 2 (i.e. the pmd core) with isolcpu.

When you say you set txq =1 , why is that?

Typically txq is set automatically, it will be number of PMDs +1 (in your case 
2 txqs in total). The +1 is to account for traffic from kernel space.

Thanks
Ian

From: michael me [mailto:1michaelmesgu...@gmail.com]
Sent: Thursday, April 19, 2018 7:12 PM
To: Stokes, Ian <ian.sto...@intel.com<mailto:ian.sto...@intel.com>>
Cc: ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>
Subject: Re: [ovs-discuss] ovs-dpdk performance not stable

Hi Ian,

Thank you for you answers!

it is correct that i am using ovs-vsctl set Interface dpdk0 options:n_rxq=8 
commands for the queues.
Could you please expand on the sentence "  It will be split between queues 
based on des tip so it’s important that test traffic varies if you want traffic 
to be dispersed evenly among the queues at this level."
It might be a typo, or i might just not know what you mean by "des tip", could 
you please clarify for me?
Additionally, what do you mean by varying the traffic? do you mean to somehow 
not have the packets at a constant frame rate?

Regarding the Vhost user queues, i am using Openstack and i did not find yet a 
way to create multiple queues (i updated the image's metadata 
hw_vif_multiqueue_enabled=true) but i don't know how to set the queue amount 
especially that in the VM that i am running i do not have ethtool.

Regarding the multiple queues while using one core for the PMD:
i did get much better performance when i had two cores for the PMD, however, i 
am not at the luxury to be able to use two cores.
It is puzzling for me that when i use multiple queues i do get better 
performance not enough but much better then when i use only one.
I am sorry but this is a confusing for me.

As for the core isolation, i have only core zero isolated for the kernel. i 
checked with htop and i saw that probably the emulatorpin of the VM might be 
running there so i moved it but it decreased performance.
when i use only n_rxq and n_txq=1 i get performance close to 60MB with 64 
packets.

Thank you again,
Michael





On Thu, Apr 19, 2018 at 11:10 AM, Stokes, Ian 
<ian.sto...@intel.com<mailto:ian.sto...@intel.com>> wrote:
Hi Michael,

So there are a few issues here we need to address.

Queues for phy devices:

I assume you have set the queues for dpdk0 and dpdk1 yourself using

ovs-vsctl set Interface dpdk0 options:n_rxq=8
ovs-vsctl set Interface dpdk0 option

Re: [ovs-discuss] ovs-dpdk performance not stable

2018-04-19 Thread michael me
Hi Ian,

Thank you for your answers!

It is correct that I am using ovs-vsctl set Interface dpdk0 options:n_rxq=8
commands for the queues.
Could you please expand on the sentence "It will be split between queues
based on des tip so it's important that test traffic varies if you want
traffic to be dispersed evenly among the queues at this level."?
It might be a typo, or I might just not know what you mean by "des tip";
could you please clarify for me?
Additionally, what do you mean by varying the traffic? Do you mean to
somehow not have the packets at a constant frame rate?

Regarding the vhost-user queues, I am using OpenStack and I have not yet
found a way to create multiple queues (I updated the image's
metadata hw_vif_multiqueue_enabled=true), but I don't know how to set the
queue count, especially since in the VM that I am running I do not have
ethtool.

Regarding the multiple queues while using one core for the PMD:
I did get much better performance when I had two cores for the PMD;
however, I do not have the luxury of being able to use two cores.
It is puzzling for me that when I use multiple queues I do get better
performance, not enough but much better than when I use only one.
I am sorry, but this is confusing for me.

As for the core isolation, I have only core zero isolated for the kernel. I
checked with htop and I saw that the emulatorpin of the VM was probably
running there, so I moved it, but that decreased performance.
When I use only n_rxq=1 and n_txq=1 I get performance close to 60MB with
64-byte packets.

Thank you again,
Michael





On Thu, Apr 19, 2018 at 11:10 AM, Stokes, Ian <ian.sto...@intel.com> wrote:

> Hi Michael,
>
>
>
> So there are a few issues here we need to address.
>
>
>
> Queues for phy devices:
>
>
>
> I assume you have set the queues for dpdk0 and dpdk1 yourself using
>
>
>
> ovs-vsctl set Interface dpdk0 options:n_rxq=8
>
> ovs-vsctl set Interface dpdk0 options:n_rxq=8
>
>
>
> Receive Side Scaling (RSS) is used to distribute ingress traffic among the
> queues on the NIC at a hardware level. It will be split between queues
> based on des tip so it’s important that test traffic varies if you want
> traffic to be dispersed evenly among the queues at this level.
>
>
>
> Vhost user queues:
>
>
>
> You do not have to set the number of queues for vhost ports with n_rxq
> since OVS 2.6 as done above but you do have to include the number of
> supported queues in the QEMU command line argument that launches the VM by
> specifying the argument queues=’Num_Queues’ for the vhost port. If using VM
> Kernel virtio interfaces within the VM you will need to enable the extra
> queues also using ethtool –L. Seeing that there is only 1 queue for your
> vhost user port I think you are missing one of these steps.
>
>
>
> PMD configuration:
>
>
>
> Since your only using 1 PMD I don’t see much point of using multiple
> queues. Typically you match the number of PMDs to the number of queues that
> you would like to ensure an even distribution.
>
> If  using 1 PMD like in your case the traffic will always be enqueued to
> queue 0 of vhost device even if there are multiple queues available. This
> is related to the implantation within OVS.
>
>
>
> As a starting point it might be easier to start with 2 PMDs and 2 rxqs for
> each phy and vhost ports that you have and ensure that works first.
>
>
>
> Also are you isolating the cores the PMD runs on? If not then processes
> could be scheduled to that core which would interrupt the PMD processing,
> this could be related to the traffic drops you see.
>
>
>
> Below is a link to a blog that discusses vhost MQ, it uses OVS 2.5 but a
> lot of the core concepts still apply even if some of the configuration
> commands may have changed
>
>
>
> https://software.intel.com/en-us/articles/configure-vhost-
> user-multiqueue-for-ovs-with-dpdk
>
>
>
> Ian
>
>
>
> *From:* michael me [mailto:1michaelmesgu...@gmail.com]
> *Sent:* Wednesday, April 18, 2018 2:23 PM
> *To:* Stokes, Ian <ian.sto...@intel.com>
> *Cc:* ovs-discuss@openvswitch.org
> *Subject:* Re: [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Ian,
>
>
>
> In the deployment i do have vhost user; below is the full output of the  
> ovs-appctl
> dpif-netdev/pmd-rxq-show  command.
>
> root@W:/# ovs-appctl dpif-netdev/pmd-rxq-show
>
> pmd thread numa_id 0 core_id 1:
>
> isolated : false
>
> port: dpdk1 queue-id: 0 1 2 3 4 5 6 7
>
> port: dpdk0 queue-id: 0 1 2 3 4 5 6 7
>
> port: vhu1cbd23fd-82queue-id: 0
>
> port: vhu018b3f01-39queue-id: 0
>
>
>
> what is strange for me and i don't under

Re: [ovs-discuss] ovs-dpdk performance not stable

2018-04-19 Thread Stokes, Ian
Hi Michael,

So there are a few issues here we need to address.

Queues for phy devices:

I assume you have set the queues for dpdk0 and dpdk1 yourself using

ovs-vsctl set Interface dpdk0 options:n_rxq=8
ovs-vsctl set Interface dpdk1 options:n_rxq=8

Receive Side Scaling (RSS) is used to distribute ingress traffic among the 
queues on the NIC at a hardware level. It will be split between queues based on 
des tip so it’s important that test traffic varies if you want traffic to be 
dispersed evenly among the queues at this level.

Vhost user queues:

You do not have to set the number of queues for vhost ports with n_rxq since 
OVS 2.6 as done above but you do have to include the number of supported queues 
in the QEMU command line argument that launches the VM by specifying the 
argument queues=’Num_Queues’ for the vhost port. If using VM Kernel virtio 
interfaces within the VM you will need to enable the extra queues also using 
ethtool –L. Seeing that there is only 1 queue for your vhost user port I think 
you are missing one of these steps.
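
As a rough example (names, socket path and queue counts are placeholders,
and the vhost-user socket path must match the OVS port), the QEMU side and
the in-guest side would look something like:

  # QEMU command line fragment enabling 2 queue pairs on a vhost-user NIC
  -chardev socket,id=char0,path=/usr/local/var/run/openvswitch/vhu0 \
  -netdev type=vhost-user,id=net0,chardev=char0,queues=2 \
  -device virtio-net-pci,netdev=net0,mq=on,vectors=6

  # inside the guest, enable the extra queue on the kernel virtio interface
  ethtool -L eth0 combined 2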

PMD configuration:

Since your only using 1 PMD I don’t see much point of using multiple queues. 
Typically you match the number of PMDs to the number of queues that you would 
like to ensure an even distribution.
If  using 1 PMD like in your case the traffic will always be enqueued to queue 
0 of vhost device even if there are multiple queues available. This is related 
to the implantation within OVS.

As a starting point it might be easier to start with 2 PMDs and 2 rxqs for each 
phy and vhost ports that you have and ensure that works first.

Also are you isolating the cores the PMD runs on? If not then processes could 
be scheduled to that core which would interrupt the PMD processing, this could 
be related to the traffic drops you see.
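
For example, the PMD core could be kept free of other work with something
like the following (core numbers are only illustrative):

  # kernel command line: keep core 2 away from the general scheduler
  isolcpus=2

  # pin the OVS PMD thread to that core (bit mask, so 0x4 = core 2)
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x4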

Below is a link to a blog that discusses vhost MQ, it uses OVS 2.5 but a lot of 
the core concepts still apply even if some of the configuration commands may 
have changed

https://software.intel.com/en-us/articles/configure-vhost-user-multiqueue-for-ovs-with-dpdk

Ian

From: michael me [mailto:1michaelmesgu...@gmail.com]
Sent: Wednesday, April 18, 2018 2:23 PM
To: Stokes, Ian <ian.sto...@intel.com>
Cc: ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] ovs-dpdk performance not stable

Hi Ian,

In the deployment i do have vhost user; below is the full output of the  
ovs-appctl dpif-netdev/pmd-rxq-show  command.
root@W:/# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 1:
isolated : false
port: dpdk1 queue-id: 0 1 2 3 4 5 6 7
port: dpdk0 queue-id: 0 1 2 3 4 5 6 7
port: vhu1cbd23fd-82queue-id: 0
port: vhu018b3f01-39queue-id: 0

what is strange for me and i don't understand is why do i have only one queue 
in the vhost side and eight on the dpdk side. i understood that qemue 
automatically had the same amount. though, i am using only one core for the VM 
and one core for the PMD.
in this setting i have eight cores in the system, is that the reason that i see 
eight possible queues?
The setup is North/South (VM to Physical network)
as for pinning the PMD, i always pin the PMD to core 1 (mask=0x2).

when i set the n_rxq and n_txq to high values (even 64 or above) i see no drops 
for around a minute or two and then suddenly bursts of drops as if the cache 
was filled. Have you seen something similar?
i tried to play with the "max-idle", but it didn't seem to help.

originally, i had a setup with 2.9 and 17.11 and i was not able to get better, 
performance but it could be that i didn't tweak as much. However, i am trying 
to deploy a setup that i can install without needing to MAKE.

Thank you for any input,
Michael

On Tue, Apr 17, 2018 at 6:28 PM, Stokes, Ian 
<ian.sto...@intel.com<mailto:ian.sto...@intel.com>> wrote:
Hi Michael,

Are you using dpdk vhostuser ports in this deployment?

I would expect to see them listed in the output of ovs-appctl 
dpif-netdev/pmd-rxq-show you posted below.

Can you describe the expected traffic flow ( Is it North/South using DPDK phy 
devices as well as vhost devices or east/west between vm interfaces only).

OVS 2.6 has the ability to isolate and pin rxq queues for dpdk devices to 
specific PMDs also. This can help provide more stable throughput and defined 
behavior. Without doing this I believe the distribution of rxqs was dealt with 
in a round robin manner which could change between deployments. This could 
explain what you are seeing i.e. sometimes the traffic runs without drops.

You could try to examine ovs-appctl dpif-netdev/pmd-rxq-show when traffic is 
dropping and then again when traffic is passing without issue. This output 
along with the flows in each case might provide a clue as to what is happening. 
If there is a difference between the two you could investigate pinning the rxqs 
to the specific setup although you will only benefit from this when have at 
least 2 PMDs instead 

Re: [ovs-discuss] ovs-dpdk performance not stable

2018-04-18 Thread michael me
Hi Ian,

In the deployment I do have vhost-user ports; below is the full output of
the ovs-appctl dpif-netdev/pmd-rxq-show command.
root@W:/# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 1:
isolated : false
port: dpdk1 queue-id: 0 1 2 3 4 5 6 7
port: dpdk0 queue-id: 0 1 2 3 4 5 6 7
port: vhu1cbd23fd-82queue-id: 0
port: vhu018b3f01-39queue-id: 0

What is strange for me, and what I don't understand, is why I have only one
queue on the vhost side and eight on the dpdk side. I understood that QEMU
automatically had the same amount, though I am using only one core for the
VM and one core for the PMD.
In this setting I have eight cores in the system; is that the reason that I
see eight possible queues?
The setup is North/South (VM to physical network).
As for pinning the PMD, I always pin the PMD to core 1 (mask=0x2).

When I set n_rxq and n_txq to high values (even 64 or above) I see no
drops for around a minute or two and then suddenly bursts of drops, as if
the cache was filled. Have you seen something similar?
I tried to play with the "max-idle" setting, but it didn't seem to help.

Originally, I had a setup with 2.9 and 17.11 and I was not able to get
better performance, but it could be that I didn't tweak as much. However, I
am trying to deploy a setup that I can install without needing to MAKE.

Thank you for any input,
Michael

On Tue, Apr 17, 2018 at 6:28 PM, Stokes, Ian  wrote:

> Hi Michael,
>
>
>
> Are you using dpdk vhostuser ports in this deployment?
>
>
>
> I would expect to see them listed in the output of ovs-appctl
> dpif-netdev/pmd-rxq-show you posted below.
>
>
>
> Can you describe the expected traffic flow ( Is it North/South using DPDK
> phy devices as well as vhost devices or east/west between vm interfaces
> only).
>
>
>
> OVS 2.6 has the ability to isolate and pin rxq queues for dpdk devices to
> specific PMDs also. This can help provide more stable throughput and
> defined behavior. Without doing this I believe the distribution of rxqs was
> dealt with in a round robin manner which could change between deployments.
> This could explain what you are seeing i.e. sometimes the traffic runs
> without drops.
>
>
>
> You could try to examine ovs-appctl dpif-netdev/pmd-rxq-show when traffic
> is dropping and then again when traffic is passing without issue. This
> output along with the flows in each case might provide a clue as to what is
> happening. If there is a difference between the two you could investigate
> pinning the rxqs to the specific setup although you will only benefit from
> this when have at least 2 PMDs instead of 1.
>
>
>
> Also OVS 2.6 and DPDK 16.07 aren’t the latest releases of OVS & DPDK, have
> you tried the same tests using the latest OVS 2.9 and DPDK 17.11?
>
>
>
> Ian
>
>
>
> *From:* ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss-bounces@
> openvswitch.org] *On Behalf Of *michael me
> *Sent:* Tuesday, April 17, 2018 10:42 AM
> *To:* ovs-discuss@openvswitch.org
> *Subject:* [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Everyone,
>
>
>
> I would greatly appreciate any input.
>
>
>
> The setting that i am working with is a host with ovs-dpdk connected to a
> VM.
>
>
>
> What i see when i do performance test is that after about a minute or two
> suddenly i have many drops as if the cache was full and was dumped
> improperly.
>
> I tried to play with the settings of the n-rxq and n_txq values, which
> helps but only probably until the cache is filled and then i have drops.
>
> The things is that sometimes, rarely, as if by chance the performance
> continues.
>
>
>
> My settings is as follows:
>
> OVS Version. 2.6.1
> DPDK Version. 16.07.2
> NIC Model. Ethernet controller: Intel Corporation Ethernet Connection I354
> (rev 03)
> pmd-cpu-mask. on core 1 mask=0x2
> lcore mask. core zeor "dpdk-lcore-mask=1"
>
>
>
> Port "dpdk0"
>
> Interface "dpdk0"
>
> type: dpdk
>
> options: {n_rxq="8", n_rxq_desc="2048", n_txq="9",
> n_txq_desc="2048"}
>
>
>
> ovs-appctl dpif-netdev/pmd-rxq-show
>
> pmd thread numa_id 0 core_id 1:
>
> isolated : false
>
> port: dpdk0 queue-id: 0 1 2 3 4 5 6 7
>
> port: dpdk1 queue-id: 0 1 2 3 4 5 6 7
>
>
>
> Thanks,
>
> Michael
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk performance not stable

2018-04-17 Thread Stokes, Ian
Hi Michael,

Are you using dpdk vhostuser ports in this deployment?

I would expect to see them listed in the output of ovs-appctl 
dpif-netdev/pmd-rxq-show you posted below.

Can you describe the expected traffic flow ( Is it North/South using DPDK phy 
devices as well as vhost devices or east/west between vm interfaces only).

OVS 2.6 has the ability to isolate and pin rxq queues for dpdk devices to 
specific PMDs also. This can help provide more stable throughput and defined 
behavior. Without doing this I believe the distribution of rxqs was dealt with 
in a round robin manner which could change between deployments. This could 
explain what you are seeing i.e. sometimes the traffic runs without drops.

You could try to examine ovs-appctl dpif-netdev/pmd-rxq-show when traffic is 
dropping and then again when traffic is passing without issue. This output 
along with the flows in each case might provide a clue as to what is happening. 
If there is a difference between the two you could investigate pinning the rxqs 
to the specific setup, although you will only benefit from this when you have at
least 2 PMDs instead of 1.
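
For reference, a sketch of what such pinning can look like on OVS 2.6+ (the core
and queue numbers below are purely illustrative and have to match your own
pmd-cpu-mask):

# run PMD threads on cores 1 and 2 instead of a single core
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
# pin dpdk0 rx queues: queue 0 -> core 1, queue 1 -> core 2
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:1,1:2"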

Also OVS 2.6 and DPDK 16.07 aren’t the latest releases of OVS & DPDK, have you 
tried the same tests using the latest OVS 2.9 and DPDK 17.11?

Ian

From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of michael me
Sent: Tuesday, April 17, 2018 10:42 AM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] ovs-dpdk performance not stable

Hi Everyone,

I would greatly appreciate any input.

The setting that i am working with is a host with ovs-dpdk connected to a VM.

What I see when I do a performance test is that after about a minute or two
I suddenly have many drops, as if the cache was full and was dumped improperly.
I tried to play with the settings of the n_rxq and n_txq values, which helps,
but probably only until the cache is filled, and then I have drops.
The thing is that sometimes, rarely, as if by chance, the performance continues.

My settings are as follows:
OVS Version. 2.6.1
DPDK Version. 16.07.2
NIC Model. Ethernet controller: Intel Corporation Ethernet Connection I354 (rev 
03)
pmd-cpu-mask. on core 1 mask=0x2
lcore mask. core zero "dpdk-lcore-mask=1"

Port "dpdk0"
Interface "dpdk0"
type: dpdk
options: {n_rxq="8", n_rxq_desc="2048", n_txq="9", 
n_txq_desc="2048"}

ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 1:
isolated : false
port: dpdk0 queue-id: 0 1 2 3 4 5 6 7
port: dpdk1 queue-id: 0 1 2 3 4 5 6 7

Thanks,
Michael
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk cannot add a dpdk mellanox port

2018-02-19 Thread Avi Cohen (A)
Olga
I'm familiar with the procedure. I can build DPDK 17.11, but I get this
compilation error with 17.08.
Here https://community.mellanox.com/thread/3545 the answer to the same error
is to install another OFED; when I install that OFED, it breaks the build of
both 17.11 and 17.08.
Best Regards
Avi


> -Original Message-
> From: Olga Shern [mailto:ol...@mellanox.com]
> Sent: Monday, 19 February, 2018 5:08 PM
> To: Avi Cohen (A); ovs-discuss@openvswitch.org
> Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> 
> Please refer to mlx5 DPDK guide: http://dpdk.org/doc/guides-
> 17.11/nics/mlx5.html
> 
> -Original Message-
> From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> Sent: Monday, February 19, 2018 3:51 PM
> To: Olga Shern ; ovs-discuss@openvswitch.org
> Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> 
> Olga
> Now I have compilation error
> /home/avi/dpdk-17.08/drivers/net/mlx5/mlx5_rxtx.h:46:32: fatal error:
> infiniband/mlx5_hw.h: No such file or directory
> 
> > -Original Message-
> > From: Avi Cohen (A)
> > Sent: Monday, 19 February, 2018 3:41 PM
> > To: 'Olga Shern'; ovs-discuss@openvswitch.org
> > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> >
> > Ohh - I've downgraded to 17.08 (from 17.11) and forgot to set this flag.
> > Checking now...
> >
> > > -Original Message-
> > > From: Olga Shern [mailto:ol...@mellanox.com]
> > > Sent: Monday, 19 February, 2018 3:26 PM
> > > To: Avi Cohen (A); ovs-discuss@openvswitch.org
> > > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> > >
> > > Did you compile DPDK with mlx5 PMD enabled?
> > >
> > > -Original Message-
> > > From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> > > Sent: Monday, February 19, 2018 3:20 PM
> > > To: Olga Shern ; ovs-discuss@openvswitch.org
> > > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> > >
> > > Thank you Olga
> > > I did specify the port name as ibv_devinfo output - but still I'm
> > > getting error
> > > msg:
> > >
> > >
> > > Feb 19 15:16:03 Pizza05 ovs-vsctl: ovs|1|vsctl|INFO|Called as
> > > ovs-vsctl -- timeout 10 add-port br-phy dpdk0 -- set Interface dpdk0
> > > type=dpdk
> > > options:dpdk-devargs=mlx5_0 Feb 19 15:16:03 Pizza05 ovs-vswitchd[29371]:
> > > ovs|00044|dpdk|ERR|EAL: Unable to find a bus for the device 'mlx5_0'
> > > Feb 19 15:16:03 Pizza05 ovs-vswitchd[29371]:
> > > ovs|00045|netdev_dpdk|WARN|Error attaching device 'mlx5_0' to DPDK
> > > ovs|00045|Feb
> > > 19 15:16:03 Pizza05 ovs-vswitchd[29371]: ovs|00046|netdev|WARN|dpdk0:
> > > could not set configuration (Invalid argument
> > >
> > > Best Regards
> > > Avi
> > >
> > > > -Original Message-
> > > > From: Olga Shern [mailto:ol...@mellanox.com]
> > > > Sent: Monday, 19 February, 2018 2:58 PM
> > > > To: Avi Cohen (A); ovs-discuss@openvswitch.org
> > > > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> > > >
> > > > Hi Avi,
> > > >
> > > > Please try the following command: ovs-vsctl --timeout 10 add-port br-phy
> > > > dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=mlx5_0
> > > >
> > > > You need to specify the port name, mlx5_0 or mlx5_1, according to the
> > > > ibv_devinfo output.
> > > >
> > > > Starting DPDK 18.02 and 17.11.2 you will be able to use PCI
> > > > address as OVS devargs parameters.
> > > >
> > > > Best Regards,
> > > > Olga
> > > >
> > > > -Original Message-
> > > > From: ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss-
> > > > boun...@openvswitch.org] On Behalf Of Avi Cohen (A)
> > > > Sent: Monday, February 19, 2018 2:33 PM
> > > > To: ovs-discuss@openvswitch.org
> > > > Subject: [ovs-discuss] ovs-dpdk cannot add a dpdk mellanox port
> > > >
> > > > Hi
> > > > I cannot add a Mellanox DPDK port, and I don't find any reference
> > > > for this for new OVS versions. I get an error message when typing
> > > > this command:
> > > > ovs-vsctl --timeout 10 add-port br-phy dpdk0 -- set Interface dpdk0
> > > > type=dpdk options:dpdk-devargs=:04:00:0
> > > > I'm running:
> > > > Open vSwitch 2.8.1; DPDK 17.08; Mellanox ConnectX-4
> > > >
> > > > Best Regards
> > > > Avi
> > > > ___
> > > > discuss mailing list
> > > > disc...@openvswitch.org
> > > > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk cannot add a dpdk mellanox port

2018-02-19 Thread Olga Shern
Please refer to mlx5 DPDK guide: 
http://dpdk.org/doc/guides-17.11/nics/mlx5.html 

-Original Message-
From: Avi Cohen (A) [mailto:avi.co...@huawei.com] 
Sent: Monday, February 19, 2018 3:51 PM
To: Olga Shern ; ovs-discuss@openvswitch.org
Subject: RE: ovs-dpdk cannot add a dpdk mellanox port

Olga
Now I have compilation error
/home/avi/dpdk-17.08/drivers/net/mlx5/mlx5_rxtx.h:46:32: fatal error: 
infiniband/mlx5_hw.h: No such file or directory

> -Original Message-
> From: Avi Cohen (A)
> Sent: Monday, 19 February, 2018 3:41 PM
> To: 'Olga Shern'; ovs-discuss@openvswitch.org
> Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> 
> Ohh - I've downgraded to 17.08 (from 17.11) and forgot to set this flag.
> Checking now...
> 
> > -Original Message-
> > From: Olga Shern [mailto:ol...@mellanox.com]
> > Sent: Monday, 19 February, 2018 3:26 PM
> > To: Avi Cohen (A); ovs-discuss@openvswitch.org
> > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> >
> > Did you compile DPDK with mlx5 PMD enabled?
> >
> > -Original Message-
> > From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> > Sent: Monday, February 19, 2018 3:20 PM
> > To: Olga Shern ; ovs-discuss@openvswitch.org
> > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> >
> > Thank you Olga
> > I did specify the port name as ibv_devinfo output - but still I'm 
> > getting error
> > msg:
> >
> >
> > Feb 19 15:16:03 Pizza05 ovs-vsctl: ovs|1|vsctl|INFO|Called as 
> > ovs-vsctl -- timeout 10 add-port br-phy dpdk0 -- set Interface dpdk0 
> > type=dpdk
> > options:dpdk-devargs=mlx5_0 Feb 19 15:16:03 Pizza05 ovs-vswitchd[29371]:
> > ovs|00044|dpdk|ERR|EAL: Unable to find a bus for the device 'mlx5_0'
> > Feb 19 15:16:03 Pizza05 ovs-vswitchd[29371]:
> > ovs|00045|netdev_dpdk|WARN|Error attaching device 'mlx5_0' to DPDK 
> > ovs|00045|Feb
> > 19 15:16:03 Pizza05 ovs-vswitchd[29371]: ovs|00046|netdev|WARN|dpdk0:
> > could not set configuration (Invalid argument
> >
> > Best Regards
> > Avi
> >
> > > -Original Message-
> > > From: Olga Shern [mailto:ol...@mellanox.com]
> > > Sent: Monday, 19 February, 2018 2:58 PM
> > > To: Avi Cohen (A); ovs-discuss@openvswitch.org
> > > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> > >
> > > Hi Avi,
> > >
> > > Please try the following command: ovs-vsctl --timeout 10 add-port br-phy
> > > dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=mlx5_0
> > >
> > > You need to specify the port name, mlx5_0 or mlx5_1, according to the
> > > ibv_devinfo output.
> > >
> > > Starting DPDK 18.02 and 17.11.2 you will be able to use PCI 
> > > address as OVS devargs parameters.
> > >
> > > Best Regards,
> > > Olga
> > >
> > > -Original Message-
> > > From: ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss- 
> > > boun...@openvswitch.org] On Behalf Of Avi Cohen (A)
> > > Sent: Monday, February 19, 2018 2:33 PM
> > > To: ovs-discuss@openvswitch.org
> > > Subject: [ovs-discuss] ovs-dpdk cannot add a dpdk mellanox port
> > >
> > > Hi
> > > I cannot add a Mellanox DPDK port, and I don't find any reference
> > > for this for new OVS versions. I get an error message when typing
> > > this command:
> > > ovs-vsctl --timeout 10 add-port br-phy dpdk0 -- set Interface dpdk0
> > > type=dpdk options:dpdk-devargs=:04:00:0
> > > I'm running:
> > > Open vSwitch 2.8.1; DPDK 17.08; Mellanox ConnectX-4
> > >
> > > Best Regards
> > > Avi
> > > ___
> > > discuss mailing list
> > > disc...@openvswitch.org
> > > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk cannot add a dpdk mellanox port

2018-02-19 Thread Avi Cohen (A)
Olga
Now I have compilation error
/home/avi/dpdk-17.08/drivers/net/mlx5/mlx5_rxtx.h:46:32: fatal error: 
infiniband/mlx5_hw.h: No such file or directory

> -Original Message-
> From: Avi Cohen (A)
> Sent: Monday, 19 February, 2018 3:41 PM
> To: 'Olga Shern'; ovs-discuss@openvswitch.org
> Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> 
> Ohh - I've downgraded to 17.08 (from 17.11) and forgot to set this flag.
> Checking now...
> 
> > -Original Message-
> > From: Olga Shern [mailto:ol...@mellanox.com]
> > Sent: Monday, 19 February, 2018 3:26 PM
> > To: Avi Cohen (A); ovs-discuss@openvswitch.org
> > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> >
> > Did you compile DPDK with mlx5 PMD enabled?
> >
> > -Original Message-
> > From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> > Sent: Monday, February 19, 2018 3:20 PM
> > To: Olga Shern ; ovs-discuss@openvswitch.org
> > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> >
> > Thank you Olga
> > I did specify the port name as ibv_devinfo output - but still I'm
> > getting error
> > msg:
> >
> >
> > Feb 19 15:16:03 Pizza05 ovs-vsctl: ovs|1|vsctl|INFO|Called as
> > ovs-vsctl -- timeout 10 add-port br-phy dpdk0 -- set Interface dpdk0
> > type=dpdk
> > options:dpdk-devargs=mlx5_0 Feb 19 15:16:03 Pizza05 ovs-vswitchd[29371]:
> > ovs|00044|dpdk|ERR|EAL: Unable to find a bus for the device 'mlx5_0'
> > Feb 19 15:16:03 Pizza05 ovs-vswitchd[29371]:
> > ovs|00045|netdev_dpdk|WARN|Error attaching device 'mlx5_0' to DPDK Feb
> > 19 15:16:03 Pizza05 ovs-vswitchd[29371]: ovs|00046|netdev|WARN|dpdk0:
> > could not set configuration (Invalid argument
> >
> > Best Regards
> > Avi
> >
> > > -Original Message-
> > > From: Olga Shern [mailto:ol...@mellanox.com]
> > > Sent: Monday, 19 February, 2018 2:58 PM
> > > To: Avi Cohen (A); ovs-discuss@openvswitch.org
> > > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> > >
> > > Hi Avi,
> > >
> > > Please try the following command: ovs-vsctl --timeout 10 add-port br-phy
> > > dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=mlx5_0
> > >
> > > You need to specify the port name, mlx5_0 or mlx5_1, according to the
> > > ibv_devinfo output.
> > >
> > > Starting DPDK 18.02 and 17.11.2 you will be able to use PCI address
> > > as OVS devargs parameters.
> > >
> > > Best Regards,
> > > Olga
> > >
> > > -Original Message-
> > > From: ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss-
> > > boun...@openvswitch.org] On Behalf Of Avi Cohen (A)
> > > Sent: Monday, February 19, 2018 2:33 PM
> > > To: ovs-discuss@openvswitch.org
> > > Subject: [ovs-discuss] ovs-dpdk cannot add a dpdk mellanox port
> > >
> > > Hi
> > > I cannot add a Mellanox DPDK port, and I don't find any reference
> > > for this for new OVS versions. I get an error message when typing
> > > this command:
> > > ovs-vsctl --timeout 10 add-port br-phy dpdk0 -- set Interface dpdk0
> > > type=dpdk options:dpdk-devargs=:04:00:0
> > > I'm running:
> > > Open vSwitch 2.8.1; DPDK 17.08; Mellanox ConnectX-4
> > >
> > > Best Regards
> > > Avi
> > > ___
> > > discuss mailing list
> > > disc...@openvswitch.org
> > > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk cannot add a dpdk mellanox port

2018-02-19 Thread Avi Cohen (A)
Ohh - I've downgraded to 17.08 (from 17.11) and forgot to set this flag.
Checking now...
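
For DPDK 17.08/17.11 built with the legacy make system, the flag in question is
typically CONFIG_RTE_LIBRTE_MLX5_PMD in config/common_base. A sketch, assuming a
default source tree and a matching OFED/rdma-core already installed:

# enable the mlx5 PMD before building DPDK
sed -i 's/CONFIG_RTE_LIBRTE_MLX5_PMD=n/CONFIG_RTE_LIBRTE_MLX5_PMD=y/' config/common_base
make config T=x86_64-native-linuxapp-gcc
make -j"$(nproc)"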

> -Original Message-
> From: Olga Shern [mailto:ol...@mellanox.com]
> Sent: Monday, 19 February, 2018 3:26 PM
> To: Avi Cohen (A); ovs-discuss@openvswitch.org
> Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> 
> Did you compile DPDK with mlx5 PMD enabled?
> 
> -Original Message-
> From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> Sent: Monday, February 19, 2018 3:20 PM
> To: Olga Shern ; ovs-discuss@openvswitch.org
> Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> 
> Thank you Olga
> I did specify the port name as ibv_devinfo output - but still I'm getting 
> error
> msg:
> 
> 
> Feb 19 15:16:03 Pizza05 ovs-vsctl: ovs|1|vsctl|INFO|Called as ovs-vsctl --
> timeout 10 add-port br-phy dpdk0 -- set Interface dpdk0 type=dpdk
> options:dpdk-devargs=mlx5_0 Feb 19 15:16:03 Pizza05 ovs-vswitchd[29371]:
> ovs|00044|dpdk|ERR|EAL: Unable to find a bus for the device 'mlx5_0'
> Feb 19 15:16:03 Pizza05 ovs-vswitchd[29371]:
> ovs|00045|netdev_dpdk|WARN|Error attaching device 'mlx5_0' to DPDK Feb
> 19 15:16:03 Pizza05 ovs-vswitchd[29371]: ovs|00046|netdev|WARN|dpdk0:
> could not set configuration (Invalid argument
> 
> Best Regards
> Avi
> 
> > -Original Message-
> > From: Olga Shern [mailto:ol...@mellanox.com]
> > Sent: Monday, 19 February, 2018 2:58 PM
> > To: Avi Cohen (A); ovs-discuss@openvswitch.org
> > Subject: RE: ovs-dpdk cannot add a dpdk mellanox port
> >
> > Hi Avi,
> >
> > Please try the following command: ovs-vsctl --timeout 10 add-port br-phy
> > dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=mlx5_0
> >
> > You need to specify the port name, mlx5_0 or mlx5_1, according to the
> > ibv_devinfo output.
> >
> > Starting DPDK 18.02 and 17.11.2 you will be able to use PCI address as
> > OVS devargs parameters.
> >
> > Best Regards,
> > Olga
> >
> > -Original Message-
> > From: ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss-
> > boun...@openvswitch.org] On Behalf Of Avi Cohen (A)
> > Sent: Monday, February 19, 2018 2:33 PM
> > To: ovs-discuss@openvswitch.org
> > Subject: [ovs-discuss] ovs-dpdk cannot add a dpdk mellanox port
> >
> > Hi
> > I cannot add a Mellanox DPDK port, and I don't find any reference for
> > this for new OVS versions. I get an error message when typing this command:
> > ovs-vsctl --timeout 10 add-port br-phy dpdk0 -- set Interface dpdk0
> > type=dpdk options:dpdk-devargs=:04:00:0
> > I'm running:
> > Open vSwitch 2.8.1; DPDK 17.08; Mellanox ConnectX-4
> >
> > Best Regards
> > Avi
> > ___
> > discuss mailing list
> > disc...@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovs-dpdk cannot add a dpdk mellanox port

2018-02-19 Thread Olga Shern
Hi Avi, 

Please try the following command: ovs-vsctl --timeout 10 add-port br-phy
dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=mlx5_0

You need to specify the port name, mlx5_0 or mlx5_1, according to the ibv_devinfo output.

Starting with DPDK 18.02 and 17.11.2 you will be able to use a PCI address as the OVS
devargs parameter.
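
For example (with DPDK 18.02 or 17.11.2 and later; the PCI address below is only
a placeholder):

ovs-vsctl --timeout 10 add-port br-phy dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:04:00.0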

Best Regards,
Olga

-Original Message-
From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Avi Cohen (A)
Sent: Monday, February 19, 2018 2:33 PM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] ovs-dpdk cannot add a dpdk mellanox port

Hi
I cannot add a Mellanox DPDK port, and I don't find any reference for this for
new OVS versions. I get an error message when typing this command:
ovs-vsctl --timeout 10 add-port br-phy dpdk0 -- set Interface dpdk0 type=dpdk
options:dpdk-devargs=:04:00:0
I'm running:
Open vSwitch 2.8.1; DPDK 17.08; Mellanox ConnectX-4

Best Regards
Avi
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovs-dpdk] how to check link between guest vNIC and dpdkvhost socket

2017-10-26 Thread Flavio Leitner
On Thu, 26 Oct 2017 11:06:08 +0800
徐荣杰  wrote:

> Hi,
> 
> 
> 
> I do have the memAccess property as follows for both VM
> (libvirt <numa>/<cell ... memAccess='shared'/> snippet; the XML markup was
> stripped by the list archive)


OK, then check whether the socket path is correct.
Also check that you don't have apparmor/selinux issues.
Look at the OVS logs and EAL initialization messages, which might
give you a hint.  The qemu log is also important.

Usually either the memory isn't shared, or something is wrong and
the socket is not opened/functional on one side.
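
A few hedged examples of such checks (the paths are common defaults and may
differ on your system; <domain> is a placeholder for the VM name):

ls -l /var/run/openvswitch/ | grep vhu                       # vhost-user sockets created by OVS
grep -iE 'vhost|EAL' /var/log/openvswitch/ovs-vswitchd.log   # OVS/DPDK side
grep -i vhost /var/log/libvirt/qemu/<domain>.log             # qemu side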

-- 
Flavio

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovs-dpdk] how to check link between guest vNIC and dpdkvhost socket

2017-10-26 Thread Flavio Leitner
On Thu, 26 Oct 2017 02:29:16 +
"Xu, Rongjie (NSB - CN/Hangzhou)" <rongjie...@nokia-sbell.com> wrote:

> Hi,
> 
> I do have the memAccess property as follows for both VM
> (libvirt <numa>/<cell ... memAccess='shared'/> snippet; the XML markup was
> stripped by the list archive)


Yes, but that's only one part. You also need the memoryBacking section
to be hugepages.

fbl
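
A sketch of that section of the domain XML (assuming hugepages are already
mounted and configured on the host):

<memoryBacking>
  <hugepages/>
</memoryBacking>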



> 
> Best Regards
> Xu Rongjie (Max)
> 
> -Original Message-
> From: ovs-discuss-boun...@openvswitch.org 
> [mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Flavio Leitner
> Sent: Thursday, October 26, 2017 09:50
> To: Xu, Rongjie (NSB - CN/Hangzhou) <rongjie...@nokia-sbell.com>
> Cc: ovs-discuss@openvswitch.org
> Subject: Re: [ovs-discuss] [ovs-dpdk] how to link between guest vNIC and 
> dpdkvhost socket
> 
> On Wed, 25 Oct 2017 07:36:13 +
> "Xu, Rongjie (NSB - CN/Hangzhou)" <rongjie...@nokia-sbell.com> wrote:
> 
> > Hi,
> > 
> > I have one OpenStack environment with OVS-DPDK (manually installed). I
> > succeeded in launching two VMs on two separate compute nodes with the
> > vhostuser interface type. But I cannot ping from one to the other. I found
> > there are not even any packets in the local 'br-int' bridge (where the
> > dpdkvhostuser interface is attached). My guess is that the link between the
> > guest vNIC and the dpdkvhost socket on the host side is somehow broken, but
> > I do not know how to proceed. Does anyone have any suggestions?
> 
> 
> Most probably you forgot to share the memory between host and guest.
> in the VM xml:
> 
> E.g.: (libvirt XML example; markup stripped by the list archive)
> 
> fbl
> 
> > Compute0:
> > Bridge br-int
> > Controller "tcp:127.0.0.1:6633"
> > is_connected: true
> > fail_mode: secure
> > Port "vhu2aae3928-3c"
> > tag: 1
> > Interface "vhu2aae3928-3c"
> > type: dpdkvhostuser
> > root@compute0:~# file /var/run/openvswitch/vhu2aae3928-3c
> > /var/run/openvswitch/vhu2aae3928-3c: socket
> > 
> > Compute1:
> > Bridge br-int
> > Controller "tcp:127.0.0.1:6633"
> > is_connected: true
> > fail_mode: secure
> > Port "vhuca25b420-6d"
> > tag: 1
> > Interface "vhuca25b420-6d"
> >type: dpdkvhostuser
> > root@compute1:~# file /var/run/openvswitch/vhuca25b420-6d
> > /var/run/openvswitch/vhuca25b420-6d: socket
> > 
> > 
> > When I ping, I do not even find ARP packets in the br-int flow tables.
> > root@compute0:~# ovs-ofctl dump-flows br-int | grep "in_port=3"
> > cookie=0xb177d87a58ae18fe, duration=1927.889s, table=0, n_packets=0, 
> > n_bytes=0, idle_age=1927, priority=10,icmp6,in_port=3,icmp_type=136 
> > actions=resubmit(,24)
> > cookie=0xb177d87a58ae18fe, duration=1927.886s, table=0, n_packets=0, 
> > n_bytes=0, idle_age=1927, priority=10,arp,in_port=3 actions=resubmit(,24)
> > cookie=0xb177d87a58ae18fe, duration=1927.893s, table=0, n_packets=0, 
> > n_bytes=0, idle_age=1927, priority=9,in_port=3 actions=resubmit(,25)
> > cookie=0xb177d87a58ae18fe, duration=1927.891s, table=24, n_packets=0, 
> > n_bytes=0, idle_age=1927, 
> > priority=2,icmp6,in_port=3,icmp_type=136,nd_target=fe80::f816:3eff:fed3:2116
> >  actions=NORMAL
> > cookie=0xb177d87a58ae18fe, duration=1927.888s, table=24, n_packets=0, 
> > n_bytes=0, idle_age=1927, priority=2,arp,in_port=3,arp_spa=192.168.100.13 
> > actions=resubmit(,25)
> > cookie=0xb177d87a58ae18fe, duration=1927.896s, table=25, n_packets=0, 
> > n_bytes=0, idle_age=1927, priority=2,in_port=3,dl_src=fa:16:3e:d3:21:16 
> > actions=NORMAL
> > 
> > root@compute0:~# ovs-ofctl show br-int
> > OFPT_FEATURES_REPLY (xid=0x2): dpid:caf6709a034a
> > n_tables:254, n_buffers:256
> > capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
> > actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src 
> > mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
> > 1(int-br0): addr:7e:de:e2:7b:2a:83
> >  config: 0
> >  state:  0
> >  speed: 0 Mbps now, 0 Mbps max
> > 2(patch-tun): addr:5a:2a:d3:f0:14:72
> >  config: 0
> >  state:  0
> >  speed: 0 Mbps now, 0 Mbps max
> > 3(vhu2aae3928-3c): addr:00:00:00:00:00:00
> >  config: 0
> >  state:  0
> >  speed: 0 Mbps now, 0 Mbps max
> > 
> > Best Regards
> > Xu Rongjie (Max)
> >   
> 
> 
> 



-- 
Flavio

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovs-dpdk] how to check link between guest vNIC and dpdkvhost socket

2017-10-25 Thread Xu, Rongjie (NSB - CN/Hangzhou)
Hi,

I do have the memAccess property as follows for both VMs:
(libvirt <numa>/<cell ... memAccess='shared'/> snippet; the XML markup was
stripped by the list archive)

Best Regards
Xu Rongjie (Max)

-Original Message-
From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of Flavio Leitner
Sent: Thursday, October 26, 2017 09:50
To: Xu, Rongjie (NSB - CN/Hangzhou) <rongjie...@nokia-sbell.com>
Cc: ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] [ovs-dpdk] how to link between guest vNIC and 
dpdkvhost socket

On Wed, 25 Oct 2017 07:36:13 +
"Xu, Rongjie (NSB - CN/Hangzhou)" <rongjie...@nokia-sbell.com> wrote:

> Hi,
> 
> I have one OpenStack environment with OVS-DPDK (manually installed). I
> succeeded in launching two VMs on two separate compute nodes with the
> vhostuser interface type. But I cannot ping from one to the other. I found
> there are not even any packets in the local 'br-int' bridge (where the
> dpdkvhostuser interface is attached). My guess is that the link between the
> guest vNIC and the dpdkvhost socket on the host side is somehow broken, but
> I do not know how to proceed. Does anyone have any suggestions?


Most probably you forgot to share the memory between host and guest.
in the VM xml:

E.g.: (libvirt XML example; markup stripped by the list archive)

fbl

> Compute0:
> Bridge br-int
> Controller "tcp:127.0.0.1:6633"
> is_connected: true
> fail_mode: secure
> Port "vhu2aae3928-3c"
> tag: 1
> Interface "vhu2aae3928-3c"
> type: dpdkvhostuser
> root@compute0:~# file /var/run/openvswitch/vhu2aae3928-3c
> /var/run/openvswitch/vhu2aae3928-3c: socket
> 
> Compute1:
> Bridge br-int
> Controller "tcp:127.0.0.1:6633"
> is_connected: true
> fail_mode: secure
> Port "vhuca25b420-6d"
> tag: 1
> Interface "vhuca25b420-6d"
>type: dpdkvhostuser
> root@compute1:~# file /var/run/openvswitch/vhuca25b420-6d
> /var/run/openvswitch/vhuca25b420-6d: socket
> 
> 
> When I ping, I do not even find ARP packets in the br-int flow tables.
> root@compute0:~# ovs-ofctl dump-flows br-int | grep "in_port=3"
> cookie=0xb177d87a58ae18fe, duration=1927.889s, table=0, n_packets=0, 
> n_bytes=0, idle_age=1927, priority=10,icmp6,in_port=3,icmp_type=136 
> actions=resubmit(,24)
> cookie=0xb177d87a58ae18fe, duration=1927.886s, table=0, n_packets=0, 
> n_bytes=0, idle_age=1927, priority=10,arp,in_port=3 actions=resubmit(,24)
> cookie=0xb177d87a58ae18fe, duration=1927.893s, table=0, n_packets=0, 
> n_bytes=0, idle_age=1927, priority=9,in_port=3 actions=resubmit(,25)
> cookie=0xb177d87a58ae18fe, duration=1927.891s, table=24, n_packets=0, 
> n_bytes=0, idle_age=1927, 
> priority=2,icmp6,in_port=3,icmp_type=136,nd_target=fe80::f816:3eff:fed3:2116 
> actions=NORMAL
> cookie=0xb177d87a58ae18fe, duration=1927.888s, table=24, n_packets=0, 
> n_bytes=0, idle_age=1927, priority=2,arp,in_port=3,arp_spa=192.168.100.13 
> actions=resubmit(,25)
> cookie=0xb177d87a58ae18fe, duration=1927.896s, table=25, n_packets=0, 
> n_bytes=0, idle_age=1927, priority=2,in_port=3,dl_src=fa:16:3e:d3:21:16 
> actions=NORMAL
> 
> root@compute0:~# ovs-ofctl show br-int
> OFPT_FEATURES_REPLY (xid=0x2): dpid:caf6709a034a
> n_tables:254, n_buffers:256
> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src 
> mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
> 1(int-br0): addr:7e:de:e2:7b:2a:83
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
> 2(patch-tun): addr:5a:2a:d3:f0:14:72
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
> 3(vhu2aae3928-3c): addr:00:00:00:00:00:00
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
> 
> Best Regards
> Xu Rongjie (Max)
> 



-- 
Flavio

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovs-dpdk] how to link between guest vNIC and dpdkvhost socket

2017-10-25 Thread Flavio Leitner
On Wed, 25 Oct 2017 07:36:13 +
"Xu, Rongjie (NSB - CN/Hangzhou)"  wrote:

> Hi,
> 
> I have one OpenStack environment with OVS-DPDK (manually installed). I
> succeeded in launching two VMs on two separate compute nodes with the
> vhostuser interface type. But I cannot ping from one to the other. I found
> there are not even any packets in the local 'br-int' bridge (where the
> dpdkvhostuser interface is attached). My guess is that the link between the
> guest vNIC and the dpdkvhost socket on the host side is somehow broken, but
> I do not know how to proceed. Does anyone have any suggestions?


Most probably you forgot to share the memory between host and guest.
in the VM xml:

E.g.: (libvirt XML example; markup stripped by the list archive — see the
sketch below)

fbl
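
A sketch of the relevant libvirt XML that the archive stripped above (the cell
id, cpus and memory values are illustrative; the key attribute is
memAccess='shared'):

<cpu>
  <numa>
    <cell id='0' cpus='0-1' memory='2097152' unit='KiB' memAccess='shared'/>
  </numa>
</cpu>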

> Compute0:
> Bridge br-int
> Controller "tcp:127.0.0.1:6633"
> is_connected: true
> fail_mode: secure
> Port "vhu2aae3928-3c"
> tag: 1
> Interface "vhu2aae3928-3c"
> type: dpdkvhostuser
> root@compute0:~# file /var/run/openvswitch/vhu2aae3928-3c
> /var/run/openvswitch/vhu2aae3928-3c: socket
> 
> Compute1:
> Bridge br-int
> Controller "tcp:127.0.0.1:6633"
> is_connected: true
> fail_mode: secure
> Port "vhuca25b420-6d"
> tag: 1
> Interface "vhuca25b420-6d"
>type: dpdkvhostuser
> root@compute1:~# file /var/run/openvswitch/vhuca25b420-6d
> /var/run/openvswitch/vhuca25b420-6d: socket
> 
> 
> When I ping, I do not even find ARP packets in the br-int flow tables.
> root@compute0:~# ovs-ofctl dump-flows br-int | grep "in_port=3"
> cookie=0xb177d87a58ae18fe, duration=1927.889s, table=0, n_packets=0, 
> n_bytes=0, idle_age=1927, priority=10,icmp6,in_port=3,icmp_type=136 
> actions=resubmit(,24)
> cookie=0xb177d87a58ae18fe, duration=1927.886s, table=0, n_packets=0, 
> n_bytes=0, idle_age=1927, priority=10,arp,in_port=3 actions=resubmit(,24)
> cookie=0xb177d87a58ae18fe, duration=1927.893s, table=0, n_packets=0, 
> n_bytes=0, idle_age=1927, priority=9,in_port=3 actions=resubmit(,25)
> cookie=0xb177d87a58ae18fe, duration=1927.891s, table=24, n_packets=0, 
> n_bytes=0, idle_age=1927, 
> priority=2,icmp6,in_port=3,icmp_type=136,nd_target=fe80::f816:3eff:fed3:2116 
> actions=NORMAL
> cookie=0xb177d87a58ae18fe, duration=1927.888s, table=24, n_packets=0, 
> n_bytes=0, idle_age=1927, priority=2,arp,in_port=3,arp_spa=192.168.100.13 
> actions=resubmit(,25)
> cookie=0xb177d87a58ae18fe, duration=1927.896s, table=25, n_packets=0, 
> n_bytes=0, idle_age=1927, priority=2,in_port=3,dl_src=fa:16:3e:d3:21:16 
> actions=NORMAL
> 
> root@compute0:~# ovs-ofctl show br-int
> OFPT_FEATURES_REPLY (xid=0x2): dpid:caf6709a034a
> n_tables:254, n_buffers:256
> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src 
> mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
> 1(int-br0): addr:7e:de:e2:7b:2a:83
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
> 2(patch-tun): addr:5a:2a:d3:f0:14:72
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
> 3(vhu2aae3928-3c): addr:00:00:00:00:00:00
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
> 
> Best Regards
> Xu Rongjie (Max)
> 



-- 
Flavio

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK QoS rate limit issue

2017-08-24 Thread 王志克
Hi Lance,

Your patch works. Thanks.

BR,
Wang Zhike

-Original Message-
From: Lance Richardson [mailto:lrich...@redhat.com] 
Sent: Thursday, August 24, 2017 8:10 PM
To: 王志克
Cc: ovs-...@openvswitch.org; ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] OVS+DPDK QoS rate limit issue


> From: "王志克" <wangzh...@jd.com>
> To: ovs-...@openvswitch.org, ovs-discuss@openvswitch.org
> Sent: Wednesday, August 23, 2017 11:41:05 PM
> Subject: [ovs-discuss] OVS+DPDK QoS rate limit issue
> 
> 
> 
> Hi All,
> 
> 
> 
> I am using OVS2.7.0 and DPDK 16.11, and testing rate limit function.
> 
> 
> 
> I found that if the policing_rate is set very large, say 5 Gbps, the rate is
> limited dramatically to a very low value, like 800 Mbps.
> 
> The command is as below:
> 
> ovs-vsctl set interface port-7zel2so9sg ingress_policing_rate=500
> ingress_policing_burst=50
> 
> 
> 
> If we set the rate lower than 4Gbps, the rate is limited correctly.
> 
> 
> 
> Test setup:
> 
> Sender (DPDK Pktgen) sends out about 10 Gbps of UDP packets, with an IP size
> of about 1420 bytes.
> 
> The rate limit is set on VM vhost-user-client port.
> 
> 
> 
> Any idea about this issue? Is that known issue?
> 
> 

It seems 32-bit arithmetic is being used when converting the rate from
kilobits per second to bytes per second. Could you give this patch a try?

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 1aaf6f7e2..d6ed2c7b0 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -2229,8 +2229,8 @@ netdev_dpdk_policer_construct(uint32_t rate, uint32_t 
burst)
     rte_spinlock_init(>policer_lock);
 
     /* rte_meter requires bytes so convert kbits rate and burst to bytes. */
-    rate_bytes = rate * 1000/8;
-    burst_bytes = burst * 1000/8;
+    rate_bytes = rate * 1000ULL/8;
+    burst_bytes = burst * 1000ULL/8;
 
     policer->app_srtcm_params.cir = rate_bytes;
     policer->app_srtcm_params.cbs = burst_bytes;

Regards,

   Lance Richardson

> 
> Br,
> 
> Wang Zhike
> 
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> 
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS+DPDK QoS rate limit issue

2017-08-24 Thread Lance Richardson
> From: "王志克" 
> To: ovs-...@openvswitch.org, ovs-discuss@openvswitch.org
> Sent: Wednesday, August 23, 2017 11:41:05 PM
> Subject: [ovs-discuss] OVS+DPDK QoS rate limit issue
> 
> 
> 
> Hi All,
> 
> 
> 
> I am using OVS2.7.0 and DPDK 16.11, and testing rate limit function.
> 
> 
> 
> I found that if the policing_rate is set very large, say 5 Gbps, the rate is
> limited dramatically to a very low value, like 800 Mbps.
> 
> The command is as below:
> 
> ovs-vsctl set interface port-7zel2so9sg ingress_policing_rate=500
> ingress_policing_burst=50
> 
> 
> 
> If we set the rate lower than 4Gbps, the rate is limited correctly.
> 
> 
> 
> Test setup:
> 
> Sender (DPDK Pktgen) sends out about 10 Gbps of UDP packets, with an IP size
> of about 1420 bytes.
> 
> The rate limit is set on VM vhost-user-client port.
> 
> 
> 
> Any idea about this issue? Is that known issue?
> 
> 

It seems 32-bit arithmetic is being used when converting the rate from
kilobits per second to bytes per second. Could you give this patch a try?

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 1aaf6f7e2..d6ed2c7b0 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -2229,8 +2229,8 @@ netdev_dpdk_policer_construct(uint32_t rate, uint32_t 
burst)
     rte_spinlock_init(>policer_lock);
 
     /* rte_meter requires bytes so convert kbits rate and burst to bytes. */
-    rate_bytes = rate * 1000/8;
-    burst_bytes = burst * 1000/8;
+    rate_bytes = rate * 1000ULL/8;
+    burst_bytes = burst * 1000ULL/8;
 
     policer->app_srtcm_params.cir = rate_bytes;
     policer->app_srtcm_params.cbs = burst_bytes;
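
For context, a minimal standalone C sketch (separate from the patch above)
showing the 32-bit wraparound with an assumed rate of 5,000,000 kbit/s (5 Gbps):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t rate = 5000000;               /* 5 Gbps expressed in kbit/s */
    uint32_t wrapped = rate * 1000 / 8;    /* 32-bit math: 5e9 wraps past UINT32_MAX */
    uint64_t fixed = rate * 1000ULL / 8;   /* 64-bit math, as in the patch */

    printf("32-bit: %u bytes/s (~%u Mbit/s)\n", wrapped, wrapped / 125000);
    printf("64-bit: %llu bytes/s (~%llu Mbit/s)\n",
           (unsigned long long) fixed, (unsigned long long) fixed / 125000);
    return 0;
}

With those numbers the 32-bit result comes out to roughly 88 MB/s (about 705
Mbit/s), which is in the same ballpark as the ~800 Mbps ceiling reported.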

Regards,

   Lance Richardson

> 
> Br,
> 
> Wang Zhike
> 
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> 
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK IP fragmentation require

2017-07-28 Thread Hui Xiang
On Fri, Jul 28, 2017 at 12:52 PM, Darrell Ball <db...@vmware.com> wrote:

>
>
>
>
> *From: *Hui Xiang <xiangh...@gmail.com>
> *Date: *Thursday, July 27, 2017 at 8:10 PM
>
> *To: *Darrell Ball <db...@vmware.com>
> *Cc: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *Re: [ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
>
>
>
>
> On Fri, Jul 28, 2017 at 10:54 AM, Darrell Ball <db...@vmware.com> wrote:
>
>
>
>
>
> *From: *Hui Xiang <xiangh...@gmail.com>
> *Date: *Thursday, July 27, 2017 at 6:59 PM
> *To: *Darrell Ball <db...@vmware.com>
> *Cc: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *Re: [ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
>
>
>
>
> On Fri, Jul 28, 2017 at 1:12 AM, Darrell Ball <db...@vmware.com> wrote:
>
>
>
>
>
> *From: *Hui Xiang <xiangh...@gmail.com>
> *Date: *Thursday, July 27, 2017 at 3:18 AM
> *To: *Darrell Ball <db...@vmware.com>
> *Cc: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *Re: [ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
>
>
> Below is the diagram (using OVS-DPDK):
>
>
>
> 1. For packets coming to vm1 from the internet, where the MTU could be 1500,
> some fragmented packets could be included.
>
> How do the ACL/Security groups handle these fragmented packets? Do they do
> nothing and pass them on, which may pass packets that
> should be dropped, or is there any special handling?
>
>
>
> Lets assume the fragments get thru. the physical switch and/or firewall.
>
>
>
> Are you using DPDK in GW and using OVS kernel datapath in br-int where you
> apply ACL/Security groups policy ?
>
> All are using DPDK, the ACL/Security groups policy said is OVS-DPDK
> conntrack implementation.
>
> With the case that we should have dropped some packets by creating special
> security group rules, but now due to they are fragmented and get thru by
> default, this is not what is expected.
>
>
>
> I would check your configuration.
>
> The dpdk connection tracker marks fragments as ‘invalid’ today and your
> rules should drop ‘invalid’.
>
> OK, thanks. here are the two scenarios we are discussing:
>
> 1.  For packets out from vms, use Jumbo Frame supported physical
> switches/routers within OpenStack cloud and enable it in all OVS-DPDK or do
> not allow application to send large frames.
>
>
>
> Try to use jumbo frames for performance reasons.
>
>
>
> On going out, if you get fragmentation done in HW at the physical
> switches, then
>
> 1)  If it could go back into one of your dpdk networks, then
> encourage using smaller packets
>
> 2)  If it goes somewhere else, then it does not matter, keep bigger
> packets
>
> Are you sure the physical switches do not support jumbo frames?
>
> Maybe it is just a config. change fix there.
>
>
>
A few physical switches in my lab probably just support a max MTU of 2000.

>
>
> 2. For packets coming from internet to OVS-DPDK, fragmented packets could
> be arrived, they are all dropped due to marks as 'invalid'.
>
>  With above analysis,  if these fragments are marked as 'invalid' and
> being dropped, the best way I can think about is to not use security group
> in OVS-DPDK if there could be fragments generated.
>
>
>
> If you already trust what gets to GW because you have a HW firewall, yes
>
> This assumes internally generated is always safe.
>
>
>
> Otherwise, you want to keep security groups and ‘encourage’ no fragments
> coming in, if you can
>
> ‘Encourage’ can be done by dropping and triggering checking why the
> fragments got generated in the first place
>
> Fragments may also indicate an exploit attempt, in which case, dropping is
> good.
>
Thanks Darrell, yep these are the solutions so far.

>
>
>
>
> Please correct me if I misunderstand anything.
>
>
>
> 2. For packets egress from vm1, if all internal physical switch support
> Jumbo Frame, that's fine, but if there are some physical switches
>
> just support 1500/2000 MTU, then fragmented packets generated again.
> The ACL/Security groups face problem as item 1 as well.
>
>
>
>
>
> For packets that reach the physical switches on the way out, then the
> decision how to handle them is at the physical switch/router
>
> The packets may be fragmented at this point; depending on the switch;
> there will be HW firewall policies to contend with, so depends.
>
>
>
> Here, again what I mean is the packets are fragmen

Re: [ovs-discuss] OVS-DPDK IP fragmentation require

2017-07-27 Thread Darrell Ball


From: Hui Xiang <xiangh...@gmail.com>
Date: Thursday, July 27, 2017 at 8:10 PM
To: Darrell Ball <db...@vmware.com>
Cc: "ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Subject: Re: [ovs-discuss] OVS-DPDK IP fragmentation require



On Fri, Jul 28, 2017 at 10:54 AM, Darrell Ball 
<db...@vmware.com<mailto:db...@vmware.com>> wrote:


From: Hui Xiang <xiangh...@gmail.com<mailto:xiangh...@gmail.com>>
Date: Thursday, July 27, 2017 at 6:59 PM
To: Darrell Ball <db...@vmware.com<mailto:db...@vmware.com>>
Cc: "ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>" 
<ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>>
Subject: Re: [ovs-discuss] OVS-DPDK IP fragmentation require



On Fri, Jul 28, 2017 at 1:12 AM, Darrell Ball 
<db...@vmware.com<mailto:db...@vmware.com>> wrote:


From: Hui Xiang <xiangh...@gmail.com<mailto:xiangh...@gmail.com>>
Date: Thursday, July 27, 2017 at 3:18 AM
To: Darrell Ball <db...@vmware.com<mailto:db...@vmware.com>>
Cc: "ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>" 
<ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>>
Subject: Re: [ovs-discuss] OVS-DPDK IP fragmentation require


Below is the diagram (using OVS-DPDK):

1. For packets coming to vm1 from the internet, where the MTU could be 1500,
some fragmented packets could be included.
How do the ACL/Security groups handle these fragmented packets? Do they do
nothing and pass them on, which may pass packets that
should be dropped, or is there any special handling?

Lets assume the fragments get thru. the physical switch and/or firewall.

Are you using DPDK in GW and using OVS kernel datapath in br-int where you 
apply ACL/Security groups policy ?
All are using DPDK; the ACL/Security groups policy mentioned is the OVS-DPDK conntrack
implementation.
In a case where we should have dropped some packets via special security group
rules, they now get through by default because they are fragmented, which is
not what is expected.

I would check your configuration.
The dpdk connection tracker marks fragments as ‘invalid’ today and your rules 
should drop ‘invalid’.
OK, thanks. here are the two scenarios we are discussing:

1.  For packets out from vms, use Jumbo Frame supported physical 
switches/routers within OpenStack cloud and enable it in all OVS-DPDK or do not 
allow application to send large frames.

Try to use jumbo frames for performance reasons.

On going out, if you get fragmentation done in HW at the physical switches, then

1)  If it could go back into one of your dpdk networks, then encourage 
using smaller packets

2)  If it goes somewhere else, then it does not matter, keep bigger packets
Are you sure the physical switches do not support jumbo frames?

Maybe it is just a config. change fix there.
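
For reference, on the OVS-DPDK side jumbo frames are requested per interface
(the port name and MTU below are illustrative):

ovs-vsctl set Interface dpdk0 mtu_request=9000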


2. For packets coming from internet to OVS-DPDK, fragmented packets could be 
arrived, they are all dropped due to marks as 'invalid'.
 With above analysis,  if these fragments are marked as 'invalid' and being 
dropped, the best way I can think about is to not use security group in 
OVS-DPDK if there could be fragments generated.

If you already trust what gets to GW because you have a HW firewall, yes
This assumes internally generated is always safe.

Otherwise, you want to keep security groups and ‘encourage’ no fragments coming 
in, if you can
‘Encourage’ can be done by dropping and triggering checking why the fragments 
got generated in the first place
Fragments may also indicate an exploit attempt, in which case, dropping is good.


Please correct me if I misunderstand anything.

2. For packets egressing from vm1: if all internal physical switches support Jumbo
Frames, that's fine, but if there are some physical switches that
only support 1500/2000 MTU, then fragmented packets are generated again, and the
ACL/Security groups face the same problem as in item 1.


For packets that reach the physical switches on the way out, then the decision 
how to handle them is at the physical switch/router
The packets may be fragmented at this point; depending on the switch; there 
will be HW firewall policies to contend with, so depends.

Here, again what I mean is the packets are fragmented by the physical 
switch/router, and they are switching/routing to a next node where has the 
OVS-DPDK set with security group, and OVS-DPDK may let them thru with ignoring 
the security group rules.

Sorry, you lost me a bit here; in point ‘2’ above you said packets are going 
from vm1 to internet and are fine until they hit the physical switch
Where you are assuming they are fragmented because the mtu is lower.
If this is not going to the internet but rather another set of nodes running 
dpdk, then this is another variation of ‘1’ and hence we don’t
need to discuss it.


[inline image 1]

On Thu, Jul 27, 2017 at 2:49 PM, Darrell Ball 
<

Re: [ovs-discuss] OVS-DPDK IP fragmentation require

2017-07-27 Thread Hui Xiang
On Fri, Jul 28, 2017 at 10:54 AM, Darrell Ball <db...@vmware.com> wrote:

>
>
>
>
> *From: *Hui Xiang <xiangh...@gmail.com>
> *Date: *Thursday, July 27, 2017 at 6:59 PM
> *To: *Darrell Ball <db...@vmware.com>
> *Cc: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *Re: [ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
>
>
>
>
> On Fri, Jul 28, 2017 at 1:12 AM, Darrell Ball <db...@vmware.com> wrote:
>
>
>
>
>
> *From: *Hui Xiang <xiangh...@gmail.com>
> *Date: *Thursday, July 27, 2017 at 3:18 AM
> *To: *Darrell Ball <db...@vmware.com>
> *Cc: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *Re: [ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
>
>
> Below is the diagram (using OVS-DPDK):
>
>
>
> 1. For packets coming to vm1 from the internet, where the MTU could be 1500,
> some fragmented packets could be included.
>
> How do the ACL/Security groups handle these fragmented packets? Do they do
> nothing and pass them on, which may pass packets that
> should be dropped, or is there any special handling?
>
>
>
> Lets assume the fragments get thru. the physical switch and/or firewall.
>
>
>
> Are you using DPDK in GW and using OVS kernel datapath in br-int where you
> apply ACL/Security groups policy ?
>
> All are using DPDK, the ACL/Security groups policy said is OVS-DPDK
> conntrack implementation.
>
> With the case that we should have dropped some packets by creating special
> security group rules, but now due to they are fragmented and get thru by
> default, this is not what is expected.
>
>
>
> I would check your configuration.
>
> The dpdk connection tracker marks fragments as ‘invalid’ today and your
> rules should drop ‘invalid’.
>
OK, thanks. here are the two scenarios we are discussing:
1. For packets out from vms, use Jumbo Frame supported physical
switches/routers within OpenStack cloud and enable it in all OVS-DPDK or do
not allow application to send large frames.
2. For packets coming from internet to OVS-DPDK, fragmented packets could
be arrived, they are all dropped due to marks as 'invalid'.
 With above analysis,  if these fragments are marked as 'invalid' and being
dropped, the best way I can think about is to not use security group in
OVS-DPDK if there could be fragments generated.

Please correct me if I misunderstand anything.

>
>
> 2. For packets egress from vm1, if all internal physical switch support
> Jumbo Frame, that's fine, but if there are some physical switches
>
> just support 1500/2000 MTU, then fragmented packets generated again.
> The ACL/Security groups face problem as item 1 as well.
>
>
>
>
>
> For packets that reach the physical switches on the way out, then the
> decision how to handle them is at the physical switch/router
>
> The packets may be fragmented at this point; depending on the switch;
> there will be HW firewall policies to contend with, so depends.
>
>
>
> Here, again what I mean is the packets are fragmented by the physical
> switch/router, and they are switching/routing to a next node where has the
> OVS-DPDK set with security group, and OVS-DPDK may let them thru with
> ignoring the security group rules.
>
>
>
> Sorry, you lost me a bit here; in point ‘2’ above you said packets are
> going from vm1 to internet and are fine until they hit the physical switch
>
> Where you are assuming they are fragmented because the mtu is lower.
>
> If this is not going to the internet but rather another set of nodes
> running dpdk, then this is another variation of ‘1’ and hence we don’t
>
> need to discuss it.
>
>
>
>
>
> [inline image 1]
>
>
>
> On Thu, Jul 27, 2017 at 2:49 PM, Darrell Ball <db...@vmware.com> wrote:
>
>
>
>
>
> *From: *Hui Xiang <xiangh...@gmail.com>
> *Date: *Wednesday, July 26, 2017 at 9:43 PM
> *To: *Darrell Ball <db...@vmware.com>
> *Cc: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *Re: [ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
> Thanks Darrell, comment inline.
>
>
>
> On Thu, Jul 27, 2017 at 12:08 PM, Darrell Ball <db...@vmware.com> wrote:
>
>
>
>
>
> *From: *<ovs-discuss-boun...@openvswitch.org> on behalf of Hui Xiang <
> xiangh...@gmail.com>
> *Date: *Wednesday, July 26, 2017 at 7:47 PM
> *To: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *[ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
> Hi guys,
>
>
>
>   Seems OVS-DPDK still missing IP fragmenta

Re: [ovs-discuss] OVS-DPDK IP fragmentation require

2017-07-27 Thread Darrell Ball


From: Hui Xiang <xiangh...@gmail.com>
Date: Thursday, July 27, 2017 at 6:59 PM
To: Darrell Ball <db...@vmware.com>
Cc: "ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Subject: Re: [ovs-discuss] OVS-DPDK IP fragmentation require



On Fri, Jul 28, 2017 at 1:12 AM, Darrell Ball 
<db...@vmware.com<mailto:db...@vmware.com>> wrote:


From: Hui Xiang <xiangh...@gmail.com<mailto:xiangh...@gmail.com>>
Date: Thursday, July 27, 2017 at 3:18 AM
To: Darrell Ball <db...@vmware.com<mailto:db...@vmware.com>>
Cc: "ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>" 
<ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>>
Subject: Re: [ovs-discuss] OVS-DPDK IP fragmentation require


Below is the diagram (using OVS-DPDK):

1. For packets coming to vm1 from the internet, where the MTU could be 1500,
some fragmented packets could be included.
How do the ACL/Security groups handle these fragmented packets? Do they do
nothing and pass them on, which may pass packets that
should be dropped, or is there any special handling?

Lets assume the fragments get thru. the physical switch and/or firewall.

Are you using DPDK in GW and using OVS kernel datapath in br-int where you 
apply ACL/Security groups policy ?
All are using DPDK, the ACL/Security groups policy said is OVS-DPDK conntrack 
implementation.
With the case that we should have dropped some packets by creating special 
security group rules, but now due to they are fragmented and get thru by 
default, this is not what is expected.

I would check your configuration.
The dpdk connection tracker marks fragments as ‘invalid’ today and your rules 
should drop ‘invalid’.

2. For packets egressing from vm1: if all internal physical switches support Jumbo
Frames, that's fine, but if there are some physical switches that
only support 1500/2000 MTU, then fragmented packets are generated again, and the
ACL/Security groups face the same problem as in item 1.


For packets that reach the physical switches on the way out, then the decision 
how to handle them is at the physical switch/router
The packets may be fragmented at this point; depending on the switch; there 
will be HW firewall policies to contend with, so depends.

Here, again what I mean is the packets are fragmented by the physical 
switch/router, and they are switching/routing to a next node where has the 
OVS-DPDK set with security group, and OVS-DPDK may let them thru with ignoring 
the security group rules.

Sorry, you lost me a bit here; in point ‘2’ above you said packets are going 
from vm1 to internet and are fine until they hit the physical switch
Where you are assuming they are fragmented because the mtu is lower.
If this is not going to the internet but rather another set of nodes running 
dpdk, then this is another variation of ‘1’ and hence we don’t
need to discuss it.


[inline image 1]

On Thu, Jul 27, 2017 at 2:49 PM, Darrell Ball 
<db...@vmware.com<mailto:db...@vmware.com>> wrote:


From: Hui Xiang <xiangh...@gmail.com<mailto:xiangh...@gmail.com>>
Date: Wednesday, July 26, 2017 at 9:43 PM
To: Darrell Ball <db...@vmware.com<mailto:db...@vmware.com>>
Cc: "ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>" 
<ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>>
Subject: Re: [ovs-discuss] OVS-DPDK IP fragmentation require

Thanks Darrell, comment inline.

On Thu, Jul 27, 2017 at 12:08 PM, Darrell Ball 
<db...@vmware.com<mailto:db...@vmware.com>> wrote:


From: 
<ovs-discuss-boun...@openvswitch.org<mailto:ovs-discuss-boun...@openvswitch.org>>
 on behalf of Hui Xiang <xiangh...@gmail.com<mailto:xiangh...@gmail.com>>
Date: Wednesday, July 26, 2017 at 7:47 PM
To: "ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>" 
<ovs-discuss@openvswitch.org<mailto:ovs-discuss@openvswitch.org>>
Subject: [ovs-discuss] OVS-DPDK IP fragmentation require

Hi guys,

  It seems OVS-DPDK is still missing IP fragmentation support; is there any schedule
to add it?
OVS 2.9
I'm transitioning to use OVN, but those nodes which have an external network
connection may face this problem;
apart from configuring Jumbo frames, is there any other workaround?

I am not clear on the situation however.
You mention about configuring jumbo frames which means you can avoid the 
fragments by doing this ?
No, I can't guarantee that, only can do it inside OpenStack, it is limited.
If this is true, then this is the best way to proceed since performance will be 
better.
What is wrong with jumbo frames ?
It's good, but it's limited and can't be guaranteed, so I am asking whether there is any
other way without IP fragmentation so far.

It sounds like you want to avoid IP fragmentation; so far so good.
I am not sure I understand the whole picture though.
Maybe you can describe what you see ?; maybe a simple diagram would help ?


BR.
Hui.



___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK IP fragmentation require

2017-07-27 Thread Hui Xiang
On Fri, Jul 28, 2017 at 1:12 AM, Darrell Ball <db...@vmware.com> wrote:

>
>
>
>
> *From: *Hui Xiang <xiangh...@gmail.com>
> *Date: *Thursday, July 27, 2017 at 3:18 AM
> *To: *Darrell Ball <db...@vmware.com>
> *Cc: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *Re: [ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
>
>
> Blow is the diagram (using OVS-DPDK):
>
>
>
> 1. For packets coming to vm1 from the internet, where the MTU could be 1500,
> some fragmented packets could be included.
>
> How do the ACL/Security groups handle these fragmented packets? Do they do
> nothing and pass them on, which may pass packets that
> should be dropped, or is there any special handling?
>
>
>
> Lets assume the fragments get thru. the physical switch and/or firewall.
>
>
>
> Are you using DPDK in GW and using OVS kernel datapath in br-int where you
> apply ACL/Security groups policy ?
>
All are using DPDK, the ACL/Security groups policy said is OVS-DPDK
conntrack implementation.
With the case that we should have dropped some packets by creating special
security group rules, but now due to they are fragmented and get thru by
default, this is not what is expected.

>
>
> 2. For packets egress from vm1, if all internal physical switch support
> Jumbo Frame, that's fine, but if there are some physical switches
>
> just support 1500/2000 MTU, then fragmented packets generated again.
> The ACL/Security groups face problem as item 1 as well.
>
>
>
>
>
> For packets that reach the physical switches on the way out, then the
> decision how to handle them is at the physical switch/router
>
> The packets may be fragmented at this point; depending on the switch;
> there will be HW firewall policies to contend with, so depends.
>
>
>
Here, again what I mean is the packets are fragmented by the physical
switch/router, and they are switching/routing to a next node where has the
OVS-DPDK set with security group, and OVS-DPDK may let them thru with
ignoring the security group rules.

>
>
>
>
> [inline image 1]
>
>
>
> On Thu, Jul 27, 2017 at 2:49 PM, Darrell Ball <db...@vmware.com> wrote:
>
>
>
>
>
> *From: *Hui Xiang <xiangh...@gmail.com>
> *Date: *Wednesday, July 26, 2017 at 9:43 PM
> *To: *Darrell Ball <db...@vmware.com>
> *Cc: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *Re: [ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
> Thanks Darrell, comment inline.
>
>
>
> On Thu, Jul 27, 2017 at 12:08 PM, Darrell Ball <db...@vmware.com> wrote:
>
>
>
>
>
> *From: *<ovs-discuss-boun...@openvswitch.org> on behalf of Hui Xiang <
> xiangh...@gmail.com>
> *Date: *Wednesday, July 26, 2017 at 7:47 PM
> *To: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *[ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
> Hi guys,
>
>
>
>   Seems OVS-DPDK still missing IP fragmentation support, is there any
> schedule to have it?
>
> OVS 2.9
>
> I'm  transferring to use OVN, but for those nodes which have external
> network connection, they may face this problem,
>
> except to configure Jumbo frames, is there any other workaround?
>
>
>
> I am not clear on the situation however.
>
> You mention about configuring jumbo frames which means you can avoid the
> fragments by doing this ?
>
> No, I can't guarantee that, only can do it inside OpenStack, it is
> limited.
>
> If this is true, then this is the best way to proceed since performance
> will be better.
>
> What is wrong with jumbo frames ?
>
> It's good but it's limited can't be guaranteed, so I am asking is there
> any other way without IP fragmentation so far.
>
>
>
> It sounds like you want to avoid IP fragmentation; so far so good.
>
> I am not sure I understand the whole picture though.
>
> Maybe you can describe what you see ?; maybe a simple diagram would help ?
>
>
>
>
>
> BR.
>
> Hui.
>
>
>
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK IP fragmentation require

2017-07-27 Thread Darrell Ball


From: Hui Xiang <xiangh...@gmail.com>
Date: Thursday, July 27, 2017 at 3:18 AM
To: Darrell Ball <db...@vmware.com>
Cc: "ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Subject: Re: [ovs-discuss] OVS-DPDK IP fragmentation require


Below is the diagram (using OVS-DPDK):

1. For packets coming to vm1 from the internet, where the MTU could be 1500, 
some of them could be fragmented.
How do the ACL/Security groups handle these fragmented packets? Do they do 
nothing and pass them on, which may let through packets that 
should be dropped, or is there any special handling?

Let's assume the fragments get through the physical switch and/or firewall.

Are you using DPDK in the GW and the OVS kernel datapath in br-int, where you 
apply the ACL/Security groups policy?

2. For packets egressing from vm1: if all internal physical switches support jumbo 
frames, that's fine, but if some physical switches 
only support a 1500/2000 MTU, then fragmented packets are generated again. The 
ACL/Security groups face the same problem as in item 1.


For packets that reach the physical switches on the way out, the decision on 
how to handle them is up to the physical switch/router.
The packets may be fragmented at this point, depending on the switch; there 
will be HW firewall policies to contend with, so it depends.



[Inline image 1]

On Thu, Jul 27, 2017 at 2:49 PM, Darrell Ball <db...@vmware.com> wrote:


From: Hui Xiang <xiangh...@gmail.com>
Date: Wednesday, July 26, 2017 at 9:43 PM
To: Darrell Ball <db...@vmware.com>
Cc: "ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Subject: Re: [ovs-discuss] OVS-DPDK IP fragmentation require

Thanks Darrell, comment inline.

On Thu, Jul 27, 2017 at 12:08 PM, Darrell Ball <db...@vmware.com> wrote:


From: <ovs-discuss-boun...@openvswitch.org> on behalf of Hui Xiang <xiangh...@gmail.com>
Date: Wednesday, July 26, 2017 at 7:47 PM
To: "ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Subject: [ovs-discuss] OVS-DPDK IP fragmentation require

Hi guys,

  It seems OVS-DPDK is still missing IP fragmentation support; is there any schedule 
to add it?
OVS 2.9
I'm moving to OVN, but the nodes which have an external network 
connection may still face this problem;
apart from configuring jumbo frames, is there any other workaround?

I am not clear on the situation, however.
You mention configuring jumbo frames; does that mean you can avoid the 
fragments by doing this?
No, I can't guarantee that; I can only do it inside OpenStack, so it is limited.
If this is true, then this is the best way to proceed, since performance will be 
better.
What is wrong with jumbo frames?
It's good, but it's limited and can't be guaranteed, so I am asking whether there is 
any other way to avoid IP fragmentation so far.

It sounds like you want to avoid IP fragmentation; so far so good.
I am not sure I understand the whole picture though.
Maybe you can describe what you see? Maybe a simple diagram would help?


BR.
Hui.


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK IP fragmentation require

2017-07-27 Thread Hui Xiang
Below is the diagram (using OVS-DPDK):

1. For packets coming to vm1 from the internet, where the MTU could be 1500,
some of them could be fragmented.
How do the ACL/Security groups handle these fragmented packets? Do they do
nothing and pass them on, which may let through packets that
should be dropped, or is there any special handling?
2. For packets egressing from vm1: if all internal physical switches support
jumbo frames, that's fine, but if some physical switches
only support a 1500/2000 MTU, then fragmented packets are generated again.
The ACL/Security groups face the same problem as in item 1.

[image: Inline image 1]

On Thu, Jul 27, 2017 at 2:49 PM, Darrell Ball <db...@vmware.com> wrote:

>
>
>
>
> *From: *Hui Xiang <xiangh...@gmail.com>
> *Date: *Wednesday, July 26, 2017 at 9:43 PM
> *To: *Darrell Ball <db...@vmware.com>
> *Cc: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *Re: [ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
> Thanks Darrell, comment inline.
>
>
>
> On Thu, Jul 27, 2017 at 12:08 PM, Darrell Ball <db...@vmware.com> wrote:
>
>
>
>
>
> *From: *<ovs-discuss-boun...@openvswitch.org> on behalf of Hui Xiang <
> xiangh...@gmail.com>
> *Date: *Wednesday, July 26, 2017 at 7:47 PM
> *To: *"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
> *Subject: *[ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
> Hi guys,
>
>
>
>   Seems OVS-DPDK still missing IP fragmentation support, is there any
> schedule to have it?
>
> OVS 2.9
>
> I'm  transferring to use OVN, but for those nodes which have external
> network connection, they may face this problem,
>
> except to configure Jumbo frames, is there any other workaround?
>
>
>
> I am not clear on the situation however.
>
> You mention about configuring jumbo frames which means you can avoid the
> fragments by doing this ?
>
> No, I can't guarantee that, only can do it inside OpenStack, it is
> limited.
>
> If this is true, then this is the best way to proceed since performance
> will be better.
>
> What is wrong with jumbo frames ?
>
> It's good but it's limited can't be guaranteed, so I am asking is there
> any other way without IP fragmentation so far.
>
>
>
> It sounds like you want to avoid IP fragmentation; so far so good.
>
> I am not sure I understand the whole picture though.
>
> Maybe you can describe what you see ?; maybe a simple diagram would help ?
>
>
>
>
>
> BR.
>
> Hui.
>
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK IP fragmentation require

2017-07-27 Thread Darrell Ball


From: Hui Xiang <xiangh...@gmail.com>
Date: Wednesday, July 26, 2017 at 9:43 PM
To: Darrell Ball <db...@vmware.com>
Cc: "ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Subject: Re: [ovs-discuss] OVS-DPDK IP fragmentation require

Thanks Darrell, comment inline.

On Thu, Jul 27, 2017 at 12:08 PM, Darrell Ball <db...@vmware.com> wrote:


From: <ovs-discuss-boun...@openvswitch.org> on behalf of Hui Xiang <xiangh...@gmail.com>
Date: Wednesday, July 26, 2017 at 7:47 PM
To: "ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Subject: [ovs-discuss] OVS-DPDK IP fragmentation require

Hi guys,

  It seems OVS-DPDK is still missing IP fragmentation support; is there any schedule 
to add it?
OVS 2.9
I'm moving to OVN, but the nodes which have an external network 
connection may still face this problem;
apart from configuring jumbo frames, is there any other workaround?

I am not clear on the situation, however.
You mention configuring jumbo frames; does that mean you can avoid the 
fragments by doing this?
No, I can't guarantee that; I can only do it inside OpenStack, so it is limited.
If this is true, then this is the best way to proceed, since performance will be 
better.
What is wrong with jumbo frames?
It's good, but it's limited and can't be guaranteed, so I am asking whether there is 
any other way to avoid IP fragmentation so far.

It sounds like you want to avoid IP fragmentation; so far so good.
I am not sure I understand the whole picture though.
Maybe you can describe what you see? Maybe a simple diagram would help?


BR.
Hui.

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK IP fragmentation require

2017-07-26 Thread Hui Xiang
Thanks Darrell, comment inline.

On Thu, Jul 27, 2017 at 12:08 PM, Darrell Ball  wrote:

>
>
>
>
> *From: * on behalf of Hui Xiang <
> xiangh...@gmail.com>
> *Date: *Wednesday, July 26, 2017 at 7:47 PM
> *To: *"ovs-discuss@openvswitch.org" 
> *Subject: *[ovs-discuss] OVS-DPDK IP fragmentation require
>
>
>
> Hi guys,
>
>
>
>   Seems OVS-DPDK still missing IP fragmentation support, is there any
> schedule to have it?
>
> OVS 2.9
>
> I'm  transferring to use OVN, but for those nodes which have external
> network connection, they may face this problem,
>
> except to configure Jumbo frames, is there any other workaround?
>
>
>
> I am not clear on the situation however.
>
> You mention about configuring jumbo frames which means you can avoid the
> fragments by doing this ?
>
No, I can't guarantee that; I can only do it inside OpenStack, so it is limited.

> If this is true, then this is the best way to proceed since performance
> will be better.
>
> What is wrong with jumbo frames ?
>
It's good, but it's limited and can't be guaranteed, so I am asking whether there is
any other way to avoid IP fragmentation so far.

>
>
>
>
> BR.
>
> Hui.
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK IP fragmentation require

2017-07-26 Thread Darrell Ball


From:  on behalf of Hui Xiang 

Date: Wednesday, July 26, 2017 at 7:47 PM
To: "ovs-discuss@openvswitch.org" 
Subject: [ovs-discuss] OVS-DPDK IP fragmentation require

Hi guys,

  It seems OVS-DPDK is still missing IP fragmentation support; is there any schedule 
to add it?
OVS 2.9
I'm moving to OVN, but the nodes which have an external network 
connection may still face this problem;
apart from configuring jumbo frames, is there any other workaround?

I am not clear on the situation, however.
You mention configuring jumbo frames; does that mean you can avoid the 
fragments by doing this?
If this is true, then this is the best way to proceed, since performance will be 
better.
What is wrong with jumbo frames?


BR.
Hui.
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK - TAP0 - vdev (af_packet) device is linkdown

2017-06-20 Thread Avi Cohen (A)
Update:
I see the following error messages in the log file:

libvirtd[1734]: Failed to open file '/sys/class/net/tap0/operstate': No such 
file or directory
libvirtd[1734]: unable to read: /sys/class/net/tap0/operstate: No such file or 
directory
kernel: [   38.948590] IPv6: ADDRCONF(NETDEV_UP): tap0: link is not ready
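The missing /sys/class/net/tap0/operstate path in the libvirtd messages is what 
you would expect once tap0 has been moved out of the root namespace, so a first 
check is to look at (and bring up) the link from inside the namespace; a sketch 
assuming the namespace is called red, as in the script below:

    ip netns exec red ip link set dev tap0 up
    ip netns exec red ip link show dev tap0
    ip netns exec red cat /sys/class/net/tap0/operstate

If the interface still never shows RUNNING, it may be that nothing has attached to 
the tap's character device; whether that matters for the af_packet path is worth 
confirming with a test ping across the vdev.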


> -Original Message-
> From: Avi Cohen (A)
> Sent: Tuesday, 20 June, 2017 12:41 PM
> To: 'ovs-discuss@openvswitch.org'; 'us...@dpdk.org'
> Subject: OVS-DPDK - TAP0 - vdev (af_packet) device is linkdown
> 
> Hello All,
> 
> I upgraded to dpdk-17.05 and ovs-2.7.1 and created an af_packet vdev device
> according to the config script below. This tap device connects the ovs-dpdk
> to a namespace. The problem is that this tap0 device is link-down (it never
> reaches the RUNNING state), hence no packet is received or transmitted on this
> interface, although the tap0 creation in OVS is OK. I'm attaching the config
> script and the output of netstat and ifconfig in the namespace. Can someone
> please tell me how to activate the tap0?
> 
> Config script :
> 
> 
> cd ../dpdk-17.05/usertools/
> modprobe uio
> insmod ../x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
> ifconfig ens3 down
> 
> ./dpdk-devbind.py --bind=igb_uio :00:03.0
> ./dpdk-devbind.py --status
> 
> cd -
> 
> pkill -9 ovs
> rm -rf /usr/local/var/run/openvswitch
> rm -rf /usr/local/etc/openvswitch/
> rm -f /usr/local/etc/openvswitch/conf.db
> mkdir -p /usr/local/etc/openvswitch
> mkdir -p /usr/local/var/run/openvswitch
> 
> 
> ./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db ./vswitchd/vswitch.ovsschema
> ./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
>   --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
> ./utilities/ovs-vsctl --no-wait init
> 
> echo 8192 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> echo 8192 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> 
> mkdir -p /mnt/huge
> mkdir -p /mnt/huge_2mb
> mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
> 
> modprobe openvswitch
> export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
> 
> ./utilities/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> ./vswitchd/ovs-vswitchd unix:$DB_SOCK --pidfile --detach
> ./utilities/ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x80
> 
> 
> 
> utilities/ovs-vsctl --may-exist add-br br-int \
>   -- set Bridge br-int datapath_type=netdev \
>   -- br-set-external-id br-int bridge-id br-int \
>   -- set bridge br-int fail-mode=standalone
> 
> 
> 
> ip tuntap add dev tap0 mode tap
> 
> ovs-vsctl add-port br-int tap0 -- set Interface tap0 type=dpdk \
> options:dpdk-devargs=eth_af_packet0,iface=tap0
> 
> 
> 
> ip netns add red
> ip link set tap0 netns red
> ip netns exec red ip addr add 1.1.1.20/24 dev tap0
> ip netns exec red ip link set tap0 up
> 
> 
> 
> 
> utilities/ovs-vsctl --may-exist add-port br-int vxlan0 \
>   -- set interface vxlan0 type=vxlan options:remote_ip=172.31.100.44 options:key=1000
> 
> 
> 
> 
> utilities/ovs-vsctl --may-exist add-br br-phy \
> -- set Bridge br-phy datapath_type=netdev \
> -- br-set-external-id br-phy bridge-id br-phy \
> -- set bridge br-phy fail-mode=standalone \
>other_config:hwaddr=02:d7:d1:26:84:e5
> 
> 
> utilities/ovs-vsctl --timeout 10 --may-exist add-port br-phy dpdk0 \
>   -- set Interface dpdk0 type=dpdk options:dpdk-devargs=:00:03.0
> 
> 
> 
> 
> ip addr add 172.31.100.80/24 dev br-phy
> 
> 
> 
> ip link set br-phy up
> ip link set br-int up
> iptables -F
> 
> 
> utilities/ovs-appctl ovs/route/show
> 
> 
> ip netns exec red bash
> ifconfig lo up
> 
> #
> netstat -v -s -e
> Ip:
> 7 total packets received
> 0 forwarded
> 0 incoming packets discarded
> 7 incoming packets delivered
> 14 requests sent out
> Icmp:
> 7 ICMP messages received
> 0 input ICMP message failed.
> ICMP input histogram:
> destination unreachable: 7
> 14 ICMP messages sent
> 0 ICMP messages failed
> ICMP output histogram:
> destination unreachable: 7
> echo request: 7
> IcmpMsg:
> InType3: 7
> OutType3: 7
> OutType8: 7
> Tcp:
> 0 active connections openings
> 0 passive connection openings
> 0 failed connection attempts
> 0 connection resets received
> 0 connections established
> 0 segments received
> 0 segments send out
> 0 segments retransmited
> 0 bad segments received.
> 0 resets sent
> Udp:
> 0 packets received
> 0 packets to unknown port received.
> 0 packet receive errors
> 0 packets sent
> UdpLite:
> TcpExt:
> 0 packet headers predicted
> IpExt:
> InOctets: 784
> OutOctets: 1372
> InNoECTPkts: 7
> root@ip-172-31-100-80:/home/ubuntu/openvswitch-2.7.0# ifconfig
> loLink encap:Local Loopback
>   inet addr:127.0.0.1  Mask:255.0.0.0
>   inet6 

Re: [ovs-discuss] OVS-DPDK - af_packet vdev configuration - Error

2017-06-19 Thread Avi Cohen (A)
OK - solved.
The problem was with the device MTU, which is set to 1518 in the af_packet PMD.
In my port creation I set its MTU to 9000,
so I've changed the MTU in the af_packet driver to 9000.
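An alternative sketch that avoids patching the PMD is to keep the OVS port at an 
MTU the vdev actually supports, or to request the larger MTU through the database 
and then verify what was applied (tap1 is the port name used in this thread):

    ovs-vsctl set Interface tap1 mtu_request=1500
    ovs-vsctl get Interface tap1 mtu
    ovs-vsctl get Interface tap1 error

mtu_request is only honoured if the underlying DPDK device accepts it, so checking 
the mtu and error columns afterwards shows whether the mismatch is resolved.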

> -Original Message-
> From: Avi Cohen (A)
> Sent: Sunday, 18 June, 2017 5:45 PM
> To: 'ovs-discuss@openvswitch.org'; 'us...@dpdk.org'
> Subject: RE: OVS-DPDK - af_packet vdev configuration - Error
> 
> I upgraded to ovs 2.7.0 and dpdk 17.02.1 and got these error messages
> when I run
>  ovs-vsctl add-port br0 tap1 -- set Interface tap1 type=dpdk \
>   options:dpdk-devargs=eth_af_packet0,iface=tap1
> 
> Jun 18 14:36:29 ip-172-31-100-80 ovs-vswitchd[54914]:
> ovs|00047|netdev_dpdk|INFO|Device 'eth_af_packet0,iface=tap1' attached
> to DPDK
> 
> Jun 18 14:36:29 ip-172-31-100-80 ovs-vswitchd[54914]:
> ovs|00050|netdev_dpdk|WARN|Interface tap1 eth_dev setup error Invalid
> argument
> 
> Jun 18 14:36:29 ip-172-31-100-80 ovs-vswitchd[54914]:
> ovs|00051|netdev_dpdk|ERR|Interface tap1(rxq:1 txq:1) configure error:
> Invalid argument
> 
> Jun 18 14:36:29 ip-172-31-100-80 ovs-vswitchd[54914]:
> ovs|00052|dpif_netdev|ERR|Failed to set interface tap1 new configuration
> 
> Jun 18 14:36:29 ip-172-31-100-80 ovs-vswitchd[54914]:
> ovs|00053|bridge|WARN|could not add network device tap1 to ofproto (No
> such device)
> 
> Can someone tell me what is wrong with my CLI command?
> Best Regards
> avi
> 
> > -Original Message-
> > From: Avi Cohen (A)
> > Sent: Sunday, 18 June, 2017 1:59 PM
> > To: ovs-discuss@openvswitch.org; us...@dpdk.org
> > Subject: OVS-DPDK - af_packet vdev configuration - Error
> >
> > Hi All,
> >
> > I have ovs 2.6.1 with dpdk-stable-16.07.2 ,  I'm trying to create  an
> > af_packet vdev interface with the following commands:
> >
> > 1.ip tuntap add dev tap1 mode tap
> >
> >2.ovs-vsctl add-port br0 tap1 -- set Interface tap1 type=dpdk \
> >   options:dpdk-devargs=eth_af_packet0,iface=tap1
> >
> > but I get an error message [could not open network device tap1 - No
> > such device ]
> >
> > can you assist  how to set this interface ?
> >
> > Best Regards
> > avi
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK - Very poor performance when connected to namespace/container

2017-06-16 Thread Darrell Ball
This should be quite a bit better than the AF_PACKET PMD.
This becomes available in 17.08 and is important; it would be nice to get it into
the next OVS release.

Also, is there any existing data on the performance advantage of the AF_PACKET PMD
with a single queue?

Darrell


On 6/16/17, 1:56 AM, "ovs-discuss-boun...@openvswitch.org on behalf of Gray, 
Mark D"  wrote:

Hi Avi,

The other option is to use virtio-user 
(https://urldefense.proofpoint.com/v2/url?u=http-3A__dpdk.org_doc_guides_howto_virtio-5Fuser-5Ffor-5Fcontainer-5Fnetworking.html=DwICAg=uilaK90D4TOVoH58JNXRgQ=BVhFA09CGX7JQ5Ih-uZnsw=_7l_cdKorhow4zRqAu5lnTmzv9Etgn5TX7D6P0pqP8c=ThmUVGZC2M1LBAIhnvs-5OiSZz7ywpez2Qj70BrQjoM=
 ) which gives dpdk-like performance to a dpdk application running in a 
container. The configuration for ovs-dpdk has not been documented but it is 
possible to use (as a vdev).

Also, I have dropped the dpdk-...@lists.01.org mail address as this is for 
the discontinued ovdk project.

Mark

> -Original Message-
> From: Dpdk-ovs [mailto:dpdk-ovs-boun...@lists.01.org] On Behalf Of
> Mooney, Sean K
> Sent: Thursday, June 15, 2017 12:33 PM
> To: Avi Cohen (A) ; dpdk-...@lists.01.org;
> us...@dpdk.org; ovs-discuss@openvswitch.org
> Subject: Re: [Dpdk-ovs] OVS-DPDK - Very poor performance when
> connected to namespace/container
> 
> 
> 
> > -Original Message-
> > From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> > Sent: Thursday, June 15, 2017 9:50 AM
> > To: Mooney, Sean K ; dpdk-...@lists.01.org;
> > us...@dpdk.org; ovs-discuss@openvswitch.org
> > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > namespace/container
> >
> >
> >
> > > -Original Message-
> > > From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> > > Sent: Thursday, 15 June, 2017 11:24 AM
> > > To: Avi Cohen (A); dpdk-...@lists.01.org; us...@dpdk.org; ovs-
> > > disc...@openvswitch.org
> > > Cc: Mooney, Sean K
> > > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > > namespace/container
> > >
> > >
> > >
> > > > -Original Message-
> > > > From: Dpdk-ovs [mailto:dpdk-ovs-boun...@lists.01.org] On Behalf Of
> > > > Avi Cohen (A)
> > > > Sent: Thursday, June 15, 2017 8:14 AM
> > > > To: dpdk-...@lists.01.org; us...@dpdk.org;
> > > > ovs-discuss@openvswitch.org
> > > > Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when
> > > > connected to namespace/container
> > > >
> > > > Hello   All,
> > > > I have OVS-DPDK connected to a namespace via veth pair device.
> > > >
> > > > I've got a very poor performance - compared to normal OVS (i.e. no
> > > > DPDK).
> > > > For example - TCP jumbo pkts throughput: normal OVS  ~ 10Gbps ,
> > OVS-
> > > > DPDK 1.7 Gbps.
> > > >
> > > > This can be explained as follows:
> > > > veth is implemented in kernel - in OVS-DPDK data is transferred
> > from
> > > > veth to user space while in normal OVS we save this transfer
> > > [Mooney, Sean K] that is part of the reason, the other reson this is
> > > slow and The main limiter to scalling adding veth pairs or ovs
> > > internal port to ovs with dpdk is That these linux kernel ports are
> > > not processed by the dpdk pmds. They are server by the Ovs-vswitchd
> > > main thread via a fall back to the non dpdk acclarated netdev
> > implementation.
> > > >
> > > > Is there any other device to connect to namespace ? something like
> > > > vhost-user ? I understand that vhost-user cannot be used for
> > > > namespace
> > > [Mooney, Sean K] I have been doing some experiments in this regard.
> > > You should be able to use the tap, pcap or afpacket pmd to add a
> > > vedv that will improve Performance. I have seen some strange issue
> > > with
> > the
> > > tap pmd that cause packet to be drop By the kernel on tx on some
> > ports
> > > but not others so there may be issues with that dirver.
> > >
> > > Previous experiment with libpcap seemed to work well with ovs 2.5
> > > but I have not tried it With ovs 2.7/master since the introduction
> > > of generic vdev support at runtime. Previously vdevs And to be
> > > allocated
> > using the dpdk args.
> > >
> > > I would try following the af_packet example here
> > >
> >
> 
https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openvswitch_ovs_blob_b132189d8456f38f3ee139f126d68=DwICAg=uilaK90D4TOVoH58JNXRgQ=BVhFA09CGX7JQ5Ih-uZnsw=_7l_cdKorhow4zRqAu5lnTmzv9Etgn5TX7D6P0pqP8c=REyqxTB8Gd9BnEtetH_Aul0OgyyGK0DFhKl3tFGzOGI=
 
> 0
> > > 9 01a9ee9a8/Documentation/howto/dpdk.rst#vdev-support
> > >

Re: [ovs-discuss] OVS-DPDK - Very poor performance when connected to namespace/container

2017-06-16 Thread Gray, Mark D
Hi Avi,

The other option is to use virtio-user 
(http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.html) 
which gives dpdk-like performance to a dpdk application running in a container. 
The configuration for ovs-dpdk has not been documented but it is possible to 
use (as a vdev).
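A rough sketch of what that setup can look like (the port name, socket path and 
testpmd parameters below are examples, not from this thread): OVS-DPDK exposes a 
vhost-user socket, and the DPDK application inside the container attaches to it 
through a virtio-user vdev.

    # OVS side: create a vhost-user port on br0 (socket appears in the OVS run dir)
    ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser

    # container side: a DPDK app such as testpmd attaches via virtio-user
    testpmd -l 1-2 --socket-mem 1024 --no-pci \
        --vdev=virtio_user0,path=/usr/local/var/run/openvswitch/vhost-user0 -- -i

The socket has to be volume-mounted into the container and hugepages shared for 
this to work.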

Also, I have dropped the dpdk-...@lists.01.org mail address as this is for the 
discontinued ovdk project.

Mark

> -Original Message-
> From: Dpdk-ovs [mailto:dpdk-ovs-boun...@lists.01.org] On Behalf Of
> Mooney, Sean K
> Sent: Thursday, June 15, 2017 12:33 PM
> To: Avi Cohen (A) ; dpdk-...@lists.01.org;
> us...@dpdk.org; ovs-discuss@openvswitch.org
> Subject: Re: [Dpdk-ovs] OVS-DPDK - Very poor performance when
> connected to namespace/container
> 
> 
> 
> > -Original Message-
> > From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> > Sent: Thursday, June 15, 2017 9:50 AM
> > To: Mooney, Sean K ; dpdk-...@lists.01.org;
> > us...@dpdk.org; ovs-discuss@openvswitch.org
> > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > namespace/container
> >
> >
> >
> > > -Original Message-
> > > From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> > > Sent: Thursday, 15 June, 2017 11:24 AM
> > > To: Avi Cohen (A); dpdk-...@lists.01.org; us...@dpdk.org; ovs-
> > > disc...@openvswitch.org
> > > Cc: Mooney, Sean K
> > > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > > namespace/container
> > >
> > >
> > >
> > > > -Original Message-
> > > > From: Dpdk-ovs [mailto:dpdk-ovs-boun...@lists.01.org] On Behalf Of
> > > > Avi Cohen (A)
> > > > Sent: Thursday, June 15, 2017 8:14 AM
> > > > To: dpdk-...@lists.01.org; us...@dpdk.org;
> > > > ovs-discuss@openvswitch.org
> > > > Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when
> > > > connected to namespace/container
> > > >
> > > > Hello   All,
> > > > I have OVS-DPDK connected to a namespace via veth pair device.
> > > >
> > > > I've got a very poor performance - compared to normal OVS (i.e. no
> > > > DPDK).
> > > > For example - TCP jumbo pkts throughput: normal OVS  ~ 10Gbps ,
> > OVS-
> > > > DPDK 1.7 Gbps.
> > > >
> > > > This can be explained as follows:
> > > > veth is implemented in kernel - in OVS-DPDK data is transferred
> > from
> > > > veth to user space while in normal OVS we save this transfer
> > > [Mooney, Sean K] that is part of the reason, the other reson this is
> > > slow and The main limiter to scalling adding veth pairs or ovs
> > > internal port to ovs with dpdk is That these linux kernel ports are
> > > not processed by the dpdk pmds. They are server by the Ovs-vswitchd
> > > main thread via a fall back to the non dpdk acclarated netdev
> > implementation.
> > > >
> > > > Is there any other device to connect to namespace ? something like
> > > > vhost-user ? I understand that vhost-user cannot be used for
> > > > namespace
> > > [Mooney, Sean K] I have been doing some experiments in this regard.
> > > You should be able to use the tap, pcap or afpacket pmd to add a
> > > vedv that will improve Performance. I have seen some strange issue
> > > with
> > the
> > > tap pmd that cause packet to be drop By the kernel on tx on some
> > ports
> > > but not others so there may be issues with that dirver.
> > >
> > > Previous experiment with libpcap seemed to work well with ovs 2.5
> > > but I have not tried it With ovs 2.7/master since the introduction
> > > of generic vdev support at runtime. Previously vdevs And to be
> > > allocated
> > using the dpdk args.
> > >
> > > I would try following the af_packet example here
> > >
> >
> https://github.com/openvswitch/ovs/blob/b132189d8456f38f3ee139f126d68
> 0
> > > 9 01a9ee9a8/Documentation/howto/dpdk.rst#vdev-support
> > >
> > [Avi Cohen (A)]
> > Thank you Mooney, Sean K
> > I already tried to connect the namespace with a tap device (see 1 & 2
> > below)  - and got the worst performance  for some reason the packet
> > is cut to default MTU inside the  OVS-DPDK which transmit the packet
> > to its peer. - although all interfaces MTU were set to 9000.
> >
> >  1. ovs-vsctl add-port $BRIDGE tap1 -- set Interface tap1
> > type=internal
> >
> >  2. ip link set tap1 netns ns1 // attach it to namespace
> [Mooney, Sean K] this is not using the dpdk tap pmd , internal port and veth
> ports If added to ovs will not be accelerated by dpdk unless you use a vdev to
> attach them.
> >
> > I'm looking at your link to create a virtual PMD with vdev support - I
> > see there a creation of a virtual PMD device , but I'm not sure how
> > this is connected to the namespace ?  what device should I assign to
> > the namespace ?
> [Mooney, Sean K]
> You would use it as follows
> 
> ip tuntap add dev tap1 mode tap
> 
> ovs-vsctl add-port br0 tap1 -- set Interface tap1 type=dpdk \
> options:dpdk-devargs=eth_af_packet0,iface=tap1
> 
> ip link set tap1 netns ns1
> 
> ip netns exec ns1 ifconfig 192.168.1.1/24 up
> 
> in 

Re: [ovs-discuss] OVS-DPDK - Very poor performance when connected to namespace/container

2017-06-15 Thread Avi Cohen (A)


> -Original Message-
> From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> Sent: Thursday, 15 June, 2017 2:33 PM
> To: Avi Cohen (A); dpdk-...@lists.01.org; us...@dpdk.org; ovs-
> disc...@openvswitch.org
> Subject: RE: OVS-DPDK - Very poor performance when connected to
> namespace/container
> 
> 
> 
> > -Original Message-
> > From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> > Sent: Thursday, June 15, 2017 9:50 AM
> > To: Mooney, Sean K ; dpdk-...@lists.01.org;
> > us...@dpdk.org; ovs-discuss@openvswitch.org
> > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > namespace/container
> >
> >
> >
> > > -Original Message-
> > > From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> > > Sent: Thursday, 15 June, 2017 11:24 AM
> > > To: Avi Cohen (A); dpdk-...@lists.01.org; us...@dpdk.org; ovs-
> > > disc...@openvswitch.org
> > > Cc: Mooney, Sean K
> > > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > > namespace/container
> > >
> > >
> > >
> > > > -Original Message-
> > > > From: Dpdk-ovs [mailto:dpdk-ovs-boun...@lists.01.org] On Behalf Of
> > > > Avi Cohen (A)
> > > > Sent: Thursday, June 15, 2017 8:14 AM
> > > > To: dpdk-...@lists.01.org; us...@dpdk.org;
> > > > ovs-discuss@openvswitch.org
> > > > Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when
> > > > connected to namespace/container
> > > >
> > > > Hello   All,
> > > > I have OVS-DPDK connected to a namespace via veth pair device.
> > > >
> > > > I've got a very poor performance - compared to normal OVS (i.e. no
> > > > DPDK).
> > > > For example - TCP jumbo pkts throughput: normal OVS  ~ 10Gbps ,
> > OVS-
> > > > DPDK 1.7 Gbps.
> > > >
> > > > This can be explained as follows:
> > > > veth is implemented in kernel - in OVS-DPDK data is transferred
> > from
> > > > veth to user space while in normal OVS we save this transfer
> > > [Mooney, Sean K] that is part of the reason, the other reson this is
> > > slow and The main limiter to scalling adding veth pairs or ovs
> > > internal port to ovs with dpdk is That these linux kernel ports are
> > > not processed by the dpdk pmds. They are server by the Ovs-vswitchd
> > > main thread via a fall back to the non dpdk acclarated netdev
> > implementation.
> > > >
> > > > Is there any other device to connect to namespace ? something like
> > > > vhost-user ? I understand that vhost-user cannot be used for
> > > > namespace
> > > [Mooney, Sean K] I have been doing some experiments in this regard.
> > > You should be able to use the tap, pcap or afpacket pmd to add a
> > > vedv that will improve Performance. I have seen some strange issue
> > > with
> > the
> > > tap pmd that cause packet to be drop By the kernel on tx on some
> > ports
> > > but not others so there may be issues with that dirver.
> > >
> > > Previous experiment with libpcap seemed to work well with ovs 2.5
> > > but I have not tried it With ovs 2.7/master since the introduction
> > > of generic vdev support at runtime. Previously vdevs And to be
> > > allocated
> > using the dpdk args.
> > >
> > > I would try following the af_packet example here
> > >
> > https://github.com/openvswitch/ovs/blob/b132189d8456f38f3ee139f126d680
> > > 9 01a9ee9a8/Documentation/howto/dpdk.rst#vdev-support
> > >
> > [Avi Cohen (A)]
> > Thank you Mooney, Sean K
> > I already tried to connect the namespace with a tap device (see 1 & 2
> > below)  - and got the worst performance  for some reason the packet
> > is cut to default MTU inside the  OVS-DPDK which transmit the packet
> > to its peer. - although all interfaces MTU were set to 9000.
> >
> >  1. ovs-vsctl add-port $BRIDGE tap1 -- set Interface tap1
> > type=internal
> >
> >  2. ip link set tap1 netns ns1 // attach it to namespace
> [Mooney, Sean K] this is not using the dpdk tap pmd , internal port and veth
> ports If added to ovs will not be accelerated by dpdk unless you use a vdev to
> attach them.
> >
> > I'm looking at your link to create a virtual PMD with vdev support - I
> > see there a creation of a virtual PMD device , but I'm not sure how
> > this is connected to the namespace ?  what device should I assign to
> > the namespace ?
> [Mooney, Sean K]
> You would use it as follows
> 
> ip tuntap add dev tap1 mode tap
> 
> ovs-vsctl add-port br0 tap1 -- set Interface tap1 type=dpdk \
> options:dpdk-devargs=eth_af_packet0,iface=tap1
[Avi Cohen (A)] 
Thanks Sean - are you sure about the syntax? I get an error message [could not open 
network device tap1 - No such device] when I add-port.
The syntax in your link is different - note there are myeth0 and eth0, while in 
your command there is only tap1.
The command in the link is as follows:
" ovs-vsctl add-port br0 myeth0 -- set Interface myeth0 type=dpdk \
options:dpdk-devargs=eth_af_packet0,iface=eth0"
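For what it's worth, the two names do not have to match: the first argument to 
add-port is only the OVS port/Interface name, while iface= names the kernel tap 
the af_packet vdev binds to. A sketch mixing the two styles, with myport0 as a 
purely illustrative port name and assuming the tap already exists in the current 
namespace:

    ip tuntap add dev tap1 mode tap
    ovs-vsctl add-port br0 myport0 -- set Interface myport0 type=dpdk \
        options:dpdk-devargs=eth_af_packet0,iface=tap1

A "No such device" error typically means the iface= target does not exist, or has 
already been moved into another namespace, at the time the port is added.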

> 
> ip link set tap1 netns ns1
> 
> ip netns exec ns1 ifconfig 192.168.1.1/24 up
> 
> in general though if you are using ovs-dpdk you should avoid using 

Re: [ovs-discuss] OVS-DPDK - Very poor performance when connected to namespace/container

2017-06-15 Thread Mooney, Sean K


> -Original Message-
> From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> Sent: Thursday, June 15, 2017 9:50 AM
> To: Mooney, Sean K ; dpdk-...@lists.01.org;
> us...@dpdk.org; ovs-discuss@openvswitch.org
> Subject: RE: OVS-DPDK - Very poor performance when connected to
> namespace/container
> 
> 
> 
> > -Original Message-
> > From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> > Sent: Thursday, 15 June, 2017 11:24 AM
> > To: Avi Cohen (A); dpdk-...@lists.01.org; us...@dpdk.org; ovs-
> > disc...@openvswitch.org
> > Cc: Mooney, Sean K
> > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > namespace/container
> >
> >
> >
> > > -Original Message-
> > > From: Dpdk-ovs [mailto:dpdk-ovs-boun...@lists.01.org] On Behalf Of
> > > Avi Cohen (A)
> > > Sent: Thursday, June 15, 2017 8:14 AM
> > > To: dpdk-...@lists.01.org; us...@dpdk.org;
> > > ovs-discuss@openvswitch.org
> > > Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when connected
> > > to namespace/container
> > >
> > > Hello   All,
> > > I have OVS-DPDK connected to a namespace via veth pair device.
> > >
> > > I've got a very poor performance - compared to normal OVS (i.e. no
> > > DPDK).
> > > For example - TCP jumbo pkts throughput: normal OVS  ~ 10Gbps ,
> OVS-
> > > DPDK 1.7 Gbps.
> > >
> > > This can be explained as follows:
> > > veth is implemented in kernel - in OVS-DPDK data is transferred
> from
> > > veth to user space while in normal OVS we save this transfer
> > [Mooney, Sean K] that is part of the reason, the other reson this is
> > slow and The main limiter to scalling adding veth pairs or ovs
> > internal port to ovs with dpdk is That these linux kernel ports are
> > not processed by the dpdk pmds. They are server by the Ovs-vswitchd
> > main thread via a fall back to the non dpdk acclarated netdev
> implementation.
> > >
> > > Is there any other device to connect to namespace ? something like
> > > vhost-user ? I understand that vhost-user cannot be used for
> > > namespace
> > [Mooney, Sean K] I have been doing some experiments in this regard.
> > You should be able to use the tap, pcap or afpacket pmd to add a vedv
> > that will improve Performance. I have seen some strange issue with
> the
> > tap pmd that cause packet to be drop By the kernel on tx on some
> ports
> > but not others so there may be issues with that dirver.
> >
> > Previous experiment with libpcap seemed to work well with ovs 2.5 but
> > I have not tried it With ovs 2.7/master since the introduction of
> > generic vdev support at runtime. Previously vdevs And to be allocated
> using the dpdk args.
> >
> > I would try following the af_packet example here
> >
> https://github.com/openvswitch/ovs/blob/b132189d8456f38f3ee139f126d680
> > 9 01a9ee9a8/Documentation/howto/dpdk.rst#vdev-support
> >
> [Avi Cohen (A)]
> Thank you Mooney, Sean K
> I already tried to connect the namespace with a tap device (see 1 & 2
> below)  - and got the worst performance  for some reason the packet  is
> cut to default MTU inside the  OVS-DPDK which transmit the packet to
> its peer. - although all interfaces MTU were set to 9000.
> 
>  1. ovs-vsctl add-port $BRIDGE tap1 -- set Interface tap1 type=internal
> 
>  2. ip link set tap1 netns ns1 // attach it to namespace
[Mooney, Sean K] This is not using the DPDK tap PMD; internal ports and veth 
ports, 
if added to OVS, will not be accelerated by DPDK unless you use a vdev to attach 
them.
> 
> I'm looking at your link to create a virtual PMD with vdev support - I
> see there a creation of a virtual PMD device , but I'm not sure how
> this is connected to the namespace ?  what device should I assign to
> the namespace ?
[Mooney, Sean K] 
You would use it as follows

ip tuntap add dev tap1 mode tap

ovs-vsctl add-port br0 tap1 -- set Interface tap1 type=dpdk \
options:dpdk-devargs=eth_af_packet0,iface=tap1

ip link set tap1 netns ns1

ip netns exec ns1 ifconfig 192.168.1.1/24 up

In general, though, if you are using ovs-dpdk you should avoid using network 
namespaces and
the kernel where possible, but the above should improve your performance. One 
caveat: the number
of vdev + physical interfaces is limited by how DPDK is compiled (by default to 32 
devices), but it can be increased
to 256 if required.
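A sketch of raising that limit, assuming a classic make-based DPDK build where the 
cap comes from RTE_MAX_ETHPORTS:

    # in the DPDK build configuration (e.g. config/common_base), before rebuilding
    # DPDK and then OVS against it:
    CONFIG_RTE_MAX_ETHPORTS=256

After the rebuild, the extra vdev and physical ports can be added with the same 
ovs-vsctl commands as above.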

> 
> Best Regards
> avi
> 
> > if you happen to be investigating this for use with openstack routers
> > we Are currently working on a way to remove the use of namespace
> > entirely for dvr when using The default neutron agent and sdn
> > controllers such as ovn already provide that functionality.
> > >
> > > Best Regards
> > > avi
> > > ___
> > > Dpdk-ovs mailing list
> > > dpdk-...@lists.01.org
> > > https://lists.01.org/mailman/listinfo/dpdk-ovs
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVS-DPDK - Very poor performance when connected to namespace/container

2017-06-15 Thread Avi Cohen (A)


> -Original Message-
> From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> Sent: Thursday, 15 June, 2017 11:24 AM
> To: Avi Cohen (A); dpdk-...@lists.01.org; us...@dpdk.org; ovs-
> disc...@openvswitch.org
> Cc: Mooney, Sean K
> Subject: RE: OVS-DPDK - Very poor performance when connected to
> namespace/container
> 
> 
> 
> > -Original Message-
> > From: Dpdk-ovs [mailto:dpdk-ovs-boun...@lists.01.org] On Behalf Of Avi
> > Cohen (A)
> > Sent: Thursday, June 15, 2017 8:14 AM
> > To: dpdk-...@lists.01.org; us...@dpdk.org; ovs-discuss@openvswitch.org
> > Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when connected to
> > namespace/container
> >
> > Hello   All,
> > I have OVS-DPDK connected to a namespace via veth pair device.
> >
> > I've got a very poor performance - compared to normal OVS (i.e. no
> > DPDK).
> > For example - TCP jumbo pkts throughput: normal OVS  ~ 10Gbps , OVS-
> > DPDK 1.7 Gbps.
> >
> > This can be explained as follows:
> > veth is implemented in kernel - in OVS-DPDK data is transferred from
> > veth to user space while in normal OVS we save this transfer
> [Mooney, Sean K] That is part of the reason. The other reason this is slow, and
> the main limiter to scaling when adding veth pairs or OVS internal ports to OVS
> with DPDK, is that these Linux kernel ports are not processed by the DPDK PMDs.
> They are served by the ovs-vswitchd main thread via a fallback to the non-DPDK
> accelerated netdev implementation.
> >
> > Is there any other device to connect to namespace ? something like
> > vhost-user ? I understand that vhost-user cannot be used for namespace
> [Mooney, Sean K] I have been doing some experiments in this regard.
> You should be able to use the tap, pcap or af_packet PMD to add a vdev that
> will improve performance. I have seen some strange issues with the tap PMD that
> cause packets to be dropped by the kernel on TX on some ports but not others,
> so there may be issues with that driver.
> 
> A previous experiment with libpcap seemed to work well with OVS 2.5, but I have
> not tried it with OVS 2.7/master since the introduction of generic vdev support
> at runtime. Previously vdevs had to be allocated using the dpdk args.
> 
> I would try following the af_packet example here
> https://github.com/openvswitch/ovs/blob/b132189d8456f38f3ee139f126d6809
> 01a9ee9a8/Documentation/howto/dpdk.rst#vdev-support
> 
[Avi Cohen (A)] 
Thank you Mooney, Sean K
I already tried to connect the namespace with a tap device (see 1 & 2 below) 
and got the worst performance: 
for some reason the packet is cut to the default MTU inside OVS-DPDK, which 
transmits the packet to its peer, although all interface MTUs were set to 9000.

 1. ovs-vsctl add-port $BRIDGE tap1 -- set Interface tap1 type=internal
 
 2. ip link set tap1 netns ns1 // attach it to namespace

I'm looking at your link to create a virtual PMD with vdev support. I see 
there the creation of a virtual PMD device, but I'm not sure how this is 
connected to the namespace, and what device should I assign to the namespace? 

Best Regards
avi

> if you happen to be investigating this for use with openstack routers we Are
> currently working on a way to remove the use of namespace entirely for dvr
> when using The default neutron agent and sdn controllers such as ovn already
> provide that functionality.
> >
> > Best Regards
> > avi
> > ___
> > Dpdk-ovs mailing list
> > dpdk-...@lists.01.org
> > https://lists.01.org/mailman/listinfo/dpdk-ovs
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


  1   2   >