Re: [ovs-discuss] Open vSwitch fails to allocate memory pool for DPDK port

2019-05-20 Thread Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) via discuss
Hi Ian,

Just to close on this.
Using Open vSwitch 2.9.3 with DPDK 17.11.5 fixed the issue.

Thanks for your help!

On 4/16/19, 12:46 PM, "Ian Stokes"  wrote:

On 4/3/2019 5:15 PM, Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) wrote:
> Sorry I just accidentally sent out the last mail too early...
> 
> Hi Ian,
>  
> To answer your questions, I just reproduced the issue on my system:
>  
> What commands have you used to configure the hugepage memory on your 
system?
> - I have added the following kernel parameters: hugepagesz=2M 
hugepages=512 and then rebooted the system. In other scenarios I allocated the 
HugePages by writing into 
/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages. The HugePages are also 
mounted on: hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
>  

Hi Tobias,

I was able to reproduce the issue today.

The error actually occurs inside DPDK, so as such it's not a bug in OVS.

I was able to reproduce the issue with DPDK 17.11.0, 17.11.1 and 
17.11.2. The issue itself seems to be resolved on my system by the 
following commit in DPDK 17.11.3:

222f91da4714 ("mem: do not use physical addresses in IOVA as VA mode")

Since you are using OVS 2.9.3 I would suggest moving to use DPDK 17.11.4 
at least.
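
For reference, one way to confirm which DPDK version the running ovs-vswitchd 
was actually built against (a sketch; the dpdk_version column is an assumption 
about the schema your build ships, so fall back to the --version banner if it 
isn't there):

    # Query the DPDK version string recorded by ovs-vswitchd, if the
    # dpdk_version column is present in your Open_vSwitch schema.
    ovs-vsctl get Open_vSwitch . dpdk_version

    # Depending on the build, the daemon's version banner may also list
    # the DPDK version it was compiled with.
    ovs-vswitchd --version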

The mapping for OVS to DPDK versions is available in the OVS release notes:

http://docs.openvswitch.org/en/latest/faq/releases/

Can you give this a try and see if it resolves the issue for you?

HTH
Ian

> Before you start OVS with DPDK, if you execute cat /proc/meminfo how many 
hugepages do you see available and how many free? (for 2mb I would assume 512 
in both cases).
>- In this specific case, I only saw 512 HugePages_Total and 0 free 
because after the restart OVS was already using the 512 pages.
>   
> What memory commands are you passing to OVS with DPDK (e.g. 
dpdk-socket-mem parameter etc.)?
> - Nothing, meaning the default memory of 1GB.
> 
> Is it just 1 bridge and a single DPDK interface you are adding or are 
there more than 1 DPDK interface attached?
> - There are 4 bridges in total but only one is using netdev datapath 
and DPDK ports. Here is an extract of that one DPDK bridge:
> Bridge lan-br
>  Port lan-br
>  Interface lan-br
>  type: internal
>  Port "dpdk-p0"
>  Interface "dpdk-p0"
>  type: dpdk
>  options: {dpdk-devargs=":08:0b.2"}
>  error: "could not add network device dpdk-p0 to ofproto 
(No such device)"
> 
> Can you provide the entire log? I'd be interested in seeing the memory  
info at initialization of OVS DPDK.
>   - I attached it to that mail.
> 
> What type of DPDK device are you adding? It seems to be a Virtual 
function from the log above, can you provide more detail as regards the 
underlying NIC type the VF is associated with?
> -  It's a VF. NIC type is Intel 710 and driver is i40
> 
> DPDK Version is 17.11.0
> 
> Thanks
> Tobias
> 
> On 4/3/19, 9:07 AM, "Tobias Hofmann -T (tohofman - AAP3 INC at Cisco)" 
 wrote:
> 
>  Hi Ian,
>  
>  To answer your questions, I just reproduced the issue on my system:
>  
>  What commands have you used to configure the hugepage memory on your 
system?
>  - I have added the following kernel parameters: hugepagesz=2M 
hugepages=512 and then rebooted the system. In other scenarios I allocated the 
HugePages by writing into 
/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages.
>The HugePages are also mounted on: hugetlbfs on /dev/hugepages 
type hugetlbfs (rw,relatime,seclabel)
>  
>  Before you start OVS with DPDK, if you execute cat /proc/meminfo how 
many hugepages do you see available and how many free? (for 2mb I would assume 
512 in both cases).
>  - In this specific case, I only saw 512 HugePages_Total and 0 free 
because after the restart OVS was already using the 512 pages.
    >      
>  What memory commands are you passing to OVS with DPDK (e.g. 
dpdk-socket-mem parameter etc.)?
>  
>  On 4/3/19, 6:00 AM, "Ian Stokes"  wrote:
>  
>  On 4/3/2019 1:04 AM, Tobias Hofmann -T (tohofman - AAP3 INC at 
Cisco)
>  via discuss wrote:
>  > Hello,
>  >
>  > I’m trying to attach a DPDK port with an mtu_s

Re: [ovs-discuss] OVS-DPDK fails after clearing buffer

2019-04-05 Thread Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) via discuss
Hi Anatoly,

I just wanted to follow up on the issue reported below (it's already been two 
weeks).

I don't really understand the first solution you suggested: using IOVA as VA mode.
Does that mean I should load the vfio-pci driver before I set dpdk-init to true, 
i.e. do a 'modprobe vfio-pci'? I do use vfio-pci, but I don't load it until I 
actually bind an interface to it.

Also, to answer your last question: Transparent HugePages are enabled. I've 
just disabled them and was still able to reproduce the issue.
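
For reference, I checked and toggled THP roughly like this (a sketch using the 
usual upstream sysfs paths; some distributions relocate them):

    # Show the current THP policy; the active setting is shown in brackets.
    cat /sys/kernel/mm/transparent_hugepage/enabled

    # Disable THP until the next reboot (as root).
    echo never > /sys/kernel/mm/transparent_hugepage/enabled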

Regards
Toby


On 3/21/19, 12:19 PM, "Ian Stokes"  wrote:

    On 3/20/2019 10:37 PM, Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) 
via discuss wrote:
> Hello,
> 

Hi,

I wasn't sure at first glance what was happening, so I discussed it with 
Anatoly (Cc'd), who has worked a considerable amount with DPDK memory 
models. Please see the response below for what the suspected issue is. 
Anatoly, thanks for your input on this.

> I want to use Open vSwitch with DPDK enabled. For this purpose, I first 
> allocate 512 HugePages of size 2MB to have a total of 1GB of HugePage 
> memory available for OVS-DPDK. (I don’t set any value for 
> */dpdk-socket-mem/ *so the default value of 1GB is taken). Then I set 
> */dpdk-init=true/*. This normally works fine.
> 
> However, I have realized that I can’t allocate HugePages from memory 
> that is inside the buff/cache (visible through */free -h/*). To solve 
> this issue, I decided to clear the cache/buffer in Linux before 
> allocating HugePages by running */echo 1 > /proc/sys/vm/drop_caches/*.
> 
> After that, allocation of the HugePages still works fine. However, when 
> I then run */ovs-vsctl set open_vswitch other_config:dpdk-init=true/* 
> the process crashes and inside the ovs-vswitchd.log I observe the 
following:
> 
> *ovs-vswitchd log output:*
> 
> 2019-03-18T13:32:41.112Z|00015|dpdk|ERR|EAL: Can only reserve 270 pages 
> from 512 requested
> 
> Current CONFIG_RTE_MAX_MEMSEG=256 is not enough

After you drop the cache, from the above log it is clear that, as a 
result, hugepages’ physical addresses get fragmented, as DPDK cannot 
concatenate pages into segments any more (which results in 
a 1-page-per-segment situation, which causes you to run out of memseg 
structures, of which there are only 256). We have no control over what 
addresses we get from the OS, so there’s really no way to “unfragment” 
the pages.

So, the above only happens when

1) you’re running in IOVA as PA mode (so, using real physical addresses).
2) your hugepages are heavily fragmented.

Possible solutions for this are:

1. Use IOVA as VA mode (so, use VFIO, not igb_uio). This way the pages 
will still be fragmented, but the IOMMU will remap them to be contiguous. 
This is the recommended option; where VFIO is available it is the better 
choice than igb_uio.

2. Use bigger page sizes. Strictly speaking, this isn’t a solution as 
memory would be fragmented too, but a 1GB-long standalone segment is way 
more useful than a standalone 2MB-long segment.

3. Reboot (as you have done), or maybe try re-reserving all pages (see the 
command sketch below), e.g.:
i. Clean your hugetlbfs contents to free any leftover pages
ii. echo 0 > /sys/kernel/mm/hugepages/hugepage-/nr_hugepages
iii. echo 512 > /sys/kernel/mm/hugepages/hugepage-/nr_hugepages
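
To make options 1 and 3 concrete, a rough command sketch (it assumes an 
IOMMU-capable host booted with intel_iommu=on, the dpdk-devbind.py tool from 
the DPDK tree, and 0000:01:00.0 as a purely hypothetical PCI address):

    # Option 1: bind the device to vfio-pci so DPDK can run in IOVA-as-VA mode.
    modprobe vfio-pci
    dpdk-devbind.py --bind=vfio-pci 0000:01:00.0

    # Option 3: clean hugetlbfs, then release and re-reserve the 2MB pages.
    rm -f /dev/hugepages/*
    echo 0   > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages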

Alternatively, if you upgrade to OVS 2.11 it will use DPDK 18.11. This 
would make a difference since, as of DPDK 18.05+, we don't require 
PA-contiguous segments any more.

I would also question why these pages are in the regular page cache in 
the first place. Are transparent hugepages enabled?

HTH
Ian

> 
> Please either increase it or request less amount of memory.
> 
> 2019-03-18T13:32:41.112Z|00016|dpdk|ERR|EAL: Cannot init memory
> 
> 2019-03-18T13:32:41.128Z|2|daemon_unix|ERR|fork child died before 
> signaling startup (killed (Aborted))
> 
> 2019-03-18T13:32:41.128Z|3|daemon_unix|EMER|could not detach from 
> foreground session
> 
> *Tech Details:*
> 
>   * Open vSwitch version: 2.9.2
>   * DPDK version: 17.11
>   * System has only a single NUMA node.
> 
> This problem is consistently reproducible when having a relatively high 
> amount of memory in the buffer/cache (usually around 5GB) and clearing 
> the buffer afterwards with the command outlined above.
> 
> On the Internet, I found some posts saying that this is due to memory 
> fragmentation but normally I’m not even able to allocate HugePages in 
  

Re: [ovs-discuss] Open vSwitch fails to allocate memory pool for DPDK port

2019-04-03 Thread Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) via discuss
Sorry I just accidentally sent out the last mail too early...

Hi Ian,

To answer your questions, I just reproduced the issue on my system:

What commands have you used to configure the hugepage memory on your system?
   - I have added the following kernel parameters: hugepagesz=2M hugepages=512 
and then rebooted the system. In other scenarios I allocated the HugePages by 
writing into /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages. The 
HugePages are also mounted on: hugetlbfs on /dev/hugepages type hugetlbfs 
(rw,relatime,seclabel)

Before you start OVS with DPDK, if you execute cat /proc/meminfo how many 
hugepages do you see available and how many free? (for 2mb I would assume 512 
in both cases).
  - In this specific case, I only saw 512 HugePages_Total and 0 free because 
after the restart OVS was already using the 512 pages.
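
(For reference, this is roughly what I look at to check that state; just a 
sketch of the commands:)

    # Totals, free pages and reservation state for the 2MB hugepage pool.
    grep -i huge /proc/meminfo

    # The hugetlbfs mount mentioned above.
    mount | grep hugetlbfs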
 
What memory commands are you passing to OVS with DPDK (e.g. dpdk-socket-mem 
parameter etc.)?
   - Nothing, meaning the default memory of 1GB.
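
(For completeness, setting the memory explicitly rather than relying on the 
1GB default would look roughly like this; a sketch with an illustrative value 
for a single NUMA node:)

    # Pre-allocate 1024MB of hugepage memory on socket 0 for OVS-DPDK.
    ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024"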

Is it just 1 bridge and a single DPDK interface you are adding or are there 
more than 1 DPDK interface attached?
   - There are 4 bridges in total but only one is using netdev datapath and 
DPDK ports. Here is an extract of that one DPDK bridge:
   Bridge lan-br
Port lan-br
Interface lan-br
type: internal
Port "dpdk-p0"
Interface "dpdk-p0"
type: dpdk
options: {dpdk-devargs=":08:0b.2"}
error: "could not add network device dpdk-p0 to ofproto (No 
such device)"
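
(A bridge like the one above would typically be created along these lines; a 
sketch, with 0000:01:00.0 as a placeholder for the VF's real PCI address:)

    # netdev datapath bridge with one physical DPDK port.
    ovs-vsctl add-br lan-br -- set bridge lan-br datapath_type=netdev
    ovs-vsctl add-port lan-br dpdk-p0 -- \
        set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:01:00.0

    # The jumbo MTU discussed in this thread.
    ovs-vsctl set Interface dpdk-p0 mtu_request=9216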

Can you provide the entire log? I'd be interested in seeing the memory  info at 
initialization of OVS DPDK.
 - I attached it to that mail.

What type of DPDK device are you adding? It seems to be a Virtual function from 
the log above, can you provide more detail as regards the underlying NIC type 
the VF is associated with?
-  It's a VF. NIC type is Intel 710 and driver is i40

DPDK Version is 17.11.0

Thanks
Tobias

On 4/3/19, 9:07 AM, "Tobias Hofmann -T (tohofman - AAP3 INC at Cisco)" 
 wrote:

Hi Ian,

To answer your questions, I just reproduced the issue on my system:

What commands have you used to configure the hugepage memory on your system?
- I have added the following kernel parameters: hugepagesz=2M hugepages=512 
and then rebooted the system. In other scenarios I allocated the HugePages by 
writing into /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages.
  The HugePages are also mounted on: hugetlbfs on /dev/hugepages type 
hugetlbfs (rw,relatime,seclabel)

Before you start OVS with DPDK, if you execute cat /proc/meminfo how many 
hugepages do you see available and how many free? (for 2mb I would assume 512 
in both cases).
- In this specific case, I only saw 512 HugePages_Total and 0 free because 
after the restart OVS was already using the 512 pages.

What memory commands are you passing to OVS with DPDK (e.g. dpdk-socket-mem 
parameter etc.)?

On 4/3/19, 6:00 AM, "Ian Stokes"  wrote:
    
    On 4/3/2019 1:04 AM, Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) 
via discuss wrote:
> Hello,
> 
> I’m trying to attach a DPDK port with an mtu_size of 9216 to a 
bridge. 
> For this purpose, I have allocated 512 HugePages of size 2MB for OVS 
> (1GB in total).

Hi,

I couldn't reproduce the behavior above on my own system with 512 x 2MB 
hugepages. Ports were successfully configured with MTU 9216. Perhaps 
some more detail as regards your setup will help reproduce/root cause.

Questions inline below.

What commands have you used to configure the hugepage memory on your 
system?

Before you start OVS with DPDK, if you execute cat /proc/meminfo how 
many hugepages do you see available and how many free? (for 2mb I would 
assume 512 in both cases).

What memory commands are you passing to OVS with DPDK (e.g. 
dpdk-socket-mem parameter etc.)?

Is it just 1 bridge and a single DPDK interface you are adding or are 
there more than 1 DPDK interface attached?

> 
> Doing so will constantly fail, two workarounds to get it working were 
> either to decrease the MTU size to 1500 or to increase the total 
amount 
> of HugePage memory to 3GB.
> 
> Actually, I did expect the setup to also work with just 1GB because 
if 
> the amount of memory is not sufficient, OVS will try to halve the 
number 
> of buffers until 16K.
> 
> However, inside the logs I couldn’t find any details regarding this. 
The 
> only error message I observed was:
> 
   

Re: [ovs-discuss] Open vSwitch fails to allocate memory pool for DPDK port

2019-04-03 Thread Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) via discuss
Hi Ian,

To answer your questions, I just reproduced the issue on my system:

What commands have you used to configure the hugepage memory on your system?
- I have added the following kernel parameters: hugepagesz=2M hugepages=512 and 
then rebooted the system. In other scenarios I allocated the HugePages by 
writing into /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages.
  The HugePages are also mounted on: hugetlbfs on /dev/hugepages type hugetlbfs 
(rw,relatime,seclabel)

Before you start OVS with DPDK, if you execute cat /proc/meminfo how many 
hugepages do you see available and how many free? (for 2mb I would assume 512 
in both cases).
- In this specific case, I only saw 512 HugePages_Total and 0 free because 
after the restart OVS was already using the 512 pages.

What memory commands are you passing to OVS with DPDK (e.g. dpdk-socket-mem 
parameter etc.)?

On 4/3/19, 6:00 AM, "Ian Stokes"  wrote:

On 4/3/2019 1:04 AM, Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) 
via discuss wrote:
> Hello,
> 
> I’m trying to attach a DPDK port with an mtu_size of 9216 to a bridge. 
> For this purpose, I have allocated 512 HugePages of size 2MB for OVS 
> (1GB in total).

Hi,

I couldn't reproduce the behavior above on my own system with 512 x 2MB 
hugepages. Ports were successfully configured with MTU 9216. Perhaps 
some more detail as regards your setup will help reproduce/root cause.

Questions inline below.

What commands have you used to configure the hugepage memory on your system?

Before you start OVS with DPDK, if you execute cat /proc/meminfo how 
many hugepages do you see available and how many free? (for 2mb I would 
assume 512 in both cases).

What memory commands are you passing to OVS with DPDK (e.g. 
dpdk-socket-mem parameter etc.)?

Is it just 1 bridge and a single DPDK interface you are adding or are 
there more than 1 DPDK interface attached?

> 
> Doing so will constantly fail, two workarounds to get it working were 
> either to decrease the MTU size to 1500 or to increase the total amount 
> of HugePage memory to 3GB.
> 
> Actually, I did expect the setup to also work with just 1GB because if 
> the amount of memory is not sufficient, OVS will try to halve the number 
> of buffers until 16K.
> 
> However, inside the logs I couldn’t find any details regarding this. The 
> only error message I observed was:
> 
> netdev_dpdk|ERR|Failed to create memory pool for netdev dpdk-p0, with 
> MTU 9216 on socket 0: Invalid argument

Can you provide the entire log? I'd be interested in seeing the memory 
info at initialization of OVS DPDK.

> 
> That log message is weird as I would have expected an error message 
> saying something like ‘could not reserve memory’ but not ‘Invalid 
argument’.
> 
> I then found this very similar bug on Openstack: 
> https://bugs.launchpad.net/starlingx/+bug/1796380
> 
> After having read this, I tried the exact same setup as described above 
> but this time with HugePages of size 1GB instead of 2MB. In this 
> scenario, it also worked with just 1GB of memory reserved for OVS.
> 
> Inside the logs I could observe this time:
> 
> 2019-04-02T22:55:31.849Z|00098|dpdk|ERR|RING: Cannot reserve memory
> 
> 2019-04-02T22:55:32.019Z|00099|dpdk|ERR|RING: Cannot reserve memory
> 
> 2019-04-02T22:55:32.200Z|00100|netdev_dpdk|INFO|Virtual function 
> detected, HW_CRC_STRIP will be enabled
> 

What type of DPDK device are you adding? It seems to be a Virtual 
function from the log above, can you provide more detail as regards the 
underlying NIC type the VF is associated with?

> 2019-04-02T22:55:32.281Z|00101|netdev_dpdk|INFO|Port 0: f6:e9:29:4d:f9:cf
> 
> 2019-04-02T22:55:32.281Z|00102|dpif_netdev|INFO|Core 1 on numa node 0 
> assigned port 'dpdk-p0' rx queue 0 (measured processing cycles 0).
> 
> The two times where OVS cannot reserve memory are, I guess, the two 
> times where it has to halve the number of buffers to get it working.

Yes, this is correct. For example, in my setup with 512 x 2MB pages I see 
the "Cannot reserve memory" message 4 times before it completes configuration.

> 
> My question now is, is the fact that it does not work for 2MB HugePages 
> a bug? Also, is the error message in the first log extract the intended 
one?
> 

Yes, it seems like a bug if it can be reproduced. The invalid argument 
in this case would refer to the number of mbufs being requested by OVS 
DPDK being less than the minimum allowed (4096 * 64).
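
To put rough numbers on that (a back-of-envelope sketch; the per-mbuf size 
below is an assumption, not a figure from the log): with a 9216-byte MTU each 
mbuf occupies roughly 10KB once headroom and metadata are included, so the 
initial pool of 4096 * 64 mbufs needs on the order of 2.5GB. That would explain 
why 3GB of hugepages works while 1GB only becomes feasible after several 
halvings:

    # Back-of-envelope arithmetic only; ~10KB per mbuf is an assumed figure.
    echo $((4096 * 64))              # 262144 mbufs initially requested
    echo $((4096 * 64 * 10 / 1024))  # ~2560 MB for the initial pool at ~10KB each
    echo $((16384 * 10 / 1024))      # ~160 MB at the 16K-mbuf floor

Since even the 16K floor would comfortably fit in 1GB, this lines up with the 
reading above that the EINVAL comes from the mbuf count dropping below the 
allowed minimum before a reservation succeeds.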

[ovs-discuss] Open vSwitch fails to allocate memory pool for DPDK port

2019-04-02 Thread Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) via discuss
Hello,

I’m trying to attach a DPDK port with an mtu_size of 9216 to a bridge. For this 
purpose, I have allocated 512 HugePages of size 2MB for OVS (1GB in total).
Doing so will constantly fail, two workarounds to get it working were either to 
decrease the MTU size to 1500 or to increase the total amount of HugePage 
memory to 3GB.

Actually, I did expect the setup to also work with just 1GB because if the 
amount of memory is not sufficient, OVS will try to halve the number of buffers 
until 16K.
However, inside the logs I couldn’t find any details regarding this. The only 
error message I observed was:
netdev_dpdk|ERR|Failed to create memory pool for netdev dpdk-p0, with MTU 9216 
on socket 0: Invalid argument

That log message is weird as I would have expected an error message saying 
something like ‘could not reserve memory’ but not ‘Invalid argument’.
I then found this very similar bug on Openstack: 
https://bugs.launchpad.net/starlingx/+bug/1796380

After having read this, I tried the exact same setup as described above but 
this time with HugePages of size 1GB instead of 2MB. In this scenario, it also 
worked with just 1GB of memory reserved for OVS.
Inside the logs I could observe this time:
2019-04-02T22:55:31.849Z|00098|dpdk|ERR|RING: Cannot reserve memory
2019-04-02T22:55:32.019Z|00099|dpdk|ERR|RING: Cannot reserve memory
2019-04-02T22:55:32.200Z|00100|netdev_dpdk|INFO|Virtual function detected, 
HW_CRC_STRIP will be enabled
2019-04-02T22:55:32.281Z|00101|netdev_dpdk|INFO|Port 0: f6:e9:29:4d:f9:cf
2019-04-02T22:55:32.281Z|00102|dpif_netdev|INFO|Core 1 on numa node 0 assigned 
port 'dpdk-p0' rx queue 0 (measured processing cycles 0).

The two times where OVS cannot reserve memory are, I guess, the two times where 
it has to halve the number of buffers to get it working.
My question now is, is the fact that it does not work for 2MB HugePages a bug? 
Also, is the error message in the first log extract the intended one?

My version numbers:

  *   CentOS 7.6
  *   Open vSwitch version: 2.9.3
  *   DPDK version: 17.11
  *   System has a single NUMA node.

Thank you
Tobias


[ovs-discuss] OVS-DPDK fails after clearing buffer

2019-03-20 Thread Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) via discuss
Hello,

I want to use Open vSwitch with DPDK enabled. For this purpose, I first 
allocate 512 HugePages of size 2MB to have a total of 1GB of HugePage memory 
available for OVS-DPDK. (I don’t set any value for dpdk-socket-mem so the 
default value of 1GB is taken). Then I set dpdk-init=true. This normally works 
fine.

However, I have realized that I can’t allocate HugePages from memory that is 
inside the buff/cache (visible through free -h). To solve this issue, I decided 
to clear the cache/buffer in Linux before allocating HugePages by running echo 
1 > /proc/sys/vm/drop_caches.
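
Concretely, the two preparation steps look like this (a sketch of what I run, 
as root):

    # Free the buff/cache so that memory can be turned into hugepages.
    echo 1 > /proc/sys/vm/drop_caches

    # Then reserve 512 x 2MB hugepages.
    echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
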
After that, allocation of the HugePages still works fine. However, when I then 
run ovs-vsctl set open_vswitch other_config:dpdk-init=true the process crashes 
and inside the ovs-vswitchd.log I observe the following:

ovs-vswitchd log output:
2019-03-18T13:32:41.112Z|00015|dpdk|ERR|EAL: Can only reserve 270 pages from 
512 requested
Current CONFIG_RTE_MAX_MEMSEG=256 is not enough
Please either increase it or request less amount of memory.
2019-03-18T13:32:41.112Z|00016|dpdk|ERR|EAL: Cannot init memory
2019-03-18T13:32:41.128Z|2|daemon_unix|ERR|fork child died before signaling 
startup (killed (Aborted))
2019-03-18T13:32:41.128Z|3|daemon_unix|EMER|could not detach from 
foreground session

Tech Details:

  *   Open vSwitch version: 2.9.2
  *   DPDK version: 17.11
  *   System has only a single NUMA node.

This problem is consistently reproducible when having a relatively high amount 
of memory in the buffer/cache (usually around 5GB) and clearing the buffer 
afterwards with the command outlined above.
On the Internet, I found some posts saying that this is due to memory 
fragmentation but normally I’m not even able to allocate HugePages in the first 
place when my memory is already fragmented. In this scenario, however, the 
allocation of HugePages works totally fine after clearing the buffer, so why 
would they be fragmented?

A workaround that I know of is a reboot.

I’d be very grateful about any opinion on that.

Thank you
Tobias


[ovs-discuss] Packet capturing on vHost User Ports

2018-12-20 Thread Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) via discuss
Hello,

I am trying to capture packets with tcpdump which unfortunately does not work 
with vhost user ports. It fails saying that there is “no such device” which is 
probably due to the fact that the vhost user ports are not recognized as proper 
interfaces by Linux.

Has anyone already thought about packet capturing on vhost user ports and has a 
solution?

Thanks for your help!
Tobias


Re: [ovs-discuss] Default datapath_type of Bridges

2018-12-10 Thread Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) via discuss
Hi Ben,

Thanks for the quick reply.

Command that I'm using: ovs-vsctl remove bridge test-br datapath_type netdev
Output message: ovs-vsctl: "remove" operation would put 0 values in column 
datapath_type of table Bridge but the minimum number is 1

I still think that the empty string is misleading because it suggests to the 
user that the empty string is a valid entry, but apparently you cannot set it 
back to empty once changed.

Thanks
Toby

On 12/10/18, 3:05 PM, "Ben Pfaff"  wrote:

On Mon, Dec 10, 2018 at 10:01:48PM +, Tobias Hofmann -T (tohofman - 
AAP3 INC at Cisco) via discuss wrote:
> Hi,
> 
> I noticed that the property datapath_type of a freshly created bridge is 
by default empty. In order to support DPDK, I have to set the datapath_type to 
netdev.
> 
> Now when I want to disable DPDK, I try to remove the datapath_type
> again which fails saying that datapath_type cannot be empty.

Can you show us the command you ran and the error message?

   ovs-vsctl: "remove" operation would put 0 values in column 
datapath_type of table Bridge but the minimum number is 1

> This is weird since the datapath_type is initially empty after creating
> the bridge from scratch.
> 
> I saw that there is also the option to set the datapath_type to system 
which is apparently the default setting.
> So my question is whether the type "system" is technically the same as no 
entry for the datapath_type at all.
> If so, wouldn't it make sense to set the datapath_type to "system" after 
bridge creation instead of leaving it empty?

The empty string and "system" are synonyms, so there's no reason to
prefer one or the other.
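
In practical terms, the round trip then looks like this (a sketch using the 
bridge name from this thread):

    # Switch the bridge to the userspace (DPDK) datapath.
    ovs-vsctl set bridge test-br datapath_type=netdev

    # Revert: "system" is equivalent to the empty default, and unlike the
    # "remove" operation it satisfies the column's one-value minimum.
    ovs-vsctl set bridge test-br datapath_type=system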




[ovs-discuss] Default datapath_type of Bridges

2018-12-10 Thread Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) via discuss
Hi,

I noticed that the property datapath_type of a freshly created bridge is by 
default empty. In order to support DPDK, I have to set the datapath_type to 
netdev.

Now when I want to disable DPDK, I try to remove the datapath_type again which 
fails saying that datapath_type cannot be empty. This is weird since the 
datapath_type is initially empty after creating the bridge from scratch.

I saw that there is also the option to set the datapath_type to system which is 
apparently the default setting.
So my question is whether the type "system" is technically the same as no entry 
for the datapath_type at all.
If so, wouldn't it make sense to set the datapath_type to "system" after bridge 
creation instead of leaving it empty?

Thanks for your help,
Toby