Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-13 Thread Flavio Leitner
On Mon, 11 Nov 2019 14:45:13 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hi Flavio,
> 
> to follow up on this: I have just upgraded DPDK to 18.11 and OVS to
> 2.11 and I don't see this issue anymore. Also, I don't observe any
> "ring error" messages although the MTU is still at 9216 and OvS only
> has 1 GB of memory. Do you have an idea which change in DPDK/OvS might
> have resolved it?

There are lots and lots of changes in OVS and DPDK between those
versions and 2.11 is considered stable, so I am glad that you could
update and that it works for you.

fbl


Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-11 Thread Tobias Hofmann (tohofman) via discuss
Hi Flavio,

to follow up on this: I have just upgraded DPDK to 18.11 and OVS to 2.11 and I 
don't see this issue anymore. Also, I don't observe any "ring error" messages 
although the MTU is still at 9216 and OvS only has 1 GB of memory.
Do you have an idea which change in DPDK/OvS might have resolved it?

Thanks
Tobias

On 06.11.19, 14:44, "Tobias Hofmann (tohofman)"  wrote:

Hi Flavio,

the only error I saw in 'ovs-vsctl show' was related to the dpdk port. The 
other ports all came up fine.

Regarding the "ring error", I'm fine with having it, as long as DPDK is 
able to reserve the minimum amount of memory (which, after restarting the OvS 
process, is always the case).

Regards
Tobias

On 05.11.19, 21:07, "Flavio Leitner"  wrote:

On Tue, 5 Nov 2019 18:47:09 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hi Flavio,
> 
> thanks for the insights! Unfortunately, I don't know about the pdump
> and its relation to the ring.

pdump dumps packets from dpdk ports into rings/mempools, so that you
can inspect/use the traffic:
https://doc.dpdk.org/guides/howto/packet_capture_framework.html

But I looked at the dpdk sources now and I don't see it allocating any
memory when the library is initialized, so this is likely a red herring.

> Can you please specify where I can see that the port is not ready
> yet? Is that these three lines:
> 
> 2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged
> device (:08:0b.2)

The above shows the device is not ready/bound yet.


> 2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching
> device ':08:0b.2' to DPDK
> 2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set
> configuration (Invalid argument)
> 
> As far as I know, the ring allocation failure that you mentioned
> isn't necessarily a bad thing since it just indicates that DPDK
> reduces something internally (I can't remember what exactly it was)
> to support a high MTU with only 1GB of memory.

True for the memory allocated for DPDK ports. However, there is a
minimum, and if that is not available, the mempool allocation will fail.

> I'm wondering now if it might help to change the timing of when
> openvswitch is started after a system reboot to prevent this problem
> as it only occurs after reboot. Do you think that this approach might
> fix the problem?

It will help to get the i40e port working, but that "ring error"
will continue, as you see after restarting anyway.

I don't know about the other interface types; maybe there is another
interface failing that is not in the log. Do you see any error
reported in 'ovs-vsctl show' after the restart?

fbl






Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-06 Thread Tobias Hofmann (tohofman) via discuss
Hi Flavio,

the only error I saw in 'ovs-vsctl show' was related to the dpdk port. The 
other ports all came up fine.

Regarding the "ring error", I'm fine with having it, as long as DPDK is able to 
reserve the minimum amount of memory (which, after restarting the OvS process, 
is always the case).

Regards
Tobias

On 05.11.19, 21:07, "Flavio Leitner"  wrote:

On Tue, 5 Nov 2019 18:47:09 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hi Flavio,
> 
> thanks for the insights! Unfortunately, I don't know about the pdump
> and its relation to the ring.

pdump dumps packets from dpdk ports into rings/mempools, so that you
can inspect/use the traffic:
https://doc.dpdk.org/guides/howto/packet_capture_framework.html

But I looked at the dpdk sources now and I don't see it allocating any
memory when the library is initialized, so this is likely a red herring.

> Can you please specify where I can see that the port is not ready
> yet? Is that these three lines:
> 
> 2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged
> device (:08:0b.2)

The above shows the device is not ready/bound yet.


> 2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching
> device ':08:0b.2' to DPDK
> 2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set
> configuration (Invalid argument)
> 
> As far as I know, the ring allocation failure that you mentioned
> isn't necessarily a bad thing since it just indicates that DPDK
> reduces something internally (I can't remember what exactly it was)
> to support a high MTU with only 1GB of memory.

True for the memory allocated for DPDK ports. However, there is a
minimum, and if that is not available, the mempool allocation will fail.

> I'm wondering now if it might help to change the timing of when
> openvswitch is started after a system reboot to prevent this problem
> as it only occurs after reboot. Do you think that this approach might
> fix the problem?

It will help to get the i40e port working, but that "ring error"
will continue, as you see after restarting anyway.

I don't know about the other interface types; maybe there is another
interface failing that is not in the log. Do you see any error
reported in 'ovs-vsctl show' after the restart?

fbl




Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-05 Thread Flavio Leitner
On Tue, 5 Nov 2019 18:47:09 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hi Flavio,
> 
> thanks for the insights! Unfortunately, I don't know about the pdump
> and its relation to the ring.

pdump dumps packets from dpdk ports into rings/mempools, so that you
can inspect/use the traffic:
https://doc.dpdk.org/guides/howto/packet_capture_framework.html
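
(For anyone following along, a typical invocation from that guide looks
roughly like the sketch below. It assumes OVS/DPDK were built with pdump
support; the port id and pcap path are illustrative.)

  # Run as a DPDK secondary process next to ovs-vswitchd and capture
  # the rx traffic of DPDK port 0 into a pcap file:
  dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/dpdk-p0-rx.pcap'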

But I looked at the dpdk sources now and I don't see it allocating any
memory when the library is initialized, so this is likely a red herring.

> Can you please specify where I can see that the port is not ready
> yet? Is that these three lines:
> 
> 2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged
> device (:08:0b.2)

The above shows the device is not ready/bound yet.
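
(In other words, the PCI device still has to be bound to a DPDK-compatible
driver before OvS can attach it. A sketch using dpdk-devbind; the target
driver is an assumption for this VF, and the address is copied as logged:)

  dpdk-devbind.py --status                    # see which driver owns the VF
  dpdk-devbind.py --bind=vfio-pci :08:0b.2    # bind it for DPDK use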


> 2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching
> device ':08:0b.2' to DPDK
> 2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set
> configuration (Invalid argument)
> 
> As far as I know, the ring allocation failure that you mentioned
> isn't necessarily a bad thing since it just indicates that DPDK
> reduces something internally (I can't remember what exactly it was)
> to support a high MTU with only 1GB of memory.

True for the memory allocated for DPDK ports. However, there is a
minimum, and if that is not available, the mempool allocation will fail.

> I'm wondering now if it might help to change the timing of when
> openvswitch is started after a system reboot to prevent this problem
> as it only occurs after reboot. Do you think that this approach might
> fix the problem?

It will help to get the i40e port working, but that "ring error"
will continue, as you see after restarting anyway.

I don't know about the other interface types; maybe there is another
interface failing that is not in the log. Do you see any error
reported in 'ovs-vsctl show' after the restart?

fbl


Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-05 Thread Tobias Hofmann (tohofman) via discuss
Hi Flavio,

thanks for the insights! Unfortunately, I don't know about the pdump and its 
relation to the ring.

Can you please specify where I can see that the port is not ready yet? Is that 
these three lines:

2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged device 
(:08:0b.2)
2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching device 
':08:0b.2' to DPDK
2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set configuration 
(Invalid argument)

As far as I know, the ring allocation failure that you mentioned isn't 
necessarily a bad thing since it just indicates that DPDK reduces something 
internally (I can't remember what exactly it was) to support a high MTU with 
only 1GB of memory.

I'm wondering now if it might help to change the timing of when openvswitch is 
started after a system reboot to prevent this problem as it only occurs after 
reboot. Do you think that this approach might fix the problem?
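
(One way to express such an ordering is a systemd drop-in that starts Open
vSwitch only after the device binding has happened. This is only a sketch;
the unit names are assumptions and depend on the packaging and on whatever
service actually binds the VF:)

  # Hypothetical drop-in: delay ovs-vswitchd until a (hypothetical)
  # dpdk-devbind.service has bound the device.
  mkdir -p /etc/systemd/system/ovs-vswitchd.service.d
  printf '[Unit]\nAfter=dpdk-devbind.service\nWants=dpdk-devbind.service\n' \
      > /etc/systemd/system/ovs-vswitchd.service.d/10-wait-for-devbind.conf
  systemctl daemon-reload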

Thanks for your help
Tobias

On 05.11.19, 14:08, "Flavio Leitner"  wrote:

On Mon, 4 Nov 2019 19:12:36 +
"Tobias Hofmann (tohofman)"  wrote:

> Hi Flavio,
> 
> thanks for reaching out.
> 
> The DPDK options used in OvS are:
> 
> other_config:pmd-cpu-mask=0x202
> other_config:dpdk-socket-mem=1024
> other_config:dpdk-init=true
> 
> 
> For the dpdk port, we set:
> 
> type=dpdk
> options:dpdk-devargs=:08:0b.2
> external_ids:unused-drv=i40evf 
> mtu_request=9216

Looks good to me, though the CPU has changed compared to the log:
2019-11-02T14:51:26.940Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd
--socket-mem 1024 -c 0x0001

What I see from the logs is that OvS is trying to add a port, but the
port is not ready yet, so it continues with other things which
also consume memory. Unfortunately, by the time the i40e port is
ready, there is no memory left.

When you restart, the i40e port is ready and the memory can be allocated.
However, the ring allocation fails due to lack of memory:

2019-11-02T14:51:27.808Z|00136|dpdk|ERR|RING: Cannot reserve memory
2019-11-02T14:51:27.974Z|00137|dpdk|ERR|RING: Cannot reserve memory

If you reduce the MTU, then the minimum amount of memory required for
the DPDK port reduces drastically, which explains why it works.

Also, increasing the total memory to 2 GB helps because then the minimum
amount for 9216 MTU plus the ring seems to be sufficient.

The ring seems to be related to pdump, is that the case?
I don't know off the top of my head.

In summary, it looks like 1 GB is not enough for a large MTU plus pdump.
HTH,
fbl

> 
> 
> Please let me know if this is what you asked for.
> 
> Thanks
> Tobias
>   
> On 04.11.19, 15:50, "Flavio Leitner"  wrote:
> 
> 
> It would be nice if you share the DPDK options used in OvS.
> 
> On Sat, 2 Nov 2019 15:43:18 +
> "Tobias Hofmann \(tohofman\) via discuss"
>  wrote:
> 
> > Hello community,
> > 
> > My team and I observe a strange behavior on our system with the
> > creation of dpdk ports in OVS. We have a CentOS 7 system with
> > OpenvSwitch and only one single port of type ‘dpdk’ attached to
> > a bridge. The MTU size of the DPDK port is 9216 and the reserved
> > HugePages for OVS are 512 x 2MB-HugePages, i.e. 1 GB of total
> > HugePage memory.
> > 
> > Setting everything up works fine; however, after I reboot my
> > box, the dpdk port is in an error state and I can observe these
> > lines in the logs (full logs attached to the mail):
> > 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
> > memory pool for netdev dpdk-p0, with MTU 9216 on socket 0:
> > Invalid argument
> > 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
> > interface dpdk-p0 new configuration
> > 
> > I figured out that by restarting the openvswitch process, the
> > issue with the port is resolved and it is back in a working
> > state. However, as soon as I reboot the system a second time,
> > the port comes up in an error state again. Now, we have also
> > observed a couple of other workarounds, though I can’t really
> > explain why they help:
> > 
> >   *   When there is also a VM deployed on the system that is
> > using ports of type ‘dpdkvhostuserclient’, we never see any
> > issues like that. (MTU size of the VM ports is 9216 by the way)
> >   *   When we increase the HugePage memory for OVS to 2GB, we
> > also don’t see any issues.
> >   *   Lowering the MTU size of the ‘dpdk’ type port to 1500 also
> > helps to prevent this issue.
> > 
> > Can anyone explain this?

Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-05 Thread Flavio Leitner
On Mon, 4 Nov 2019 19:12:36 +
"Tobias Hofmann (tohofman)"  wrote:

> Hi Flavio,
> 
> thanks for reaching out.
> 
> The DPDK options used in OvS are:
> 
> other_config:pmd-cpu-mask=0x202
> other_config:dpdk-socket-mem=1024
> other_config:dpdk-init=true
> 
> 
> For the dpdk port, we set:
> 
> type=dpdk
> options:dpdk-devargs=:08:0b.2
> external_ids:unused-drv=i40evf 
> mtu_request=9216

Looks good to me, though the CPU has changed compared to the log:
2019-11-02T14:51:26.940Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd
--socket-mem 1024 -c 0x0001
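
(Side note on reconciling the two values: as far as I know, the "-c" coremask
in the EAL args comes from other_config:dpdk-lcore-mask, which defaults to
CPU 0 when unset, while pmd-cpu-mask only places the PMD threads and never
shows up in the EAL args. A sketch; the mask values are just examples:)

  ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x2
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x202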

What I see from the logs is that OvS is trying to add a port, but the
port is not ready yet, so it continues with other things which
also consume memory. Unfortunately, by the time the i40e port is
ready, there is no memory left.

When you restart, the i40e port is ready and the memory can be allocated.
However, the ring allocation fails due to lack of memory:

2019-11-02T14:51:27.808Z|00136|dpdk|ERR|RING: Cannot reserve memory
2019-11-02T14:51:27.974Z|00137|dpdk|ERR|RING: Cannot reserve memory

If you reduce the MTU, then the minimum amount of memory required for
the DPDK port reduces drastically, which explains why it works.

Also, increasing the total memory to 2 GB helps because then the minimum
amount for 9216 MTU plus the ring seems to be sufficient.
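
(To put rough numbers on it, a back-of-envelope sketch; this is not the exact
OvS sizing logic, and the mbuf counts below are purely illustrative:)

  # Per-port mempool footprint is roughly n_mbufs * per-mbuf buffer size.
  # At MTU 9216 each mbuf buffer is about MTU + headroom/overhead, call it ~10 KB.
  echo $(( 16384 * 10240 / 1024 / 1024 )) MB     # ~160 MB for 16384 mbufs
  echo $(( 262144 * 10240 / 1024 / 1024 )) MB    # ~2560 MB for 262144 mbufs
  # At MTU 1500 the buffer drops to ~2 KB, so the same counts need ~32-512 MB,
  # which is why lowering the MTU makes 1 GB of hugepages enough.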

The ring seems to be related to pdump, is that the case?
I don't know off the top of my head.

In summary, it looks like 1 GB is not enough for a large MTU plus pdump.
HTH,
fbl

> 
> 
> Please let me know if this is what you asked for.
> 
> Thanks
> Tobias
>   
> On 04.11.19, 15:50, "Flavio Leitner"  wrote:
> 
> 
> It would be nice if you share the DPDK options used in OvS.
> 
> On Sat, 2 Nov 2019 15:43:18 +
> "Tobias Hofmann \(tohofman\) via discuss"
>  wrote:
> 
> > Hello community,
> > 
> > My team and I observe a strange behavior on our system with the
> > creation of dpdk ports in OVS. We have a CentOS 7 system with
> > OpenvSwitch and only one single port of type ‘dpdk’ attached to
> > a bridge. The MTU size of the DPDK port is 9216 and the reserved
> > HugePages for OVS are 512 x 2MB-HugePages, i.e. 1 GB of total
> > HugePage memory.
> > 
> > Setting everything up works fine; however, after I reboot my
> > box, the dpdk port is in an error state and I can observe these
> > lines in the logs (full logs attached to the mail):
> > 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
> > memory pool for netdev dpdk-p0, with MTU 9216 on socket 0:
> > Invalid argument
> > 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
> > interface dpdk-p0 new configuration
> > 
> > I figured out that by restarting the openvswitch process, the
> > issue with the port is resolved and it is back in a working
> > state. However, as soon as I reboot the system a second time,
> > the port comes up in an error state again. Now, we have also
> > observed a couple of other workarounds, though I can’t really
> > explain why they help:
> > 
> >   *   When there is also a VM deployed on the system that is
> > using ports of type ‘dpdkvhostuserclient’, we never see any
> > issues like that. (MTU size of the VM ports is 9216 by the way)
> >   *   When we increase the HugePage memory for OVS to 2GB, we
> > also don’t see any issues.
> >   *   Lowering the MTU size of the ‘dpdk’ type port to 1500 also
> > helps to prevent this issue.
> > 
> > Can anyone explain this?
> > 
> > We’re using the following versions:
> > Openvswitch: 2.9.3
> > DPDK: 17.11.5
> > 
> > Appreciate any help!
> > Tobias  
> 
> 
> 



Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-04 Thread Tobias Hofmann (tohofman) via discuss
Hi Flavio,

thanks for reaching out.

The DPDK options used in OvS are:

other_config:pmd-cpu-mask=0x202
other_config:dpdk-socket-mem=1024
other_config:dpdk-init=true


For the dpdk port, we set:

type=dpdk
options:dpdk-devargs=:08:0b.2
external_ids:unused-drv=i40evf 
mtu_request=9216
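
(For reference, those settings map to ovs-vsctl commands along these lines;
this is just a sketch, the bridge name br0 is an assumption and the PCI
address is copied as quoted above:)

  ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
  ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=1024
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x202
  ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
      options:dpdk-devargs=:08:0b.2 mtu_request=9216 \
      external_ids:unused-drv=i40evf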


Please let me know if this is what you asked for.

Thanks
Tobias

On 04.11.19, 15:50, "Flavio Leitner"  wrote:


It would be nice if you share the DPDK options used in OvS.

On Sat, 2 Nov 2019 15:43:18 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hello community,
> 
> My team and I observe a strange behavior on our system with the
> creation of dpdk ports in OVS. We have a CentOS 7 system with
> OpenvSwitch and only one single port of type ‘dpdk’ attached to a
> bridge. The MTU size of the DPDK port is 9216 and the reserved
> HugePages for OVS are 512 x 2MB-HugePages, i.e. 1 GB of total HugePage
> memory.
> 
> Setting everything up works fine; however, after I reboot my box, the
> dpdk port is in an error state and I can observe these lines in the logs
> (full logs attached to the mail):
> 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
> memory pool for netdev dpdk-p0, with MTU 9216 on socket 0: Invalid
> argument 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
> interface dpdk-p0 new configuration
> 
> I figured out that by restarting the openvswitch process, the issue
> with the port is resolved and it is back in a working state. However,
> as soon as I reboot the system a second time, the port comes up in
> an error state again. Now, we have also observed a couple of other
> workarounds, though I can’t really explain why they help:
> 
>   *   When there is also a VM deployed on the system that is using
> ports of type ‘dpdkvhostuserclient’, we never see any issues like
> that. (MTU size of the VM ports is 9216 by the way)
>   *   When we increase the HugePage memory for OVS to 2GB, we also
> don’t see any issues.
>   *   Lowering the MTU size of the ‘dpdk’ type port to 1500 also
> helps to prevent this issue.
> 
> Can anyone explain this?
> 
> We’re using the following versions:
> Openvswitch: 2.9.3
> DPDK: 17.11.5
> 
> Appreciate any help!
> Tobias





Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

2019-11-04 Thread Flavio Leitner

It would be nice if you share the DPDK options used in OvS.

On Sat, 2 Nov 2019 15:43:18 +
"Tobias Hofmann \(tohofman\) via discuss" 
wrote:

> Hello community,
> 
> My team and I observe a strange behavior on our system with the
> creation of dpdk ports in OVS. We have a CentOS 7 system with
> OpenvSwitch and only one single port of type ‘dpdk’ attached to a
> bridge. The MTU size of the DPDK port is 9216 and the reserved
> HugePages for OVS are 512 x 2MB-HugePages, i.e. 1 GB of total HugePage
> memory.
> 
> Setting everything up works fine; however, after I reboot my box, the
> dpdk port is in an error state and I can observe these lines in the logs
> (full logs attached to the mail):
> 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
> memory pool for netdev dpdk-p0, with MTU 9216 on socket 0: Invalid
> argument 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
> interface dpdk-p0 new configuration
> 
> I figured out that by restarting the openvswitch process, the issue
> with the port is resolved and it is back in a working state. However,
> as soon as I reboot the system a second time, the port comes up in
> an error state again. Now, we have also observed a couple of other
> workarounds, though I can’t really explain why they help:
> 
>   *   When there is also a VM deployed on the system that is using
> ports of type ‘dpdkvhostuserclient’, we never see any issues like
> that. (MTU size of the VM ports is 9216 by the way)
>   *   When we increase the HugePage memory for OVS to 2GB, we also
> don’t see any issues.
>   *   Lowering the MTU size of the ‘dpdk’ type port to 1500 also
> helps to prevent this issue.
> 
> Can anyone explain this?
> 
> We’re using the following versions:
> Openvswitch: 2.9.3
> DPDK: 17.11.5
> 
> Appreciate any help!
> Tobias
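
(For completeness, the hugepage reservation described in the quoted report can
be checked with the standard kernel interfaces, e.g.:)

  grep Huge /proc/meminfo
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages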

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss