Re: [ovs-discuss] Inactivity Probe configuration not taking effect in OVS 2.14.0

2021-06-28 Thread Ben Pfaff
I recommend trying the patches that I posted:
https://mail.openvswitch.org/pipermail/ovs-dev/2021-June/383783.html
https://mail.openvswitch.org/pipermail/ovs-dev/2021-June/383784.html

On Tue, Jun 15, 2021 at 07:24:06AM +, Saurabh Deokate wrote:
> Hi Ben,
> 
> Here is the output for ovs-vsctl list controller 
> 
> [root@172-31-64-26-aws-eu-central-1c ~]# ovs-vsctl list controller
> _uuid : eb56176a-ad32-4eb0-9cd8-7ab3bd448a68
> connection_mode : out-of-band
> controller_burst_limit: []
> controller_queue_size: []
> controller_rate_limit: []
> enable_async_messages: []
> external_ids : {}
> inactivity_probe : 0
> is_connected : true
> local_gateway : []
> local_ip : []
> local_netmask : []
> max_backoff : []
> other_config : {}
> role : other
> status : {last_error="Connection refused", sec_since_connect="42606", sec_since_disconnect="42614", state=ACTIVE}
> target : "tcp:127.0.0.1:6653"
> type : []
> 
> Let me know if you need any other details.
> 
> ~Saurabh.
> 
> On 11/06/21, 4:03 AM, "Ben Pfaff"  wrote:
> 
> On Mon, Jun 07, 2021 at 02:51:58PM +, Saurabh Deokate wrote:
> > Hi Team,
> > 
> > We are seeing an issue in OVS 2.14.0 after moving from 2.8.0. We first 
> set the controller on the bridge and then set the inactivity probe for our 
> controller to 0, to disable reconnection attempts by OVS. After this we 
> start our controller to serve requests. But in the new version of OVS we 
> somehow still see the inactivity probe kicking in every 5 s and trying to 
> reconnect. The issue is triggered while our controller (i.e. OFController) 
> is blocked for almost 40 s in the middle of handling a packet.
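A minimal sketch of the configuration sequence described above (the bridge name br0.uvms is taken from the log snippet below; adjust for your setup):

```shell
# Point the bridge at the local controller, then disable the
# inactivity probe on that controller record (0 = disabled).
ovs-vsctl set-controller br0.uvms tcp:127.0.0.1:6653
ovs-vsctl set controller br0.uvms inactivity_probe=0

# Confirm the setting took effect:
ovs-vsctl list controller
```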
> > 
> > 
> > Kernel version: CentOS Linux release 7.9.2009
> > Output of ovs-vsctl list controller command shows inactivity_probe: 0
> > 
> > Below is the snippet from ovs-vswitchd.log
> > 
> > 2021-05-11T22:32:55.378Z|00608|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connected
> > 2021-05-11T22:33:05.382Z|00609|connmgr|INFO|br0.uvms<->tcp:127.0.0.1:6653: 44 flow_mods 10 s ago (44 adds)
> > 2021-05-11T22:33:05.386Z|00610|rconn|ERR|br0.uvms<->tcp:127.0.0.1:6653: no response to inactivity probe after 5 seconds, disconnecting
> > 2021-05-11T22:33:06.406Z|00611|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connecting...
> > 2021-05-11T22:33:06.438Z|00612|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connected
> > 2021-05-11T22:33:16.438Z|00613|rconn|ERR|br0.uvms<->tcp:127.0.0.1:6653: no response to inactivity probe after 5 seconds, disconnecting
> > 2021-05-11T22:33:17.921Z|00614|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connecting...
> > 2021-05-11T22:33:18.108Z|00615|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connected
> > 2021-05-11T22:33:28.110Z|00616|rconn|ERR|br0.uvms<->tcp:127.0.0.1:6653: no response to inactivity probe after 5 seconds, disconnecting
> > 2021-05-11T22:33:29.433Z|00617|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connecting...
> > 2021-05-11T22:33:29.933Z|00618|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connected
> > 
> > 
> > Can you please help us find out what could be wrong with this 
> configuration, and what the expected behaviour of the OVS switch is when 
> the receiver on the controller is blocked for a long time?
> 
> Hmm, I can't reproduce this with current OVS.  I do see a problem with
> the fail-open implementation; I'll send a patch for that.
> 
> Can you show the output of "ovs-vsctl list controller"?
> 
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Inactivity Probe configuration not taking effect in OVS 2.14.0

2021-06-28 Thread Saurabh Deokate
Hi Ben, 

Any update on this ? Do you need any other details ?

~Saurabh.

On 15/06/21, 12:54 PM, "Saurabh Deokate"  wrote:

    [earlier message quoted in full; snipped]



Re: [ovs-discuss] [ovs] Adding tagged interfaces to ovs-system managed interface in the system configuration.

2021-06-28 Thread Krzysztof Klimonda
Hi,

Could you elaborate on that? Is there some documentation on this interaction I 
could read? Is this a potential performance issue, or offloading issue? What 
would be a better way to configure bonding with ovs that does not bring down 
network in case of vswitchd failure?

Best Regards,
Krzysztof

On Fri, Jun 25, 2021, at 01:49, Ben Pfaff wrote:
> Linux bonds and OVS bridges don't necessarily mix well.
> 
> On Thu, Jun 24, 2021 at 10:25:58AM +0200, Krzysztof Klimonda wrote:
> > Hi,
> > 
> > I had a configuration like that in mind:
> > 
> > # ip link add bond0 type bond
> > # ip link set em1 master bond0
> > # ip link set em2 master bond0
> > # ip link add link bond0 name mgmt type vlan id 100
> > # ip link add link bond0 name ovs_tunnel type vlan id 200
> > 
> > # ovs-vsctl add-br br0
> > # ovs-vsctl add-port br0 bond0
> > 
> > # ip link |grep bond0
> > 6: bond0:  mtu 9000 qdisc noqueue 
> > master ovs-system state UP mode DEFAULT group default qlen 1000
> > 7: mgmt@bond0:  mtu 9000 qdisc noqueue 
> > state UP mode DEFAULT group default qlen 1000
> > #
> > 
> > On Wed, Jun 23, 2021, at 18:51, Ben Pfaff wrote:
> > > On Tue, Jun 22, 2021 at 09:58:49PM +0200, Krzysztof Klimonda wrote:
> > > > Hi,
> > > > 
> > > > I have tried the following configuration for the system-level network 
> > > > in the lab:
> > > > 
> > > >
> > > >   +--vlan10@bond0
> > > > ens1--+  |  
> > > >---bond0 (ovs-system)--+--vlan20@bond0
> > > > ens2--+  |  
> > > >   +--vlan30@bond0
> > > > 
> > > > The idea is to plug bond0 into openvswitch so that I can add specific 
> > > > VLANs to my virtual topology, but push some of those VLANs into system 
> > > > without doing any specific configuration on the ovs side (for example, 
> > > > to have access to the management interface even if vswitchd is down).
> > > > 
> > > > This seems to be working fine in my lab (there is access to the 
> > > > management interface - vlan10 - even when bond0 has ovs-system as 
> > > > master), but are there any drawbacks to such a configuration?
> > > 
> > > It's hard to guess how you're implementing this.  If you're doing it
> > > with something like this:
> > > 
> > > ovs-vsctl add-port br0 ens1
> > > ovs-vsctl add-port br0 ens2
> > > ovs-vsctl add-bond br0 bond0 ens1 ens2
> > > ovs-vsctl add-port br0 vlan1 tag=1 -- set interface vlan1 type=internal
> > > ovs-vsctl add-port br0 vlan2 tag=2 -- set interface vlan2 type=internal
> > > ovs-vsctl add-port br0 vlan3 tag=3 -- set interface vlan3 type=internal
> > > 
> > > then it ought to work fine.
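As a sketch of the OVS-native alternative to the kernel bond (the port and VLAN names mirror the diagram earlier in the thread; they are illustrative, not from a tested setup):

```shell
# Let OVS manage the bond itself, and expose each VLAN as an
# internal port that stays usable from the host.
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 ens1 ens2
ovs-vsctl add-port br0 vlan10 tag=10 -- set interface vlan10 type=internal
ovs-vsctl add-port br0 vlan20 tag=20 -- set interface vlan20 type=internal
ovs-vsctl add-port br0 vlan30 tag=30 -- set interface vlan30 type=internal
```

Note that, unlike the kernel-bond setup, these internal ports stop passing traffic if ovs-vswitchd is down, which was the original motivation for keeping the management VLAN on a kernel bond.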
> > > 
> > 
> > 
> > -- 
> >   Krzysztof Klimonda
> >   kklimo...@syntaxhighlighted.com
> 


-- 
  Krzysztof Klimonda
  kklimo...@syntaxhighlighted.com


[ovs-discuss] netdev-offload-tc.c: validate mask check skipped for disable megaflow

2021-06-28 Thread Massimo Pregnolato via discuss

Hi all,

I'm quite new to OVS and I'm using it to test some possible configurations in 
an internal project.

I started with OVS 2.12 and all tests were OK, but once I moved to release 
2.15 I noticed some different behaviour that makes my tests fail.

This is mainly due to the "ovs-appctl upcall/disable-megaflows" configuration 
that I use in my setup.

When megaflows are disabled I'm not able to offload traffic to TC, because 
some mask checks fail (not all masks are zero). Let me give you more details:

In the file "netdev-offload-tc.c" a check, "test_key_and_mask", has been 
introduced that doesn't pass because some masks are not zero (e.g. 
mask->dp_hash):

if (mask->dp_hash) {
    VLOG_DBG_RL(&rl, "offloading attribute dp_hash isn't supported");
    return EOPNOTSUPP;
}

I tried skipping this check and found another, similar mask-check issue in 
the file "tc.c", in the function "nl_msg_put_flower_tunnel", where I added 
this workaround in order to offload TC flows:

    if ((ipv4_src || ipv4_dst) && (ipv4_dst_mask || ipv4_src_mask)) {

With these patches I'm able to offload to TC with the disable-megaflows 
option, as I was with OVS 2.12. Of course the same tests without 
disable-megaflows work properly.

So I'm wondering whether this is expected behaviour in the 2.15 release, 
where a lot of mask checks have been added, and whether the 
disable-megaflows option is still supported. Is this a known behaviour?
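For reference, the runtime toggles in question (standard ovs-appctl commands; the flow dump is just one way to observe the effect):

```shell
# Disable megaflow wildcarding: every datapath flow becomes an
# exact match, which exercises many more mask combinations.
ovs-appctl upcall/disable-megaflows

# Inspect the installed datapath flows to see the difference:
ovs-appctl dpctl/dump-flows

# Restore the default wildcarded (megaflow) behaviour:
ovs-appctl upcall/enable-megaflows
```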

Thank you in advance for your support or comments.

Regards.
/Massimo

Info:

ovs-vswitchd -version:
ovs-vswitchd (Open vSwitch) 2.15.1

git rev-parse HEAD:
commit 934668c295e0b4ecbb4ec3358e1acf4e5824ea65 (origin/branch-2.15)

OS:
RHEL8.4
Kernel:
4.18.0-305.el8.x86_64



