Re: [ovs-discuss] Restarting the network triggers the deletion of one ovs port

2023-10-30 Thread Liqi An via discuss
Hi,
OK, I will continue to consult SUSE support on this issue.

Regarding the new workaround, I would like bond1 to work as it did before.

A> Old solution (bond1 in bridge br-oam is lost when the host network is
restarted):
cluster12-b:/etc/sysconfig/network # cat ifcfg-bond1
DEVICE='bond1'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=active-backup miimon=100 use_carrier=0'
BONDING_SLAVE0='eth1'
BONDING_SLAVE1='eth5'
BOOTPROTO='static'
BORADCAST=''
ETHTOOL_OPTIONS=''
IPADDR=''
MTU=''
NAME=''
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
USERCONTROL='no'
BONDING_SKIP_REMOVE_WORKAROUND='yes'
ZONE=public

# /usr/bin/ovs-vsctl add-port br-oam bond1 trunk=3932,3933

Bridge br-oam
    Port "2.11-SC-2-eth1"
        tag: 3932
        Interface "2.11-SC-2-eth1"
    Port bond1
        trunks: [3932, 3933]
        Interface bond1
    Port "2.11-SC-2-eth2"
        tag: 3933
        Interface "2.11-SC-2-eth2"
    Port br-oam
        Interface br-oam
            type: internal




B> New solution:
# ovs-vsctl add-bond br-oam bond1 eth1 eth5 trunk=3932,3933

Bridge br-oam
    Port br-oam
        Interface br-oam
            type: internal
    Port bond1
        trunks: [3932, 3933]
        Interface eth5
        Interface eth1
    Port "2.11-SC-2-eth2"
        tag: 3933
        Interface "2.11-SC-2-eth2"
    Port "2.11-SC-2-eth1"
        tag: 3932
        Interface "2.11-SC-2-eth1"

Would bond1 in solution B> work in the same way as bond1 in solution A>,
especially with eth1 behaving as configured by
BONDING_MODULE_OPTS='mode=active-backup miimon=100 use_carrier=0'? Or is more
parameter configuration required on top of the command
"# ovs-vsctl add-bond br-oam bond1 eth1 eth5 trunk=3932,3933", for example
something like the sketch below?


//An

-Original Message-
From: Ilya Maximets  
Sent: Monday, October 30, 2023 7:09 PM
To: Liqi An ; ovs-discuss@openvswitch.org
Cc: i.maxim...@ovn.org; Cheng Chi ; Jonas Yi 
; Yawei Lu 
Subject: Re: [ovs-discuss] Restarting the network triggers the deletion of one 
ovs port

On 10/30/23 10:28, Liqi An wrote:
> Hi,
> Is there any update on this issue? It has been bothering me for over
> two weeks. Thanks~

Hi.  As you saw in the log, something is calling ovs-vsctl to remove the port 
from OVS:

2023-10-16T13:07:17.668420+08:00 cluster12-b ovs-vsctl: 
ovs|1|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port br-oam bond1

OVS is not doing that on its own.  You need to find what is calling this 
command in order to fix the problem.  Likely candidates are network-scripts, 
NetworkManager or something similar.

Best regards, Ilya Maximets.

> 
> //An
> 
> -Original Message-
> From: Liqi An
> Sent: Wednesday, October 18, 2023 5:13 PM
> To: Ilya Maximets ; ovs-discuss@openvswitch.org
> Cc: Cheng Chi ; Jonas Yi 
> ; Yawei Lu ; IPW AQUA 
> team (CBC) 
> Subject: RE: [ovs-discuss] Restarting the network triggers the 
> deletion of one ovs port
> 
> Hi,
> We previously added bond1 to br-oam with the command below:
> 
> # ovs-vsctl add-port br-oam bond1 trunk=3932,3933
> 
> In my opinion, once bond1 has been added to the virtual switch (Open vSwitch)
> successfully, this configuration should be saved in the OVS database. What I'm
> wondering is why the host's network management actively triggers the deletion;
> that is not supposed to be the job of SUSE 15 SP5. Previously, the ovs-vsctl
> command was used to add or delete these configurations manually.
> 
> 
> By the way, I tried another approach: delete the host's previous bond1
> configuration and add eth1 and eth5 directly to Open vSwitch as bond1:
> WA:
> cluster12-b:/etc/sysconfig/network # ip link set bond1 down
> cluster12-b:/etc/sysconfig/network # rm -f /etc/sysconfig/network/ifcfg-bond1
> cluster12-b:/etc/sysconfig/network # service network restart
> cluster12-b:/etc/sysconfig/network # ip link set eth1 up
> cluster12-b:/etc/sysconfig/network # ip link set eth5 up
>
> cluster12-b:/etc/sysconfig/network # ovs-vsctl add-bond br-oam bond1 eth1 eth5 trunk=3932,3933
> cluster12-b:/etc/sysconfig/network # ovs-vsctl show
> 2e9bf291-50ac-4c3a-ac55-2d590df1880d
>     Bridge br-oam
>         Port br-oam
>             Interface br-oam
>                 type: internal
>         Port bond1
>             trunks: [3932, 3933]
>             Interface eth1
>             Interface eth5
>     ovs_version: "2.14.2"
> cluster12-b:/etc/sysconfig/network # service network restart
> cluster12-b:/etc/sysconfig/network # ovs-vsctl show
> 2e9bf291-50ac-4c3a-ac55-2d590df1880d
>     Bridge br-oam
>         Port br-oam
>             Interface br-oam
>                 type: internal
>         Port bond1
>             trunks: [3932, 3933]
>             Interface eth1
>             Interface eth5
>     ovs_version: "2.14.2"
> 
> After initial testing, this scheme works temporarily and does not lose the 
> relevant network configuration after restarting the network and 

Re: [ovs-discuss] OVS & OVN HWOL with Nvidia ConnectX-6 Dx - Kernel flower acknowledgment does not match request

2023-10-30 Thread Eelco Chaudron via discuss


On 30 Oct 2023, at 15:08, Ilya Maximets wrote:

> On 10/26/23 14:05, Odintsov Vladislav wrote:
>> Hi,
>>
>>> On 19 Oct 2023, at 17:06, Vladislav Odintsov via discuss 
>>>  wrote:
>>>
>>>
>>>
 On 18 Oct 2023, at 18:43, Ilya Maximets via discuss 
  wrote:

 On 10/18/23 16:24, Vladislav Odintsov wrote:
> Hi Ilya,
>
> thanks for your response!
>
>> On 18 Oct 2023, at 15:59, Ilya Maximets via discuss 
>>  wrote:
>>
>> On 10/17/23 16:30, Vladislav Odintsov via discuss wrote:
>>> Hi,
>>>
>>> I'm testing OVS hardware offload with tc flower on a Mellanox/NVIDIA
>>> ConnectX-6 Dx SmartNIC and see the following warning in the ovs-vswitchd log:
>>>
>>> 2023-10-17T14:23:15.116Z|00386|tc(handler20)|WARN|Kernel flower 
>>> acknowledgment does not match request!  Set dpif_netlink to dbg to see 
>>> which rule caused this error.
>>>
>>> With dpif_netlink debug logs enabled, two additional lines appear after this
>>> message:
>>>
>>> 2023-10-17T14:23:15.117Z|00387|dpif_netlink(handler20)|DBG|added flow
>>> 2023-10-17T14:23:15.117Z|00388|dpif_netlink(handler20)|DBG|system@ovs-system:
>>>  put[create] ufid:d8a3ab6d-77d1-4574-8bbf-634b01a116f3 
>>> recirc_id(0),dp_hash(0/0),skb_priority(0/0),tunnel(tun_id=0x10,src=10.1.0.105,dst=10.1.0.109,ttl=64/0,tp_src=59507/0,tp_dst=6081/0,geneve({class=0x102,type=0x80,len=4,0x60002}),flags(-df+csum+key)),in_port(4),skb_mark(0/0),ct_state(0/0x2f),ct_zone(0/0),ct_mark(0/0),ct_label(0/0x3),eth(src=00:00:ba:a4:6e:ad,dst=00:01:ba:a4:6e:ad),eth_type(0x0800),ipv4(src=172.32.2.4/0.0.0.0,dst=172.32.1.4/0.0.0.0,proto=1,tos=0/0x3,ttl=63/0,frag=no),icmp(type=8/0,code=0/0),
>>>  
>>> actions:set(tunnel(tun_id=0xff0011,src=10.1.0.109,dst=10.1.1.18,ttl=64,tp_src=59507,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x18000b}),flags(df|csum|key))),4
>>>
>>
>> Could you also enable debug logs for the 'tc' module in OVS?
>> It should give more information about where exactly the
>> difference is between what OVS asked for and what the kernel
>> reported back.
>>
>> In general this warning typically signifies a kernel bug,
>> but it could be that OVS doesn't format something correctly
>> as well.
>
> With enabled tc logs I see mismatches in expected/real keys and actions:
>
> 2023-10-18T13:33:35.882Z|00118|tc(handler21)|DBG|tc flower compare failed 
> action compare
> Expected Mask:
>   ff ff 00 00 ff ff ff ff-ff ff ff ff ff ff ff ff
> 0030  00 00 2f 00 00 00 00 00-00 00 00 00 00 00 00 00
> 0040  03 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
> 0050  00 00 00 00 ff ff ff ff-00 00 00 00 00 00 00 00
> 0060  00 00 00 00 ff 00 00 00-00 00 00 00 00 00 00 00
> 0090  00 00 00 00 00 00 00 00-ff ff ff ff ff ff ff ff
> 00c0  ff 00 00 00 ff ff 00 00-ff ff ff ff ff ff ff ff
> 00d0  08 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
> 00e0  ff ff ff 01 ff ff ff ff-00 00 00 00 00 00 00 00
>
> Received Mask:
>   ff ff 00 00 ff ff ff ff-ff ff ff ff ff ff ff ff
> 0030  00 00 2f 00 00 00 00 00-00 00 00 00 00 00 00 00
> 0040  03 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
> 0050  00 00 00 00 ff ff ff ff-00 00 00 00 00 00 00 00
> 0060  00 00 00 00 ff 00 00 00-00 00 00 00 00 00 00 00
> 0090  00 00 00 00 00 00 00 00-ff ff ff ff ff ff ff ff
> 00c0  ff 00 00 00 ff ff 00 00-ff ff ff ff ff ff ff ff
> 00d0  08 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
> 00e0  ff ff ff 01 ff ff ff ff-00 00 00 00 00 00 00 00
>
> Expected Key:
>   08 06 00 00 ff ff ff ff-ff ff 00 00 ba a4 6e ad
> 0050  a9 fe 64 01 a9 fe 64 03-00 00 ba a4 6e ad 00 00  <— mismatch in 
> this line
> 0060  00 00 00 00 01 00 00 00-00 00 00 00 00 00 00 00
> 0090  00 00 00 00 00 00 00 00-0a 01 00 68 0a 01 00 6d
> 00c0  00 40 c0 5b 17 c1 00 00-00 00 00 00 00 00 00 10  <— mismatch in 
> this line
> 00d0  08 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
> 00e0  01 02 80 01 00 03 00 02-00 00 00 00 00 00 00 00
>
> Received Key:
>   08 06 00 00 ff ff ff ff-ff ff 00 00 ba a4 6e ad
> 0050  00 00 00 00 a9 fe 64 03-00 00 00 00 00 00 00 00  <— mismatch in 
> this line
> 0060  00 00 00 00 01 00 00 00-00 00 00 00 00 00 00 00
> 0090  00 00 00 00 00 00 00 00-0a 01 00 68 0a 01 00 6d
> 00c0  00 00 00 00 17 c1 00 00-00 00 00 00 00 00 00 10  <— mismatch in 
> this line
> 00d0  08 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
> 00e0  01 02 80 01 00 03 00 02-00 00 00 00 00 00 00 00

 These are not very important, it is expected that the kernel clears out
 fields that are not covered by a mask.  We do not have the difference
 in the masks and we do not have a difference in the masked keys, so that
 is 

Re: [ovs-discuss] OVS & OVN HWOL with Nvidia ConnectX-6 Dx - Kernel flower acknowledgment does not match request

2023-10-30 Thread Ilya Maximets via discuss
On 10/26/23 14:05, Odintsov Vladislav wrote:
> Hi,
> 
>> On 19 Oct 2023, at 17:06, Vladislav Odintsov via discuss 
>>  wrote:
>>
>>
>>
>>> On 18 Oct 2023, at 18:43, Ilya Maximets via discuss 
>>>  wrote:
>>>
>>> On 10/18/23 16:24, Vladislav Odintsov wrote:
 Hi Ilya,

 thanks for your response!

> On 18 Oct 2023, at 15:59, Ilya Maximets via discuss 
>  wrote:
>
> On 10/17/23 16:30, Vladislav Odintsov via discuss wrote:
>> Hi,
>>
>> I'm testing OVS hardware offload with tc flower on a Mellanox/NVIDIA
>> ConnectX-6 Dx SmartNIC and see the following warning in the ovs-vswitchd log:
>>
>> 2023-10-17T14:23:15.116Z|00386|tc(handler20)|WARN|Kernel flower 
>> acknowledgment does not match request!  Set dpif_netlink to dbg to see 
>> which rule caused this error.
>>
>> With dpif_netlink debug logs enabled, two additional lines appear after this
>> message:
>>
>> 2023-10-17T14:23:15.117Z|00387|dpif_netlink(handler20)|DBG|added flow
>> 2023-10-17T14:23:15.117Z|00388|dpif_netlink(handler20)|DBG|system@ovs-system:
>>  put[create] ufid:d8a3ab6d-77d1-4574-8bbf-634b01a116f3 
>> recirc_id(0),dp_hash(0/0),skb_priority(0/0),tunnel(tun_id=0x10,src=10.1.0.105,dst=10.1.0.109,ttl=64/0,tp_src=59507/0,tp_dst=6081/0,geneve({class=0x102,type=0x80,len=4,0x60002}),flags(-df+csum+key)),in_port(4),skb_mark(0/0),ct_state(0/0x2f),ct_zone(0/0),ct_mark(0/0),ct_label(0/0x3),eth(src=00:00:ba:a4:6e:ad,dst=00:01:ba:a4:6e:ad),eth_type(0x0800),ipv4(src=172.32.2.4/0.0.0.0,dst=172.32.1.4/0.0.0.0,proto=1,tos=0/0x3,ttl=63/0,frag=no),icmp(type=8/0,code=0/0),
>>  
>> actions:set(tunnel(tun_id=0xff0011,src=10.1.0.109,dst=10.1.1.18,ttl=64,tp_src=59507,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x18000b}),flags(df|csum|key))),4
>>
>
> Could you also enable debug logs for the 'tc' module in OVS?
> It should give more information about where exactly the
> difference is between what OVS asked for and what the kernel
> reported back.
>
> In general this warning typically signifies a kernel bug,
> but it could be that OVS doesn't format something correctly
> as well.
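
For reference, both log modules can usually be switched to debug at runtime;
a minimal sketch, assuming ovs-appctl can reach the default ovs-vswitchd
target:

ovs-appctl vlog/set tc:dbg
ovs-appctl vlog/set dpif_netlink:dbg
# revert to the usual level afterwards
ovs-appctl vlog/set tc:info
ovs-appctl vlog/set dpif_netlink:info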

 With enabled tc logs I see mismatches in expected/real keys and actions:

 2023-10-18T13:33:35.882Z|00118|tc(handler21)|DBG|tc flower compare failed 
 action compare
 Expected Mask:
   ff ff 00 00 ff ff ff ff-ff ff ff ff ff ff ff ff
 0030  00 00 2f 00 00 00 00 00-00 00 00 00 00 00 00 00
 0040  03 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
 0050  00 00 00 00 ff ff ff ff-00 00 00 00 00 00 00 00
 0060  00 00 00 00 ff 00 00 00-00 00 00 00 00 00 00 00
 0090  00 00 00 00 00 00 00 00-ff ff ff ff ff ff ff ff
 00c0  ff 00 00 00 ff ff 00 00-ff ff ff ff ff ff ff ff
 00d0  08 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
 00e0  ff ff ff 01 ff ff ff ff-00 00 00 00 00 00 00 00

 Received Mask:
   ff ff 00 00 ff ff ff ff-ff ff ff ff ff ff ff ff
 0030  00 00 2f 00 00 00 00 00-00 00 00 00 00 00 00 00
 0040  03 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
 0050  00 00 00 00 ff ff ff ff-00 00 00 00 00 00 00 00
 0060  00 00 00 00 ff 00 00 00-00 00 00 00 00 00 00 00
 0090  00 00 00 00 00 00 00 00-ff ff ff ff ff ff ff ff
 00c0  ff 00 00 00 ff ff 00 00-ff ff ff ff ff ff ff ff
 00d0  08 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
 00e0  ff ff ff 01 ff ff ff ff-00 00 00 00 00 00 00 00

 Expected Key:
   08 06 00 00 ff ff ff ff-ff ff 00 00 ba a4 6e ad
 0050  a9 fe 64 01 a9 fe 64 03-00 00 ba a4 6e ad 00 00  <— mismatch in 
 this line
 0060  00 00 00 00 01 00 00 00-00 00 00 00 00 00 00 00
 0090  00 00 00 00 00 00 00 00-0a 01 00 68 0a 01 00 6d
 00c0  00 40 c0 5b 17 c1 00 00-00 00 00 00 00 00 00 10  <— mismatch in 
 this line
 00d0  08 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
 00e0  01 02 80 01 00 03 00 02-00 00 00 00 00 00 00 00

 Received Key:
   08 06 00 00 ff ff ff ff-ff ff 00 00 ba a4 6e ad
 0050  00 00 00 00 a9 fe 64 03-00 00 00 00 00 00 00 00  <— mismatch in 
 this line
 0060  00 00 00 00 01 00 00 00-00 00 00 00 00 00 00 00
 0090  00 00 00 00 00 00 00 00-0a 01 00 68 0a 01 00 6d
 00c0  00 00 00 00 17 c1 00 00-00 00 00 00 00 00 00 10  <— mismatch in 
 this line
 00d0  08 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00
 00e0  01 02 80 01 00 03 00 02-00 00 00 00 00 00 00 00
>>>
>>> These are not very important, it is expected that the kernel clears out
>>> fields that are not covered by a mask.  We do not have the difference
>>> in the masks and we do not have a difference in the masked keys, so that
>>> is fine.
>>>

 Expected Masked Key:
   08 06 00 00 ff ff ff ff-ff ff 00 00 ba a4 6e ad
 0050  00 00 00 00 a9 fe 64 03-00 00 00 00 

Re: [ovs-discuss] Restarting the network triggers the deletion of one ovs port

2023-10-30 Thread Ilya Maximets via discuss
On 10/30/23 10:28, Liqi An wrote:
> Hi,
> Is there any update on this issue? It has been bothering me for over
> two weeks. Thanks~

Hi.  As you saw in the log, something is calling ovs-vsctl to remove
the port from OVS:

2023-10-16T13:07:17.668420+08:00 cluster12-b ovs-vsctl: 
ovs|1|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port br-oam bond1

OVS is not doing that on its own.  You need to find what is calling
this command in order to fix the problem.  Likely candidates are
network-scripts, NetworkManager or something similar.
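
For example, something along these lines may help narrow it down (a sketch;
the paths are guesses for a SUSE/wicked setup, and the audit key name is
arbitrary):

# Look for scripts or configs that reference the port removal.
grep -rn "del-port" /etc/sysconfig/network /etc/wicked /etc/NetworkManager 2>/dev/null

# Or record every execution of ovs-vsctl with auditd, restart the network,
# then see which process invoked it.
auditctl -w /usr/bin/ovs-vsctl -p x -k ovs-vsctl-exec
ausearch -k ovs-vsctl-exec -i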

Best regards, Ilya Maximets.

> 
> //An
> 
> -Original Message-
> From: Liqi An 
> Sent: Wednesday, October 18, 2023 5:13 PM
> To: Ilya Maximets ; ovs-discuss@openvswitch.org
> Cc: Cheng Chi ; Jonas Yi ; 
> Yawei Lu ; IPW AQUA team (CBC) 
> 
> Subject: RE: [ovs-discuss] Restarting the network triggers the deletion of 
> one ovs port
> 
> Hi,
> We previously added bond1 to br-oam with the command below:
> 
> # ovs-vsctl add-port br-oam bond1 trunk=3932,3933
> 
> In my opinion, once bond1 has been added to the virtual switch (Open vSwitch)
> successfully, this configuration should be saved in the OVS database. What I'm
> wondering is why the host's network management actively triggers the deletion;
> that is not supposed to be the job of SUSE 15 SP5. Previously, the ovs-vsctl
> command was used to add or delete these configurations manually.
> 
> 
> By the way, I tried another approach: delete the host's previous bond1
> configuration and add eth1 and eth5 directly to Open vSwitch as bond1:
> WA:
> cluster12-b:/etc/sysconfig/network # ip link set bond1 down 
> cluster12-b:/etc/sysconfig/network # rm -f /etc/sysconfig/network/ifcfg-bond1
> cluster12-b:/etc/sysconfig/network # service network restart 
> cluster12-b:/etc/sysconfig/network # ip link set eth1 up 
> cluster12-b:/etc/sysconfig/network # ip link set eth5 up
>  
> cluster12-b:/etc/sysconfig/network # ovs-vsctl add-bond br-oam bond1 eth1 eth5 trunk=3932,3933
> cluster12-b:/etc/sysconfig/network # ovs-vsctl show
> 2e9bf291-50ac-4c3a-ac55-2d590df1880d
>     Bridge br-oam
>         Port br-oam
>             Interface br-oam
>                 type: internal
>         Port bond1
>             trunks: [3932, 3933]
>             Interface eth1
>             Interface eth5
>     ovs_version: "2.14.2"
> cluster12-b:/etc/sysconfig/network # service network restart
> cluster12-b:/etc/sysconfig/network # ovs-vsctl show
> 2e9bf291-50ac-4c3a-ac55-2d590df1880d
>     Bridge br-oam
>         Port br-oam
>             Interface br-oam
>                 type: internal
>         Port bond1
>             trunks: [3932, 3933]
>             Interface eth1
>             Interface eth5
>     ovs_version: "2.14.2"
> 
> After initial testing, this approach works for now and the relevant network
> configuration is not lost after restarting the network or the host. But I am
> not sure how eth1 behaves in this configuration; I hope they work as they did
> before:
> 
> cluster12-b:~ # cat /etc/sysconfig/network/ifcfg-bond1
> DEVICE='bond1'
> BORADCAST=''
> NETWORK=''
> USERCONTROL='no'
> BONDING_SKIP_REMOVE_WORKAROUND='yes'
> BOOTPROTO='static'
> STARTMODE='auto'
> ZONE='public'
> BONDING_MASTER='yes'
> BONDING_SLAVE0='eth5'
> BONDING_SLAVE1='eth1'
> BONDING_MODULE_OPTS='mode=active-backup miimon=100 use_carrier=0'
> 
> Do you have any thoughts on this approach, or any advice? Thanks~
> 
> //An
> 
> -Original Message-
> From: Ilya Maximets 
> Sent: Tuesday, October 17, 2023 7:34 PM
> To: Liqi An ; ovs-discuss@openvswitch.org
> Cc: i.maxim...@ovn.org; Cheng Chi ; Jonas Yi 
> ; Yawei Lu 
> Subject: Re: [ovs-discuss] Restarting the network triggers the deletion of 
> one ovs port
> 
> On 10/17/23 07:53, Liqi An wrote:
>> Hi experts,
>> I have simplified the steps to reproduce the issue:
>>
>> cluster12-b: # cat ovs-network.xml
>> [libvirt network definition XML stripped by the mail archive; network name:
>> 2.11-ovs-network]
>> cluster12-b: # virsh list --all
>>  Id   Name   State
>> 
>>
>> cluster12-b: # virsh net-list --all
>>  Name   State   Autostart   Persistent
>> 
>>
>> cluster12-b: # virsh net-define ovs-network.xml
>> Network 2.11-ovs-network defined from ovs-network.xml
>>
>> cluster12-b: # virsh net-list --all
>>  Name   State  Autostart   Persistent
>> ---
>>  2.11-ovs-network   inactive   no  yes
>>
>> cluster12-b: # virsh net-start 2.11-ovs-network
>> Network 2.11-ovs-network started
>>
>> cluster12-b: # virsh net-list --all
>>  Name   StateAutostart   Persistent
>> -
>>  2.11-ovs-network   active   no  yes
>>
>> cluster12-b: # ovs-vsctl show

Re: [ovs-discuss] Restarting the network triggers the deletion of one ovs port

2023-10-30 Thread Liqi An via discuss
Hi,
Is there any update on this issue? It has been bothering me for over
two weeks. Thanks~

//An

-Original Message-
From: Liqi An 
Sent: Wednesday, October 18, 2023 5:13 PM
To: Ilya Maximets ; ovs-discuss@openvswitch.org
Cc: Cheng Chi ; Jonas Yi ; Yawei 
Lu ; IPW AQUA team (CBC) 

Subject: RE: [ovs-discuss] Restarting the network triggers the deletion of one 
ovs port

Hi,
We previously added bond1 to br-oam with the command below:

# ovs-vsctl add-port br-oam bond1 trunk=3932,3933

In my opinion, once bond1 has been added to the virtual switch (Open vSwitch)
successfully, this configuration should be saved in the OVS database. What I'm
wondering is why the host's network management actively triggers the deletion;
that is not supposed to be the job of SUSE 15 SP5. Previously, the ovs-vsctl
command was used to add or delete these configurations manually.
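
If useful, the stored record can be inspected directly (a sketch based on the
ovs-vsctl manual; column names may differ between versions):

ovs-vsctl --columns=name,trunks,interfaces list Port bond1
ovs-vsctl --columns=name,type,error list Interface bond1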


By the way, I tried another approach: delete the host's previous bond1
configuration and add eth1 and eth5 directly to Open vSwitch as bond1:
WA:
cluster12-b:/etc/sysconfig/network # ip link set bond1 down 
cluster12-b:/etc/sysconfig/network # rm -f /etc/sysconfig/network/ifcfg-bond1
cluster12-b:/etc/sysconfig/network # service network restart 
cluster12-b:/etc/sysconfig/network # ip link set eth1 up 
cluster12-b:/etc/sysconfig/network # ip link set eth5 up
 
cluster12-b:/etc/sysconfig/network # ovs-vsctl add-bond br-oam bond1 eth1 eth5 trunk=3932,3933
cluster12-b:/etc/sysconfig/network # ovs-vsctl show
2e9bf291-50ac-4c3a-ac55-2d590df1880d
    Bridge br-oam
        Port br-oam
            Interface br-oam
                type: internal
        Port bond1
            trunks: [3932, 3933]
            Interface eth1
            Interface eth5
    ovs_version: "2.14.2"
cluster12-b:/etc/sysconfig/network # service network restart
cluster12-b:/etc/sysconfig/network # ovs-vsctl show
2e9bf291-50ac-4c3a-ac55-2d590df1880d
    Bridge br-oam
        Port br-oam
            Interface br-oam
                type: internal
        Port bond1
            trunks: [3932, 3933]
            Interface eth1
            Interface eth5
    ovs_version: "2.14.2"

After initial testing, this approach works for now and the relevant network
configuration is not lost after restarting the network or the host. But I am
not sure how eth1 behaves in this configuration; I hope they work as they did
before:

cluster12-b:~ # cat /etc/sysconfig/network/ifcfg-bond1
DEVICE='bond1'
BORADCAST=''
NETWORK=''
USERCONTROL='no'
BONDING_SKIP_REMOVE_WORKAROUND='yes'
BOOTPROTO='static'
STARTMODE='auto'
ZONE='public'
BONDING_MASTER='yes'
BONDING_SLAVE0='eth5'
BONDING_SLAVE1='eth1'
BONDING_MODULE_OPTS='mode=active-backup miimon=100 use_carrier=0'

Do you have any thoughts on this approach, or any advice? Thanks~
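
If it is relevant, the OVS view of the bond can apparently be checked with
ovs-appctl (a sketch based on the ovs-appctl documentation; I have not
verified the exact output fields):

# Should report bond_mode, the active member, and per-member (eth1/eth5) status.
ovs-appctl bond/show bond1
# Lists all bonds known to ovs-vswitchd.
ovs-appctl bond/list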

//An

-Original Message-
From: Ilya Maximets 
Sent: Tuesday, October 17, 2023 7:34 PM
To: Liqi An ; ovs-discuss@openvswitch.org
Cc: i.maxim...@ovn.org; Cheng Chi ; Jonas Yi 
; Yawei Lu 
Subject: Re: [ovs-discuss] Restarting the network triggers the deletion of one 
ovs port

On 10/17/23 07:53, Liqi An wrote:
> Hi experts,
> I have simplified the steps to reproduce the issue:
> 
> cluster12-b: # cat ovs-network.xml
> [libvirt network definition XML stripped by the mail archive; network name:
> 2.11-ovs-network]
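
(For an existing Open vSwitch bridge, a libvirt network definition of this
shape typically looks roughly like the following; the portgroup names and
exact layout are assumptions, with the bridge name and VLAN tags taken from
the ovs-vsctl output shown later:)

<network>
  <name>2.11-ovs-network</name>
  <forward mode='bridge'/>
  <bridge name='br-oam'/>
  <virtualport type='openvswitch'/>
  <portgroup name='vlan-3932'>
    <vlan>
      <tag id='3932'/>
    </vlan>
  </portgroup>
  <portgroup name='vlan-3933'>
    <vlan>
      <tag id='3933'/>
    </vlan>
  </portgroup>
</network>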
> cluster12-b: # virsh list --all
>  Id   Name   State
> 
> 
> cluster12-b: # virsh net-list --all
>  Name   State   Autostart   Persistent
> 
> 
> cluster12-b: # virsh net-define ovs-network.xml
> Network 2.11-ovs-network defined from ovs-network.xml
> 
> cluster12-b: # virsh net-list --all
>  Name   State  Autostart   Persistent
> ---
>  2.11-ovs-network   inactive   no  yes
> 
> cluster12-b: # virsh net-start 2.11-ovs-network
> Network 2.11-ovs-network started
> 
> cluster12-b: # virsh net-list --all
>  Name   StateAutostart   Persistent
> -
>  2.11-ovs-network   active   no  yes
> 
> cluster12-b: # ovs-vsctl show
> 2e9bf291-50ac-4c3a-ac55-2d590df1880d
>     ovs_version: "2.14.2"
> cluster12-b: # ovs-vsctl add-br br-oam
> cluster12-b: # ovs-vsctl show
> 2e9bf291-50ac-4c3a-ac55-2d590df1880d
>     Bridge br-oam
>         Port br-oam
>             Interface br-oam
>                 type: internal
>     ovs_version: "2.14.2"
> cluster12-b: # ovs-vsctl add-port br-oam bond1 trunk=3932,3933
> cluster12-b: # ovs-vsctl show
> 2e9bf291-50ac-4c3a-ac55-2d590df1880d
>     Bridge br-oam
>         Port br-oam
>             Interface br-oam
>                 type: internal
>         Port bond1
>             trunks: [3932, 3933]
>             Interface bond1
>     ovs_version: "2.14.2"
> cluster12-b: # date
> Tue Oct 17