[ovirt-users] Re: Bond Mode 1 (Active-Backup), vm unreachable for minutes when bond link change

2019-06-06 Thread Edward Haas
On Sat, May 25, 2019 at 5:06 AM  wrote:

> Hello,
>
> I have a problem: all my oVirt hosts and VMs are connected through a bonding
> mode 1 (Active-Backup) over 2x10Gbps links.
> oVirt version: 4.3
> topology:
>                       /--eno2
> vm--ovirtmgmt--bond0--
>                       \--eno1
>
> ifcfg-bond0:
> # Generated by VDSM version 4.30.9.1
> DEVICE=bond0
> BONDING_OPTS='mode=1 miimon=100'
> BRIDGE=ovirtmgmt
> MACADDR=a4:be:26:16:e9:b2
> ONBOOT=yes
> MTU=1500
> DEFROUTE=no
> NM_CONTROLLER=no
> IPV6INIT=no
>
> ifcfg-eno1:
> # Generated by VDSM version 4.30.9.1
> DEVICE=eno1
> MASTER=bond0
> SLAVE=yes
> ONBOOT=yes
> MTU=1500
> DEFROUTE=no
> NM_CONTROLLER=no
> IPV6INIT=no
>
> ifcfg-eno2:
> # Generated by VDSM version 4.30.9.1
> DEVICE=eno2
> MASTER=bond0
> SLAVE=yes
> ONBOOT=yes
> MTU=1500
> DEFROUTE=no
> NM_CONTROLLER=no
> IPV6INIT=no
>
> ifcfg-ovirtmgmt:
> # Generated by VDSM version 4.30.9.1
> DEVICE=ovirtmgmt
> TYPE=Brodge
> DELAY=0
> STP=off
> ONBOOT=yes
> IPADDR=x.x.x.x
> NETMASK=255.255.255.0
> GATEWAY=x.x.x.x
> BOOTPROTO=none
> MTU=1500
> DEFROUTE=yes
> NM_CONTROLLER=no
> IPV6INIT=yes
> IPV6_AUTOCONF=yes
>
>
> cat /proc/net/bonding/bond0
> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
>
> Bonding Mode: fault-tolerance (active-backup)
> Primary Slave: none
> Currently Active Slave: eno1
> MII Status: up
> MII Polling Interval (ms): 100
> Up Delay (ms): 0
> Down Delay (ms): 0
>
> Slave Interface: eno1
> MII Status: up
> Speed: 1 Mbps
> Link Failure Count: 0
> Permanent HW addr: a4:be:26:16:e9:b2
> Slave queue ID: 0
>
> Slave Interface: eno2
> MII Status: up
> Speed: 1 Mbps
> Link Failure Count: 0
> Permanent HW addr: a4:be:26:16:e9:b2
> Slave queue ID: 0
>
> I ping the VM from a different subnet.
>
> Everything is okay as long as the bond's active link does not change. When I
> unplug the currently active slave eno1, the bond fails over to eno2 as
> expected, but the VM becomes unreachable until the external physical switch's
> MAC table ageing time expires. It seems the VM doesn't send a gratuitous ARP
> when the bond link changes. How can I fix it?
>

There is no reason for the VM OS to send anything, as it is unaware of the
change you made in the network.
It should work fine if you perform this operation from oVirt Management, as
that sets the interfaces down and up again (I would expect the links to go
down as a result), causing the switch to flush the MAC address table entries
for those ports.
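If you need the cable-pull case itself to recover faster, a possible workaround
(just a suggestion, not something oVirt does for you) is to send a gratuitous
ARP from inside the VM after the failover, so the switch relearns the VM MAC on
the new port. A minimal example, assuming iputils arping is installed in the
guest and using placeholder values for the guest NIC name and IP:

# run inside the guest (as root) right after the failover
arping -c 3 -U -I eth0 192.0.2.10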


> VM OS is CentOS 7.5.
> oVirt version 4.2 was also tested.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CUC67VZ7WNW5M4L7IBBDIUZKK7SRLMLQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FAAET3PUF5OWWH7GUA72G3ICWK3MLSRU/


[ovirt-users] Re: Bond Mode 1 (Active-Backup), vm unreachable for minutes when bond link change

2019-05-26 Thread henaumars
Glad to hear from you, and sorry for so many spelling mistakes.

I updated my VM OS to CentOS 7.6 and changed my bond configuration to:
ifcfg-bond0:
# Generated by VDSM version 4.30.9.1
DEVICE=bond0
BONDING_OPTS='mode=1 miimon=100 downdelay=200 updelay=200'
BRIDGE=ovirtmgmt
MACADDR=a4:be:26:16:e9:b2
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no

There are no 'ifcfg-XXX.bkp' files in the network-scripts folder.
But the VM still becomes unreachable when the bond link changes.

When I pull the second NIC out, the messages are:
localhost kernel: bond0: Releasing backup interface eno1
localhost kernel: device eno1 left promiscuous mode
localhost kernel: bond0: making interface eno2 the new active one
localhost kernel: device eno2 entered promiscuous mode
localhost kernel: i40e :1a:00.0 eno1: returning to hw mac address
a4:be:26:16:e9:b1
localhost lldpad: recvfrom(Event interface): No buffer space available
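Next time I trigger a failover I can also capture ARP on the bond to check
whether any gratuitous ARP with the VM's MAC actually leaves the host,
something like this (the MAC below is only a placeholder for my VM's MAC):

tcpdump -eni bond0 arp and ether src 52:54:00:aa:bb:cc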
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NN4FKOTERWXFEYF2ROGFJPLNXB53SN3N/


[ovirt-users] Re: Bond Mode 1 (Active-Backup), vm unreachable for minutes when bond link change

2019-05-25 Thread Strahil Nikolov
On May 25, 2019 5:04:33 AM GMT+03:00, henaum...@sina.com wrote:
>Hello, 
>
>I have a problem: all my oVirt hosts and VMs are connected through a bonding
>mode 1 (Active-Backup) over 2x10Gbps links.
>oVirt version: 4.3
>topology:
>                      /--eno2
>vm--ovirtmgmt--bond0--
>                      \--eno1
>
>ifcfg-bond0:
># Generated by VDSM version 4.30.9.1
>DEVICE=bond0
>BONDING_OPTS='mode=1 miimon=100'
>BRIDGE=ovirtmgmt
>MACADDR=a4:be:26:16:e9:b2
>ONBOOT=yes
>MTU=1500
>DEFROUTE=no
>NM_CONTROLLER=no
>IPV6INIT=no
Shouldn't it be 'NM_CONTROLLED'?


>ifcfg-eno1:
># Generated by VDSM version 4.30.9.1
>DEVICE=eno1
>MASTER=bond0
>SLAVE=yes
>ONBOOT=yes
>MTU=1500
>DEFROUTE=no
>NM_CONTROLLER=no
>IPV6INIT=no

Shouldn't it be 'NM_CONTROLLED'?

>ifcfg-eno2:
># Generated by VDSM version 4.30.9.1
>DEVICE=eno2
>MASTER=bond0
>SLAVE=yes
>ONBOOT=yes
>MTU=1500
>DEFROUTE=no
>NM_CONTROLLER=no
>IPV6INIT=no

Shouldn't it be 'NM_CONTROLLED'?

>ifcfg-ovirtmgmt:
># Generated by VDSM version 4.30.9.1
>DEVICE=ovirtmgmt
>TYPE=Brodge
>DELAY=0
>STP=off
>ONBOOT=yes
>IPADDR=x.x.x.x
>NETMASK=255.255.255.0
>GATEWAY=x.x.x.x
>BOOTPROTO=none
>MTU=1500
>DEFROUTE=yes
>NM_CONTROLLER=no
>IPV6INIT=yes
>IPV6_AUTOCONF=yes
>
Shouldn't it be 'TYPE=BRIDGE'?
Also, check that there are no 'ifcfg-XXX.bkp' files in the folder, as the
network scripts will read them. If there are any, move them to /root.
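For example, something like this (assuming the default EL7 location of the
network scripts) lists any leftovers and moves them out of the way:

ls /etc/sysconfig/network-scripts/ifcfg-*.bkp
mv /etc/sysconfig/network-scripts/ifcfg-*.bkp /root/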

>cat /proc/net/bonding/bond0
>Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
>
>Bonding Mode: fault-tolerance (active-backup)
>Primary Slave: none
>Currently Active Slave: eno1
>MII Status: up
>MII Polling Interval (ms): 100
>Up Delay (ms): 0
>Down Delay (ms): 0
>
>Slave Interface: eno1
>MII Status: up
>Speed: 1 Mbps
>Link Failure Count: 0
>Permanent HW addr: a4:be:26:16:e9:b2
>Slave queue ID: 0
>
>Slave Interface: eno2
>MII Status: up
>Speed: 1 Mbps
>Link Failure Count: 0
>Permanent HW addr: a4:be:26:16:e9:b2
>Slave queue ID: 0
As you have a bridge, setting a delay might help (see the sketch below).
What is the output once you pull the second NIC out?
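A minimal sketch of what I mean by a delay - example values only, to be tuned
for your switch and tested first:

# in ifcfg-bond0, delay failover/failback decisions slightly:
BONDING_OPTS='mode=1 miimon=100 updelay=200 downdelay=200'
# and/or a small forward delay on the bridge in ifcfg-ovirtmgmt:
DELAY=2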

>I ping the VM from a different subnet.
>
>Everything is okay as long as the bond's active link does not change. When I
>unplug the currently active slave eno1, the bond fails over to eno2 as
>expected, but the VM becomes unreachable until the external physical switch's
>MAC table ageing time expires. It seems the VM doesn't send a gratuitous ARP
>when the bond link changes. How can I fix it?
>
>VM OS is CentOS 7.5.
>oVirt version 4.2 was also tested.

CentOS 7.5 is quite old and could have some bugs - consider updating!

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5PB2FHGQZNC5CZAI23AXGN4BL66TF6SY/