Hello All,

We are using CentOS 7.3 with Pacemaker to build a cluster.
Each cluster node has a bonding interface consisting of two NICs.
The cluster has an IPaddr2 resource configured like this:

# pcs resource show cluster_vip
Resource: cluster_vip (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=192.168.1.3
  Operations: start interval=0s timeout=20s (cluster_vip-start-interval-0s)
              stop interval=0s timeout=20s (cluster_vip-stop-interval-0s)
              monitor interval=30s (cluster_vip-monitor-interval-30s)
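For reference, a command along these lines would produce the configuration
shown above (reconstructed from the attributes; it is not necessarily the
exact command we ran):

```shell
# Reconstructed from the attributes above; the real create command may have
# carried extra options (e.g. cidr_netmask or nic).
pcs resource create cluster_vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.3 \
    op monitor interval=30s
```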


We are running tests and want to simulate a state where the network links are
down. To do so, we pull both network cables from the server.

The problem is that the resource is not marked as failed, and the faulted node
keeps holding it instead of failing it over to the other node.
I think the problem is with the bond interface: the OS still reports the bond
interface as UP, and it can even ping itself:

# ip link show
2: eno3: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond1 state DOWN mode DEFAULT qlen 1000
    link/ether 00:1e:67:f6:5a:8a brd ff:ff:ff:ff:ff:ff
3: eno4: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond1 state DOWN mode DEFAULT qlen 1000
    link/ether 00:1e:67:f6:5a:8a brd ff:ff:ff:ff:ff:ff
9: bond1: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT qlen 1000
    link/ether 00:1e:67:f6:5a:8a brd ff:ff:ff:ff:ff:ff
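Note that the UP flag above only means the interface is administratively up;
the kernel's view of the actual link is in operstate/carrier, which is how we
confirmed the bond really has no link. A small sketch of the check we use
(interface names are just whatever is present on the node):

```shell
#!/bin/sh
# Print operstate and carrier for every interface under /sys/class/net.
# "UP" in the ip link flags is only the administrative state; operstate
# and carrier reflect whether the link is actually there.
for dev in /sys/class/net/*; do
    name=$(basename "$dev")
    oper=$(cat "$dev/operstate" 2>/dev/null)
    # carrier is not readable while the interface is administratively down
    carrier=$(cat "$dev/carrier" 2>/dev/null || echo "n/a")
    echo "$name operstate=$oper carrier=$carrier"
done
```

On the faulted node this shows bond1 with operstate down even though ip link
still flags it UP.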

As far as I understand, the IPaddr2 RA does not check the link state of the
interface - it only monitors that the address is still assigned. What can be
done?
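One direction I was considering (untested on our side; the attribute name is
an assumption based on the ethmonitor agent's documentation) is to clone
ocf:heartbeat:ethmonitor to watch bond1 and constrain the VIP to nodes where
the link is up:

```shell
# Untested sketch: ethmonitor sets a node attribute (ethmonitor-bond1)
# to 1 when the interface has link and 0 when it does not.
pcs resource create bond1-link ocf:heartbeat:ethmonitor \
    interface=bond1 clone

# Keep cluster_vip off any node where bond1 has no link.
pcs constraint location cluster_vip rule score=-INFINITY \
    ethmonitor-bond1 ne 1
```

Has anyone used this approach for a bonded interface?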

BTW, I tried to find a bonding configuration option that takes the bond down
when no slave link is up, but I didn't find one.

Tomer.
_______________________________________________
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
