Re: [ClusterLabs] Mysql slave did not start replication after failure, and read-only IP also remained active on the much outdated slave

2016-08-25 Thread Ken Gaillot
On 08/22/2016 03:56 PM, Attila Megyeri wrote:
> Hi Ken,
> 
> Thanks a lot for your feedback, my answers are inline.
> 
> 
> 
>> -----Original Message-----
>> From: Ken Gaillot [mailto:kgail...@redhat.com]
>> Sent: Monday, August 22, 2016 4:12 PM
>> To: users@clusterlabs.org
>> Subject: Re: [ClusterLabs] Mysql slave did not start replication after
>> failure, and read-only IP also remained active on the much outdated slave
>>
>> On 08/22/2016 07:24 AM, Attila Megyeri wrote:
>>> Hi Andrei,
>>>
>>> I waited several hours, and nothing happened.
>>
>> And actually, we can see from the configuration you provided that
>> cluster-recheck-interval is 2 minutes.
>>
>> I don't see anything about stonith; is it enabled and tested? This looks
>> like a situation where stonith would come into play. I know that power
>> fencing can be rough on a MySQL database, but perhaps intelligent
>> switches with network fencing would be appropriate.
> 
> Correct - there is no stonith in place, because we found it too aggressive 
> for this purpose. And to be honest, I'm not sure whether it would have worked here.

The problem is, in a situation like this where corosync communication is
broken, both nodes will think they are the only surviving node, and
bring up all resources. Network fencing would isolate one of the nodes
so at least it can't cause any serious trouble.
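
For example, if the nodes hang off a managed switch, one of the SNMP-based
fence agents can cut a node's port. A rough sketch in crm shell syntax (the
agent choice, switch address, community string and host-to-port map are all
assumptions you would need to adapt to your hardware):

    primitive st-switch stonith:fence_ifmib \
            params ipaddr="switch1" community="private" \
            pcmk_host_map="ctdb1:10;ctdb2:11" \
            op monitor interval="60s"
    property stonith-enabled="true"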

>> The "Corosync main process was not scheduled" message is the start of
>> the trouble. It means the system was overloaded and corosync didn't get
>> any CPU time, so it couldn't maintain cluster communication.
>>
>  
> True, this was the cause of the issue, but we still haven't found a way to 
> prevent the original problem from recurring.
> Nevertheless, I think the real issue is that the RA did not properly detect 
> the state of mysql.
> 
> 
>> Probably the most useful thing would be to upgrade to a recent version
>> of corosync+pacemaker+resource-agents. Recent corosync versions run with
>> realtime priority, which makes this much less likely.
>>
>> Other than that, figure out what the load issue was, and try to prevent
>> it from recurring.
>>
> 
> Whereas the original problem might have been caused by a corosync CPU issue, I 
> am sure that once the load was gone, the proper mysql state should have been 
> detected.
> The RA responsible for this is almost the latest version, and I did not see 
> any changes related to this functionality.
> 
>> I'm not familiar enough with the RA to comment on its behavior. If you
>> think it's suspect, check the logs during the incident for messages from
>> the RA.
>>
> 
> I did check, but there are very few details logged while this was happening, so 
> I am pretty much stuck :(
> 
> I thought that someone might have a clue about what is wrong in the RA that 
> causes this false state detection.
> (Un)fortunately I cannot reproduce this situation for now.

Perhaps with the split-brain, the slave tried to come up as master?
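
If it happens again, a one-shot crm_mon that includes node attributes should
show whether both nodes carried a master score (and the "readable" attribute)
at the same time, e.g.:

    crm_mon -1 -A -f

(-1 = single snapshot, -A = show node attributes, -f = show fail counts.)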

>  
> Who could help me in troubleshooting this?
> 
> Thanks,
> Attila
> 
> 
> 
> 
>>> I assume that the RA does not treat this case properly. Mysql was running,
>>> but the "show slave status" command returned something that the RA was
>>> not prepared to parse, and instead of reporting a non-readable attribute, it
>>> returned some generic error that did not stop the server.
>>>
>>> Rgds,
>>> Attila
>>>
>>>
>>> -----Original Message-----
>>> From: Andrei Borzenkov [mailto:arvidj...@gmail.com]
>>> Sent: Monday, August 22, 2016 11:42 AM
>>> To: Cluster Labs - All topics related to open-source clustering welcomed
>>> <users@clusterlabs.org>
>>> Subject: Re: [ClusterLabs] Mysql slave did not start replication after
>>> failure, and read-only IP also remained active on the much outdated slave
>>>
>>> On Mon, Aug 22, 2016 at 12:18 PM, Attila Megyeri
>>> <amegy...@minerva-soft.com> wrote:
>>>> Dear community,
>>>>
>>>>
>>>>
>>>> A few days ago we had an issue in our Mysql M/S replication cluster.
>>>>
>>>> We have a setup with one R/W master and one RO slave. The RO VIP is
>>>> supposed to be running on the slave if it is not too far behind the
>>>> master; if any error occurs, the RO VIP is moved to the master.
>>>>
>>>>
>>>>
>>>> Something happened with the slave Mysql (some disk issue, still
>>>> investigating), but the problem is that the slave VIP remained on the
>>>> slave device, even though the slave process was not running, and the
>>>> server was much outdated.

Re: [ClusterLabs] Mysql slave did not start replication after failure, and read-only IP also remained active on the much outdated slave

2016-08-22 Thread Ken Gaillot
On 08/22/2016 07:24 AM, Attila Megyeri wrote:
> Hi Andrei,
> 
> I waited several hours, and nothing happened. 

And actually, we can see from the configuration you provided that
cluster-recheck-interval is 2 minutes.
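
For reference, that value can be queried or changed with crm_attribute; a
sketch (the 5min value below is only an example):

    crm_attribute --type crm_config --name cluster-recheck-interval --query
    crm_attribute --type crm_config --name cluster-recheck-interval --update 5min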

I don't see anything about stonith; is it enabled and tested? This looks
like a situation where stonith would come into play. I know that power
fencing can be rough on a MySQL database, but perhaps intelligent
switches with network fencing would be appropriate.

The "Corosync main process was not scheduled" message is the start of
the trouble. It means the system was overloaded and corosync didn't get
any CPU time, so it couldn't maintain cluster communication.

Probably the most useful thing would be to upgrade to a recent version
of corosync+pacemaker+resource-agents. Recent corosync versions run with
realtime priority, which makes this much less likely.

Other than that, figure out what the load issue was, and try to prevent
it from recurring.
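
As a stop-gap on the current version, you can follow the hint in the log
itself ("Consider token timeout increase") by raising the token timeout in
the totem section of corosync.conf; the value below is only an illustration:

    totem {
            token: 10000    # milliseconds
    }

That only buys headroom, though; it doesn't fix the underlying scheduling
problem.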

I'm not familiar enough with the RA to comment on its behavior. If you
think it's suspect, check the logs during the incident for messages from
the RA.
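
The agent logs under its resource name, so something like this (the log path
depends on your syslog configuration) should pull out everything it reported
around the incident:

    grep 'mysql(db-mysql)' /var/log/syslog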

> I assume that the RA does not treat this case properly. Mysql was running, 
> but the "show slave status" command returned something that the RA was not 
> prepared to parse, and instead of reporting a non-readable attribute, it 
> returned some generic error that did not stop the server. 
> 
> Rgds,
> Attila
> 
> 
> -----Original Message-----
> From: Andrei Borzenkov [mailto:arvidj...@gmail.com] 
> Sent: Monday, August 22, 2016 11:42 AM
> To: Cluster Labs - All topics related to open-source clustering welcomed 
> <users@clusterlabs.org>
> Subject: Re: [ClusterLabs] Mysql slave did not start replication after 
> failure, and read-only IP also remained active on the much outdated slave
> 
> On Mon, Aug 22, 2016 at 12:18 PM, Attila Megyeri
> <amegy...@minerva-soft.com> wrote:
>> Dear community,
>>
>>
>>
>> A few days ago we had an issue in our Mysql M/S replication cluster.
>>
>> We have a setup with one R/W master and one RO slave. The RO VIP is supposed
>> to be running on the slave if it is not too far behind the master; if any
>> error occurs, the RO VIP is moved to the master.
>>
>>
>>
>> Something happened with the slave Mysql (some disk issue, still
>> investigating), but the problem is that the slave VIP remained on the slave
>> device, even though the slave process was not running, and the server was
>> much outdated.
>>
>>
>>
>> During the issue the following log entries appeared (just an extract as it
>> would be too long):
>>
>>
>>
>>
>>
>> Aug 20 02:04:07 ctdb1 corosync[1056]:   [MAIN  ] Corosync main process was
>> not scheduled for 14088.5488 ms (threshold is 4000.0000 ms). Consider token
>> timeout increase.
>>
>> Aug 20 02:04:07 ctdb1 corosync[1056]:   [TOTEM ] A processor failed, forming
>> new configuration.
>>
>> Aug 20 02:04:34 ctdb1 corosync[1056]:   [MAIN  ] Corosync main process was
>> not scheduled for 27065.2559 ms (threshold is 4000.0000 ms). Consider token
>> timeout increase.
>>
>> Aug 20 02:04:34 ctdb1 corosync[1056]:   [TOTEM ] A new membership (xxx:6720)
>> was formed. Members left: 168362243 168362281 168362282 168362301 168362302
>> 168362311 168362312 1
>>
>> Aug 20 02:04:34 ctdb1 corosync[1056]:   [TOTEM ] A new membership (xxx:6724)
>> was formed. Members
>>
>> ..
>>
>> Aug 20 02:13:28 ctdb1 corosync[1056]:   [MAIN  ] Completed service
>> synchronization, ready to provide service.
>>
>> ..
>>
>> Aug 20 02:13:29 ctdb1 attrd[1584]:   notice: attrd_trigger_update: Sending
>> flush op to all hosts for: readable (1)
>>
>> …
>>
>> Aug 20 02:13:32 ctdb1 mysql(db-mysql)[10492]: INFO: post-demote notification
>> for ctdb1
>>
>> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-master)[10490]: INFO: IP status = ok,
>> IP_CIP=
>>
>> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
>> db-ip-master_stop_0 (call=371, rc=0, cib-update=179, confirmed=true) ok
>>
>> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO: Adding inet address
>> xxx/24 with broadcast address  to device eth0
>>
>> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO: Bringing device
>> eth0 up
>>
>> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO:
>> /usr/lib/heartbeat/send_arp -i 200 -r 5 -p
>> /usr/var/run/resource-agents/send_arp-xxx eth0 xxx auto not_used not_used
>>
>> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
>> db-ip-slave_start_0 (call=377, rc=0, cib-update=180, confirmed=true) ok

Re: [ClusterLabs] Mysql slave did not start replication after failure, and read-only IP also remained active on the much outdated slave

2016-08-22 Thread Attila Megyeri
Hi Andrei,

I waited several hours, and nothing happened. 

I assume that the RA does not treat this case properly. Mysql was running, but 
the "show slave status" command returned something that the RA was not prepared 
to parse, and instead of reporting a non-readable attribute, it returned some 
generic error that did not stop the server. 
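
One way to see what the agent had to parse is to run the same statement by
hand on the slave and check the usual status columns (the socket path is an
assumption; use whatever your resource is configured with):

    mysql -S /var/run/mysqld/mysqld.sock -e 'SHOW SLAVE STATUS\G' \
        | grep -E 'Slave_(IO|SQL)_Running|Last_.*Error|Seconds_Behind_Master'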

Rgds,
Attila


-----Original Message-----
From: Andrei Borzenkov [mailto:arvidj...@gmail.com] 
Sent: Monday, August 22, 2016 11:42 AM
To: Cluster Labs - All topics related to open-source clustering welcomed 
<users@clusterlabs.org>
Subject: Re: [ClusterLabs] Mysql slave did not start replication after failure, 
and read-only IP also remained active on the much outdated slave

On Mon, Aug 22, 2016 at 12:18 PM, Attila Megyeri
<amegy...@minerva-soft.com> wrote:
> Dear community,
>
>
>
> A few days ago we had an issue in our Mysql M/S replication cluster.
>
> We have a setup with one R/W master and one RO slave. The RO VIP is supposed
> to be running on the slave if it is not too far behind the master; if any
> error occurs, the RO VIP is moved to the master.
>
>
>
> Something happened with the slave Mysql (some disk issue, still
> investigating), but the problem is that the slave VIP remained on the slave
> device, even though the slave process was not running, and the server was
> much outdated.
>
>
>
> During the issue the following log entries appeared (just an extract as it
> would be too long):
>
>
>
>
>
> Aug 20 02:04:07 ctdb1 corosync[1056]:   [MAIN  ] Corosync main process was
> not scheduled for 14088.5488 ms (threshold is 4000.0000 ms). Consider token
> timeout increase.
>
> Aug 20 02:04:07 ctdb1 corosync[1056]:   [TOTEM ] A processor failed, forming
> new configuration.
>
> Aug 20 02:04:34 ctdb1 corosync[1056]:   [MAIN  ] Corosync main process was
> not scheduled for 27065.2559 ms (threshold is 4000.0000 ms). Consider token
> timeout increase.
>
> Aug 20 02:04:34 ctdb1 corosync[1056]:   [TOTEM ] A new membership (xxx:6720)
> was formed. Members left: 168362243 168362281 168362282 168362301 168362302
> 168362311 168362312 1
>
> Aug 20 02:04:34 ctdb1 corosync[1056]:   [TOTEM ] A new membership (xxx:6724)
> was formed. Members
>
> ..
>
> Aug 20 02:13:28 ctdb1 corosync[1056]:   [MAIN  ] Completed service
> synchronization, ready to provide service.
>
> ..
>
> Aug 20 02:13:29 ctdb1 attrd[1584]:   notice: attrd_trigger_update: Sending
> flush op to all hosts for: readable (1)
>
> …
>
> Aug 20 02:13:32 ctdb1 mysql(db-mysql)[10492]: INFO: post-demote notification
> for ctdb1
>
> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-master)[10490]: INFO: IP status = ok,
> IP_CIP=
>
> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-ip-master_stop_0 (call=371, rc=0, cib-update=179, confirmed=true) ok
>
> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO: Adding inet address
> xxx/24 with broadcast address  to device eth0
>
> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO: Bringing device
> eth0 up
>
> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO:
> /usr/lib/heartbeat/send_arp -i 200 -r 5 -p
> /usr/var/run/resource-agents/send_arp-xxx eth0 xxx auto not_used not_used
>
> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-ip-slave_start_0 (call=377, rc=0, cib-update=180, confirmed=true) ok
>
> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-ip-slave_monitor_2 (call=380, rc=0, cib-update=181, confirmed=false)
> ok
>
> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-mysql_notify_0 (call=374, rc=0, cib-update=0, confirmed=true) ok
>
> Aug 20 02:13:32 ctdb1 attrd[1584]:   notice: attrd_trigger_update: Sending
> flush op to all hosts for: master-db-mysql (1)
>
> Aug 20 02:13:32 ctdb1 attrd[1584]:   notice: attrd_perform_update: Sent
> update 1622: master-db-mysql=1
>
> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-mysql_demote_0 (call=384, rc=0, cib-update=182, confirmed=true) ok
>
> Aug 20 02:13:33 ctdb1 mysql(db-mysql)[11160]: INFO: Ignoring post-demote
> notification for my own demotion.
>
> Aug 20 02:13:33 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-mysql_notify_0 (call=387, rc=0, cib-update=0, confirmed=true) ok
>
> Aug 20 02:13:33 ctdb1 mysql(db-mysql)[11185]: ERROR: check_slave invoked on
> an instance that is not a replication slave.
>
> Aug 20 02:13:33 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-mysql_monitor_7000 (call=390, rc=0, cib-update=183, confirmed=false) ok
>
> Aug 20 02:13:33 ctdb1 ntpd[1560]: Listen normally on 16 eth0 . UDP 123

Re: [ClusterLabs] Mysql slave did not start replication after failure, and read-only IP also remained active on the much outdated slave

2016-08-22 Thread Andrei Borzenkov
On Mon, Aug 22, 2016 at 12:18 PM, Attila Megyeri
<amegy...@minerva-soft.com> wrote:
> Dear community,
>
>
>
> A few days ago we had an issue in our Mysql M/S replication cluster.
>
> We have a setup with one R/W master and one RO slave. The RO VIP is supposed
> to be running on the slave if it is not too far behind the master; if any
> error occurs, the RO VIP is moved to the master.
>
>
>
> Something happened with the slave Mysql (some disk issue, still
> investigating), but the problem is that the slave VIP remained on the slave
> device, even though the slave process was not running, and the server was
> much outdated.
>
>
>
> During the issue the following log entries appeared (just an extract as it
> would be too long):
>
>
>
>
>
> Aug 20 02:04:07 ctdb1 corosync[1056]:   [MAIN  ] Corosync main process was
> not scheduled for 14088.5488 ms (threshold is 4000.0000 ms). Consider token
> timeout increase.
>
> Aug 20 02:04:07 ctdb1 corosync[1056]:   [TOTEM ] A processor failed, forming
> new configuration.
>
> Aug 20 02:04:34 ctdb1 corosync[1056]:   [MAIN  ] Corosync main process was
> not scheduled for 27065.2559 ms (threshold is 4000.0000 ms). Consider token
> timeout increase.
>
> Aug 20 02:04:34 ctdb1 corosync[1056]:   [TOTEM ] A new membership (xxx:6720)
> was formed. Members left: 168362243 168362281 168362282 168362301 168362302
> 168362311 168362312 1
>
> Aug 20 02:04:34 ctdb1 corosync[1056]:   [TOTEM ] A new membership (xxx:6724)
> was formed. Members
>
> ..
>
> Aug 20 02:13:28 ctdb1 corosync[1056]:   [MAIN  ] Completed service
> synchronization, ready to provide service.
>
> ..
>
> Aug 20 02:13:29 ctdb1 attrd[1584]:   notice: attrd_trigger_update: Sending
> flush op to all hosts for: readable (1)
>
> …
>
> Aug 20 02:13:32 ctdb1 mysql(db-mysql)[10492]: INFO: post-demote notification
> for ctdb1
>
> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-master)[10490]: INFO: IP status = ok,
> IP_CIP=
>
> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-ip-master_stop_0 (call=371, rc=0, cib-update=179, confirmed=true) ok
>
> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO: Adding inet address
> xxx/24 with broadcast address  to device eth0
>
> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO: Bringing device
> eth0 up
>
> Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO:
> /usr/lib/heartbeat/send_arp -i 200 -r 5 -p
> /usr/var/run/resource-agents/send_arp-xxx eth0 xxx auto not_used not_used
>
> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-ip-slave_start_0 (call=377, rc=0, cib-update=180, confirmed=true) ok
>
> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-ip-slave_monitor_2 (call=380, rc=0, cib-update=181, confirmed=false)
> ok
>
> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-mysql_notify_0 (call=374, rc=0, cib-update=0, confirmed=true) ok
>
> Aug 20 02:13:32 ctdb1 attrd[1584]:   notice: attrd_trigger_update: Sending
> flush op to all hosts for: master-db-mysql (1)
>
> Aug 20 02:13:32 ctdb1 attrd[1584]:   notice: attrd_perform_update: Sent
> update 1622: master-db-mysql=1
>
> Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-mysql_demote_0 (call=384, rc=0, cib-update=182, confirmed=true) ok
>
> Aug 20 02:13:33 ctdb1 mysql(db-mysql)[11160]: INFO: Ignoring post-demote
> notification for my own demotion.
>
> Aug 20 02:13:33 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-mysql_notify_0 (call=387, rc=0, cib-update=0, confirmed=true) ok
>
> Aug 20 02:13:33 ctdb1 mysql(db-mysql)[11185]: ERROR: check_slave invoked on
> an instance that is not a replication slave.
>
> Aug 20 02:13:33 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation
> db-mysql_monitor_7000 (call=390, rc=0, cib-update=183, confirmed=false) ok
>
> Aug 20 02:13:33 ctdb1 ntpd[1560]: Listen normally on 16 eth0 . UDP 123
>
> Aug 20 02:13:33 ctdb1 ntpd[1560]: Deleting interface #12 eth0, xxx#123,
> interface stats: received=0, sent=0, dropped=0, active_time=2637334 secs
>
> Aug 20 02:13:33 ctdb1 ntpd[1560]: peers refreshed
>
> Aug 20 02:13:33 ctdb1 ntpd[1560]: new interface(s) found: waking up resolver
>
> Aug 20 02:13:40 ctdb1 mysql(db-mysql)[11224]: ERROR: check_slave invoked on
> an instance that is not a replication slave.
>
> Aug 20 02:13:47 ctdb1 mysql(db-mysql)[11263]: ERROR: check_slave invoked on
> an instance that is not a replication slave.
>
>
>
> And from this time, the last two lines repeat every 7 seconds (mysql
> monitoring interval)
>
>
>
>
>
> The expected behavior was that the slave (RO) VIP should have been moved to
> the master, as the secondary db was outdated.
>
> Unfortunately I cannot recall what crm_mon was showing when the issue was
> present, but I am sure that the RA did not handle the situation properly.
>
>
>
> Placing the slave node into standby and then back online resolved the issue
> immediately (the slave started to sync, and in a few minutes it caught up
> with the master).

[ClusterLabs] Mysql slave did not start replication after failure, and read-only IP also remained active on the much outdated slave

2016-08-22 Thread Attila Megyeri
Dear community,

A few days ago we had an issue in our Mysql M/S replication cluster.
We have a setup with one R/W master and one RO slave. The RO VIP is supposed 
to be running on the slave if it is not too far behind the master; if any error 
occurs, the RO VIP is moved to the master.

Something happened with the slave Mysql (some disk issue, still investigating), 
but the problem is that the slave VIP remained on the slave device, even 
though the slave process was not running, and the server was much outdated.

During the issue the following log entries appeared (just an extract as it 
would be too long):


Aug 20 02:04:07 ctdb1 corosync[1056]:   [MAIN  ] Corosync main process was not 
scheduled for 14088.5488 ms (threshold is 4000.0000 ms). Consider token timeout 
increase.
Aug 20 02:04:07 ctdb1 corosync[1056]:   [TOTEM ] A processor failed, forming 
new configuration.
Aug 20 02:04:34 ctdb1 corosync[1056]:   [MAIN  ] Corosync main process was not 
scheduled for 27065.2559 ms (threshold is 4000.0000 ms). Consider token timeout 
increase.
Aug 20 02:04:34 ctdb1 corosync[1056]:   [TOTEM ] A new membership (xxx:6720) 
was formed. Members left: 168362243 168362281 168362282 168362301 168362302 
168362311 168362312 1
Aug 20 02:04:34 ctdb1 corosync[1056]:   [TOTEM ] A new membership (xxx:6724) 
was formed. Members
..
Aug 20 02:13:28 ctdb1 corosync[1056]:   [MAIN  ] Completed service 
synchronization, ready to provide service.
..
Aug 20 02:13:29 ctdb1 attrd[1584]:   notice: attrd_trigger_update: Sending 
flush op to all hosts for: readable (1)
...
Aug 20 02:13:32 ctdb1 mysql(db-mysql)[10492]: INFO: post-demote notification 
for ctdb1
Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-master)[10490]: INFO: IP status = ok, 
IP_CIP=
Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation 
db-ip-master_stop_0 (call=371, rc=0, cib-update=179, confirmed=true) ok
Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO: Adding inet address 
xxx/24 with broadcast address  to device eth0
Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO: Bringing device eth0 up
Aug 20 02:13:32 ctdb1 IPaddr2(db-ip-slave)[10620]: INFO: 
/usr/lib/heartbeat/send_arp -i 200 -r 5 -p 
/usr/var/run/resource-agents/send_arp-xxx eth0 xxx auto not_used not_used
Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation 
db-ip-slave_start_0 (call=377, rc=0, cib-update=180, confirmed=true) ok
Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation 
db-ip-slave_monitor_2 (call=380, rc=0, cib-update=181, confirmed=false) ok
Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation 
db-mysql_notify_0 (call=374, rc=0, cib-update=0, confirmed=true) ok
Aug 20 02:13:32 ctdb1 attrd[1584]:   notice: attrd_trigger_update: Sending 
flush op to all hosts for: master-db-mysql (1)
Aug 20 02:13:32 ctdb1 attrd[1584]:   notice: attrd_perform_update: Sent update 
1622: master-db-mysql=1
Aug 20 02:13:32 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation 
db-mysql_demote_0 (call=384, rc=0, cib-update=182, confirmed=true) ok
Aug 20 02:13:33 ctdb1 mysql(db-mysql)[11160]: INFO: Ignoring post-demote 
notification for my own demotion.
Aug 20 02:13:33 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation 
db-mysql_notify_0 (call=387, rc=0, cib-update=0, confirmed=true) ok
Aug 20 02:13:33 ctdb1 mysql(db-mysql)[11185]: ERROR: check_slave invoked on an 
instance that is not a replication slave.
Aug 20 02:13:33 ctdb1 crmd[1586]:   notice: process_lrm_event: LRM operation 
db-mysql_monitor_7000 (call=390, rc=0, cib-update=183, confirmed=false) ok
Aug 20 02:13:33 ctdb1 ntpd[1560]: Listen normally on 16 eth0 . UDP 123
Aug 20 02:13:33 ctdb1 ntpd[1560]: Deleting interface #12 eth0, xxx#123, 
interface stats: received=0, sent=0, dropped=0, active_time=2637334 secs
Aug 20 02:13:33 ctdb1 ntpd[1560]: peers refreshed
Aug 20 02:13:33 ctdb1 ntpd[1560]: new interface(s) found: waking up resolver
Aug 20 02:13:40 ctdb1 mysql(db-mysql)[11224]: ERROR: check_slave invoked on an 
instance that is not a replication slave.
Aug 20 02:13:47 ctdb1 mysql(db-mysql)[11263]: ERROR: check_slave invoked on an 
instance that is not a replication slave.

From this time on, the last two lines repeated every 7 seconds (the mysql 
monitoring interval).


The expected behavior was that the slave (RO) VIP should have been moved to the 
master, as the secondary db was outdated.
Unfortunately I cannot recall what crm_mon was showing when the issue was 
present, but I am sure that the RA did not handle the situation properly.

Placing the slave node into standby and then back online resolved the issue 
immediately (the slave started to sync, and in a few minutes it caught up with 
the master).
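
(For reference, that was done with the crm shell, roughly as below, with a
pause in between to let the resources stop; the node name is assumed from the
logs above:

    crm node standby ctdb1
    crm node online ctdb1
)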


Here is the relevant part of the configuration:


primitive db-ip-master ocf:heartbeat:IPaddr2 \
        params lvs_support="true" ip="XXX" cidr_netmask="24" broadcast="XXX" \
        op start interval="0" timeout="20s"