I’ve configured six nodes as a MySQL master/slave cluster using this config:

primitive p_mysql ocf:heartbeat:mysql \
        params socket="/var/run/mysqld/mysqld.sock" replication_user="slave" \
        replication_passwd="XXXXX" test_user="test_user" test_passwd="test_pass" \
        op start interval="0" timeout="120s" \
        op stop interval="0" timeout="120s" \
        op monitor timeout="30s" interval="30s" role="Master" OCF_CHECK_LEVEL="10" \
        op monitor timeout="30s" interval="60s" role="Slave" OCF_CHECK_LEVEL="10"
primitive p_mysql-ip ocf:heartbeat:IPaddr \
        params ip="10.10.10.191" \
        op monitor interval="1s" timeout="20s" \
        op start interval="0" timeout="20s" \
        op stop interval="0" timeout="20s" \
        meta is-managed="true" resource-stickiness="500"
ms cl_mysql p_mysql
colocation co_ip-on-mysql inf: p_mysql-ip cl_mysql:Master
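
For reference, this is how I inspect role placement and the per-node promotion score after making changes (standard Pacemaker tooling; master-p_mysql is the attribute name that appears in my logs below, and app5 is the node from the log):

    # one-shot cluster status including node attributes and fail counts
    crm_mon -1 -Af
    # query the transient promotion score the mysql RA sets on a node
    crm_attribute --node app5 --name master-p_mysql --lifetime reboot --query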

On the initial setup, everything looks good: all the slaves report proper status. However, if I reboot one of the slaves, crm status still reports it as a slave, but the MySQL server on that node shows that replication is neither configured nor started (see the check I run below the log), and the log shows:

Aug 23 08:44:35 [1204] app5       lrmd:     info: log_execute:  executing - rsc:p_mysql action:start call_id:99
mysql(p_mysql)[1562]:   2015/08/23_08:44:35 INFO: MySQL is not running
mysql(p_mysql)[1562]:   2015/08/23_08:44:35 INFO: Creating PID dir: /var/run/mysqld
mysql(p_mysql)[1562]:   2015/08/23_08:44:35 INFO: MySQL is not running
mysql(p_mysql)[1562]:   2015/08/23_08:44:37 INFO: MySQL is not running
mysql(p_mysql)[1562]:   2015/08/23_08:44:41 INFO: No MySQL master present - clearing replication state
mysql(p_mysql)[1562]:   2015/08/23_08:44:41 ERROR: check_slave invoked on an instance that is not a replication slave.
mysql(p_mysql)[1562]:   2015/08/23_08:44:41 WARNING: Attempted to unset the replication master on an instance that is not configured as a replication slave
Aug 23 08:44:42 [1205] app5      attrd:   notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_mysql (1)
Aug 23 08:44:42 [1202] app5        cib:     info: cib_process_request:  Completed cib_query operation for section //cib/status//node_state[@id='168430230']//transient_attributes//nvpair[@name='master-p_mysql']: No such device or address (rc=-6, origin=local/attrd/26, version=0.554.409)
Aug 23 08:44:42 [1205] app5      attrd:   notice: attrd_perform_update: Sent update 28: master-p_mysql=1
mysql(p_mysql)[1562]:   2015/08/23_08:44:42 ERROR: check_slave invoked on an instance that is not a replication slave.
mysql(p_mysql)[1562]:   2015/08/23_08:44:42 INFO: MySQL started
Aug 23 08:44:42 [1204] app5       lrmd:   notice: operation_finished:   p_mysql_start_0:1562:stderr [ Error performing operation: No such device or address ]
Aug 23 08:44:42 [1204] app5       lrmd:     info: log_finished:         finished - rsc:p_mysql action:start call_id:99 pid:1562 exit-code:0 exec-time:6509ms queue-time:0ms
Aug 23 08:44:42 [1207] app5       crmd:   notice: process_lrm_event:    LRM operation p_mysql_start_0 (call=99, rc=0, cib-update=28, confirmed=true) ok
Aug 23 08:44:42 [1207] app5       crmd:     info: do_lrm_rsc_op:        Performing key=64:347:0:429cc780-9827-42f4-b638-7d4ba43a1c6c op=p_mysql_monitor_60000
mysql(p_mysql)[2399]:   2015/08/23_08:44:42 ERROR: check_slave invoked on an instance that is not a replication slave.
Aug 23 08:44:42 [1207] app5       crmd:   notice: process_lrm_event:    LRM operation p_mysql_monitor_60000 (call=106, rc=0, cib-update=29, confirmed=false) ok
mysql(p_mysql)[2762]:   2015/08/23_08:45:42 ERROR: check_slave invoked on an instance that is not a replication slave.

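On the rebooted node, checking replication directly confirms the mismatch: crm status shows the node as a slave, but MySQL itself has no replication configured. This is the check I run (socket path as in my config above; what each node prints is my observation, not verbatim output):

    mysql --socket=/var/run/mysqld/mysqld.sock -e 'SHOW SLAVE STATUS\G'
    # healthy slaves print a full status block; the rebooted node prints nothing

As far as I can tell from reading the ocf:heartbeat:mysql agent, check_slave parses this same SHOW SLAVE STATUS output and logs the "not a replication slave" error whenever it comes back empty, i.e. whenever no CHANGE MASTER TO has been issued on the instance after the "clearing replication state" step above.
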
From here, the last message repeats at every monitor interval. I’m unsure why this happens, and could use some help. Thanks.

—
Cyphre    : http://www.cyphre.com/
SwissDisk : http://www.swissdisk.com/
Ubuntu    : http://www.ubuntu.com/
My Blog   : http://ben-collins.blogspot.com/

