Hi!

When initializing a new MySQL master-slave replication with Pacemaker,
I am running into the following problem:

> ERROR: check_slave invoked on an instance that is not a replication slave.

This happens upon initializing the master/slave resource:

1. create a two node cluster
2. set node2 to standby
3. create the mysql resource:
> primitive wdb-mysql ocf:ipax:mysql \
>         op monitor interval="30" timeout="30" \
>         op monitor interval="300" timeout="30" OCF_CHECK_LEVEL="10" \
>         op monitor interval="301" role="Master" timeout="30" OCF_CHECK_LEVEL="10" \
>         op monitor interval="31" role="Slave" timeout="30" OCF_CHECK_LEVEL="10" \
>         op monitor interval="15" role="Slave" timeout="30" \
>         op monitor interval="10" role="Master" timeout="30" \
>         op start interval="0" timeout="120" \
>         op stop interval="0" timeout="120" \
>         params config="/etc/mysql/my.cnf" datadir="/data/db/mysql/data/" \
>                socket="/var/run/mysqld/mysqld.sock" binary="/usr/sbin/mysqld" \
>                additional_parameters="--basedir=/usr --skip-external-locking --log-bin=/data/db/mysql/log/mysql-bin.log --relay-log=/data/db/mysql/log/mysql-relay-bin.log" \
>                pid="/var/run/mysqld/mysqld.pid" test_table="nagiostest.test_table" \
>                test_user="nagios" test_passwd="xxxxxxxxxx" \
>                replication_user="mysql_rep" replication_passwd="xxxxxxxxxxxx"

> ms ms-wdb-mysql wdb-mysql \
>         meta target-role="Started" notify="true"

4. initialize the replication (node1 is the master):
   mirror the current db from the primary to the secondary via FLUSH
   TABLES WITH READ LOCK, copy the database files, UNLOCK TABLES
   (see the sketch after this list).
5. issue crm node online node2
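
For the record, the mirroring in step 4 is roughly the following
(a sketch only; rsync over ssh is just one way to copy the files, and
the paths match the primitive definition above):

    # On wdb01, in an interactive mysql session -- the read lock is
    # released as soon as the session ends, so keep it open while copying:
    #   mysql> FLUSH TABLES WITH READ LOCK;
    #   mysql> SHOW MASTER STATUS;    -- note File/Position if needed

    # From a second shell on wdb01, copy the datadir to the standby
    # node (mysqld on wdb02 must not be running):
    rsync -a --delete /data/db/mysql/data/ root@wdb02:/data/db/mysql/data/

    # Back in the mysql session on wdb01:
    #   mysql> UNLOCK TABLES;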

Then, the following happens:
> Dec 15 16:35:20 wdb02 mysql[19979]: INFO: Changing MySQL configuration to replicate from wdb01.
> Dec 15 16:35:20 wdb02 lrmd: [6021]: info: RA output: (wdb-mysql:1:start:stderr) Error performing operation: The object/attribute does not exist
> Dec 15 16:35:20 wdb02 mysql[19979]: ERROR: check_slave invoked on an instance that is not a replication slave.
> Dec 15 16:35:20 wdb02 lrmd: [6021]: info: RA output: (wdb-mysql:1:start:stderr) Error performing operation: The object/attribute does not exist
> Dec 15 16:35:20 wdb02 lrmd: [6021]: info: RA output: (wdb-mysql:1:start:stderr) Error performing operation: The object/attribute does not exist
> Dec 15 16:35:20 wdb02 mysql[19979]: ERROR: ERROR 1201 (HY000) at line 1: Could not initialize master info structure; more error messages can be found in the MySQL error log
> Dec 15 16:35:20 wdb02 mysql[19979]: ERROR: ERROR 1201 (HY000) at line 1: Could not initialize master info structure; more error messages can be found in the MySQL error log
> Dec 15 16:35:20 wdb02 mysql[19979]: ERROR: Failed to start slave
> Dec 15 16:35:20 wdb02 lrmd: [6021]: info: operation start[23] on wdb-mysql:1 for client 6024: pid 19979 exited with return code 1
> Dec 15 16:35:20 wdb02 crmd: [6024]: info: process_lrm_event: LRM operation wdb-mysql:1_start_0 (call=23, rc=1, cib-update=31, confirmed=true) unknown error
> Dec 15 16:35:20 wdb02 attrd: [6022]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-wdb-mysql:1 (INFINITY)
> Dec 15 16:35:20 wdb02 attrd: [6022]: notice: attrd_perform_update: Sent update 39: fail-count-wdb-mysql:1=INFINITY
> Dec 15 16:35:20 wdb02 attrd: [6022]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-wdb-mysql:1 (1355585720)
> Dec 15 16:35:20 wdb02 lrmd: [6021]: info: rsc:wdb-mysql:1 notify[25] (pid 21094)
> Dec 15 16:35:20 wdb02 attrd: [6022]: notice: attrd_perform_update: Sent update 42: last-failure-wdb-mysql:1=1355585720
> Dec 15 16:35:20 wdb02 lrmd: [6021]: info: operation notify[25] on wdb-mysql:1 for client 6024: pid 21094 exited with return code 0
> Dec 15 16:35:20 wdb02 crmd: [6024]: info: process_lrm_event: LRM operation wdb-mysql:1_notify_0 (call=25, rc=0, cib-update=0, confirmed=true) ok
> Dec 15 16:35:20 wdb02 lrmd: [6021]: info: rsc:wdb-mysql:1 notify[26] (pid 21112)
> Dec 15 16:35:20 wdb02 lrmd: [6021]: info: operation notify[26] on wdb-mysql:1 for client 6024: pid 21112 exited with return code 0
> Dec 15 16:35:20 wdb02 crmd: [6024]: info: process_lrm_event: LRM operation wdb-mysql:1_notify_0 (call=26, rc=0, cib-update=0, confirmed=true) ok
> Dec 15 16:35:20 wdb02 lrmd: [6021]: info: rsc:wdb-mysql:1 stop[27] (pid 21130)
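
As a side note, the repeated "Error performing operation: The
object/attribute does not exist" lines on stderr look like crm_attribute
failing to read a node attribute that has not been set yet (the agent
keeps replication state in transient CIB attributes). The message can be
reproduced by querying an attribute that was never set, e.g. (the
attribute name here is made up):

    # Query a transient node attribute that does not exist; this prints
    # the same stderr line and exits non-zero:
    crm_attribute -N wdb01 -l reboot --name some-unset-attribute --query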

The first ERROR comes from around line 447 inside get_slave_info():
>             # Instance produced an empty "SHOW SLAVE STATUS" output --
>             # instance is not a slave
>             ocf_log err "check_slave invoked on an instance that is not a replication slave."
>             return $OCF_ERR_GENERIC
>         fi

Actually, this is expected, as the slave has never been started yet.

Is there a way to fix this in the resource agent, or do I need to start
the replication manually first and only involve Pacemaker after that?
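
One idea (untested, just a sketch -- variable and helper names like
$MYSQL, $master_host and check_slave are illustrative, not necessarily
the agent's real ones) would be to treat an empty SHOW SLAVE STATUS
during start as "not configured yet" and point the instance at the
current master instead of returning an error:

    slave_status=$($MYSQL -e 'SHOW SLAVE STATUS\G')
    if [ -z "$slave_status" ]; then
        # Empty output: this instance was never configured as a slave,
        # so set up replication rather than failing the start.
        ocf_log info "No slave configuration found, replicating from $master_host."
        $MYSQL -e "CHANGE MASTER TO
                     MASTER_HOST='$master_host',
                     MASTER_USER='$OCF_RESKEY_replication_user',
                     MASTER_PASSWORD='$OCF_RESKEY_replication_passwd';
                   START SLAVE;"
    else
        check_slave
    fi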

Cheers,
Raoul
-- 
____________________________________________________________________
DI (FH) Raoul Bhatia M.Sc.          email.          [email protected]
Technischer Leiter

IPAX - Aloy Bhatia Hava OG          web.          http://www.ipax.at
Barawitzkagasse 10/2/2/11           email.            [email protected]
1190 Wien                           tel.               +43 1 3670030
FN 277995t HG Wien                  fax.            +43 1 3670030 15
____________________________________________________________________