In preparation for a MySQL cluster, I have been developing a Pacemaker
configuration based on:
http://www.drbd.org/users-guide/s-pacemaker-crm-drbd-backed-service.html. This
is on SLES11 SP1, with the SLES11 SP1 HAE package installed. The nodes
involved are two Xen images named "sles11-2" and "sles11-3".
I am confused by the output from crm_mon. Note that this is my first HA
cluster, so it's not yet clear to me what I am looking at...
First the current configuration:
sles11-2:~ # crm configure show
node sles11-2
node sles11-3
primitive drbd_r0 ocf:linbit:drbd \
params drbd_resource="r0" \
op monitor interval="15" start="240s"
primitive drbd_r1 ocf:linbit:drbd \
params drbd_resource="r1" \
op monitor interval="15" start="240s"
primitive res_fs_r0 ocf:heartbeat:Filesystem \
    params options="rw,noatime" device="/dev/drbd0" directory="/var/lib/msyql" fstype="ext3"
primitive res_fs_r1 ocf:heartbeat:Filesystem \
    params options="rw,noatime" device="/dev/drbd1" directory="/var/opt" fstype="ext3"
primitive stone-resource stonith:external/xen0 \
    params hostlist="sles11-2:/etc/xen/vm/sles11-2 sles11-3:/etc/xen/vm/sles11-3" dom0="silverton"
ms ms_drbd_r0 drbd_r0 \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
ms ms_drbd_r1 drbd_r1 \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation fs_on_drbd_r0 inf: res_fs_r0 ms_drbd_r0:Master
colocation fs_on_drbd_r1 inf: res_fs_r1 ms_drbd_r1:Master
order fs_after_drbd_r0 inf: ms_drbd_r0:promote res_fs_r0:start
order fs_after_drbd_r1 inf: ms_drbd_r1:promote res_fs_r1:start
property $id="cib-bootstrap-options" \
dc-version="1.1.2-2e096a41a5f9e184a1c1537c82c6da1093698eb5" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
no-quorum-policy="ignore" \
stonith-enabled="true"
Note that I have two DRBD-backed partitions in this configuration. Eventually I
will group all the primitives together; for now I'm just trying to convince
myself that I have a grip on the basic configuration.
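For reference, my rough plan for the eventual grouping looks something like the
following. This is an untested sketch in crm configure syntax; the group name is
a placeholder, and the constraints just mirror the per-filesystem ones above:

```
group mysql-grp res_fs_r0 res_fs_r1
colocation grp_on_drbd_r0 inf: mysql-grp ms_drbd_r0:Master
colocation grp_on_drbd_r1 inf: mysql-grp ms_drbd_r1:Master
order grp_after_drbd_r0 inf: ms_drbd_r0:promote mysql-grp:start
order grp_after_drbd_r1 inf: ms_drbd_r1:promote mysql-grp:start
```

The idea is that the group starts its members in order (r0's filesystem, then
r1's) once both DRBD resources have been promoted.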
Now the output from crm_mon:
sles11-2:~ # crm_mon -1
============
Last updated: Fri Aug 13 11:15:37 2010
Stack: openais
Current DC: sles11-2 - partition with quorum
Version: 1.1.2-2e096a41a5f9e184a1c1537c82c6da1093698eb5
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ sles11-2 sles11-3 ]
 stone-resource (stonith:external/xen0): Started sles11-2
 Master/Slave Set: ms_drbd_r0
     Masters: [ sles11-2 ]
     Slaves: [ sles11-3 ]
 Master/Slave Set: ms_drbd_r1
     Masters: [ sles11-2 ]
     Slaves: [ sles11-3 ]
 res_fs_r1 (ocf::heartbeat:Filesystem): Started sles11-2
Failed actions:
    drbd_r0_monitor_0 (node=sles11-2, call=9, rc=6, status=complete): not configured
    drbd_r1_monitor_0 (node=sles11-2, call=10, rc=6, status=complete): not configured
    res_fs_r0_start_0 (node=sles11-2, call=15, rc=5, status=complete): not installed
    drbd_r0_monitor_0 (node=sles11-3, call=26, rc=6, status=complete): not configured
    drbd_r1_monitor_0 (node=sles11-3, call=27, rc=6, status=complete): not configured
    res_fs_r0_start_0 (node=sles11-3, call=18, rc=5, status=complete): not installed
    res_fs_r1_start_0 (node=sles11-3, call=35, rc=1, status=complete): unknown error
I am confused by the "Failed Actions" section of the output. Why have these
actions failed? Even after several minutes, the state has not changed.
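For anyone else following along: the rc= values in those lines are the standard
OCF resource-agent return codes, which crm_mon translates into the short text
after the colon. A small helper to decode the codes appearing above:

```shell
# Decode the OCF return codes shown in the "Failed actions" lines.
# Meanings are from the OCF resource agent API (subset relevant here).
ocf_rc() {
  case "$1" in
    0) echo "OCF_SUCCESS" ;;
    1) echo "OCF_ERR_GENERIC (unknown error)" ;;
    5) echo "OCF_ERR_INSTALLED (not installed)" ;;
    6) echo "OCF_ERR_CONFIGURED (not configured)" ;;
    7) echo "OCF_NOT_RUNNING" ;;
    *) echo "other code $1" ;;
  esac
}
ocf_rc 6   # the drbd_r0/drbd_r1 monitor_0 failures
ocf_rc 5   # the res_fs_r0 start failures
ocf_rc 1   # the res_fs_r1 start failure on sles11-3
```

So the DRBD agent is reporting a configuration problem, and the Filesystem
agent is reporting that something it needs is not installed or not present.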
Note also that rcdrbd status on sles11-2 returns:
sles11-2:~ # rcdrbd status
drbd driver loaded OK; device status:
version: 8.3.7 (api:88/proto:86-91)
GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by p...@fat-tyre, 2010-01-13 17:17:27
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C
1:r1 Connected Primary/Secondary UpToDate/UpToDate C /var/opt ext3
Why isn't /var/lib/mysql mounted, like /var/opt is?
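Before posting I also ran a few by-hand checks on the Primary (sles11-2); I'm
sketching them here in case the output helps someone spot the problem. The
paths are the ones the Filesystem resources are supposed to use:

```shell
# Manual sanity checks for the Filesystem resources on the DRBD Primary.
# Check that each intended mount point actually exists as a directory.
for d in /var/lib/mysql /var/opt; do
  if [ -d "$d" ]; then
    echo "$d: mount point exists"
  else
    echo "$d: MISSING"
  fi
done
# See which DRBD devices, if any, are currently mounted.
mount | grep drbd || echo "no drbd device currently mounted"
```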
Thanks for any advice.
Best
-PWM
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems