On 08/29/2013 06:55 PM, Ed Santora wrote:
I wasn't able to determine what the problem was. I was able to reproduce the
same behavior on fresh installs of two systems.

After I downgraded to drbd-utils84-8.4.2 and kmod-drbd84-8.4.2,
pacemaker was able to promote the drbd resource on one node. I was able
to move the resources and put a node in standby, with pacemaker doing
the right thing.

It seems something in the drbd84-8.4.3 RPMs no longer works with pacemaker
1.1.8.
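In case it helps anyone hitting the same thing, the downgrade I did looks roughly like this (package names as reported above; the maintenance-mode step and exact yum invocation are my assumptions, and available versions depend on your repository):

```shell
# Tell pacemaker to leave resources alone while the packages are swapped
# (assumes pcs is managing the cluster):
pcs property set maintenance-mode=true

# Downgrade to the 8.4.2 packages (names as in this report; your
# repository's package names/versions may differ):
yum downgrade drbd-utils84-8.4.2 kmod-drbd84-8.4.2

# Reload the DRBD kernel module, then hand control back to pacemaker:
service drbd restart
pcs property set maintenance-mode=false
```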


FWIW, I have not seen your problem with my setup of two VMs running EL6.4, drbd84-8.4.3, pacemaker 1.1.10, and resource-agents-3.9.5, the latter two built from github. I am new to drbd and pacemaker, so I may very well not yet have bumped into what you are experiencing. There is also drbd 8.4.4rc1, which might be worth a try.

[root@z02 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by mockbuild@Build64R6, 2013-07-06 12:36:38
m:res    cs         ro                 ds                 p  mounted         fstype
0:mysql  Connected  Primary/Secondary  UpToDate/UpToDate  C  /var/lib/mysql  ext4

[root@z02 ~]# pcs status
Last updated: Thu Aug 29 19:33:02 2013
Last change: Thu Aug 29 19:03:29 2013 via crm_resource on z02
Stack: cman
Current DC: z01 - partition with quorum
Version: 1.1.10-4.el6
2 Nodes configured
5 Resources configured


Online: [ z01 z02 ]

Full list of resources:

 Resource Group: mysql
     mysql_fs   (ocf::heartbeat:Filesystem):    Started z02
     mysql_ip   (ocf::heartbeat:IPaddr2):       Started z02
     mysqld     (lsb:mysqld):   Started z02
 Master/Slave Set: ms_drbd_mysql [drbd_mysql]
     Masters: [ z02 ]
     Slaves: [ z01 ]
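For comparison, a configuration matching that status output can be built with pcs commands roughly like these (resource and agent names are taken from the output above; the drbd_resource name, device, IP address, and meta/option values are placeholders I filled in, not from my actual config):

```shell
# DRBD resource plus master/slave wrapper (ms_drbd_mysql / drbd_mysql
# as in the status output; drbd_resource=mysql is an assumption):
pcs resource create drbd_mysql ocf:linbit:drbd drbd_resource=mysql \
    op monitor interval=30s
pcs resource master ms_drbd_mysql drbd_mysql \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

# Filesystem, VIP, and mysqld in the "mysql" group
# (device and IP are placeholders):
pcs resource create mysql_fs ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/var/lib/mysql fstype=ext4
pcs resource create mysql_ip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24
pcs resource create mysqld lsb:mysqld
pcs resource group add mysql mysql_fs mysql_ip mysqld

# Run the group where DRBD is primary, and only after promotion:
pcs constraint colocation add mysql with master ms_drbd_mysql INFINITY
pcs constraint order promote ms_drbd_mysql then start mysql
```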

Regards,
Patrick
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
