On Fri, Dec 14, 2012 at 7:53 PM, Emmanuel Saint-Joanis <[email protected]> wrote:
>> Assuming the second node does not run pacemaker .... switch the cluster
>> into maintenance-mode, adjust your crm configuration to reflect your
>> changes to the VMs, and restart pacemaker.
>>
>> Once you have made sure all is in place and all VM configurations are
>> successfully probed, disable maintenance-mode and start pacemaker on the
>> second node.
>
> Thank you, Andreas, for those explanations. I'd like to present my own
> case, which is closely related: I have sometimes run into situations
> where the cluster was stuck and resources would not start, so I did:
>
>   crm configure save /tmp/backup
>   stop pacemaker & corosync
>   rm /var/lib/heartbeat/*/* /var/lib/pengine/*
>   start corosync & pacemaker
>   crm configure load update /tmp/backup
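The procedure quoted above can be sketched as a script. This is a hedged sketch, not an endorsed recovery method: the `service` invocations and the state paths are assumptions that vary by distribution (newer Pacemaker releases keep PE files under /var/lib/pacemaker/ rather than /var/lib/pengine/), and wiping the state directories discards all recorded resource status:

```shell
#!/bin/sh
# Last-resort cluster-state reset, as described in the quoted message.
# WARNING: this throws away all cached resource state on this node.
# Service names and paths are assumptions; adjust for your distribution.

crm configure save /tmp/backup        # back up the CIB configuration first

service pacemaker stop                # stop the cluster stack
service corosync stop

# Wipe the cached CIB and the policy engine transition files
rm -f /var/lib/heartbeat/*/* /var/lib/pengine/*

service corosync start                # bring the stack back up
service pacemaker start

crm configure load update /tmp/backup # restore the saved configuration
```

As the reply below notes, a plain stop/start already discards resource state, so the `rm` step is normally unnecessary.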
That's basically the same as:

  stop corosync/pacemaker
  start corosync/pacemaker

The resource state is discarded when the cluster stops. But please do file a bug report the next time this happens; if pacemaker is misbehaving, we need to know about it so we can fix it.

> Those actions seemed to fully unlock everything, and all the nodes
> started up happily, but I have some dislike for this ultimate weapon.
>
> So, do you know a way to ->reset<- all states? I mean what we see by
> doing cibadmin -Q, which represents the current state.

cibadmin --erase or crm_resource --reprobe

Check the man pages for what they do.

> By the way, is there some other kind of memory of cluster history, such
> as the current scores of resources, for example?

The policy engine files can be replayed to show this.

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
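[Editor's note for the archive: the commands mentioned in the reply above can be sketched as follows. The pe-input file name is a hypothetical example; pick a real pe-input-*.bz2 from your pengine directory, whose location varies by Pacemaker version.]

```shell
# Reset options mentioned above -- both are destructive/disruptive:
cibadmin --erase           # wipe the entire CIB contents
crm_resource --reprobe     # forget recorded resource state, re-probe all nodes

# Replaying a policy engine file offline to inspect allocation scores.
# The file name is an example; use an actual pe-input-*.bz2 file.
crm_simulate --show-scores --xml-file /var/lib/pengine/pe-input-100.bz2
```

crm_simulate reads the saved transition input and prints the scores the policy engine computed for each resource/node pair, which answers the "memory of cluster history" question without touching the live cluster.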
