Hi, I am trying to use Heartbeat to manage a multi-master MySQL configuration that currently consists of two nodes. Both nodes should be functionally equivalent, and MySQL should be kept running on all nodes as much as possible so that changes stay synchronized.
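For reference, a minimal sketch of the kind of CIB configuration I mean, in crm shell syntax (the resource names, IP address, and parameters here are just placeholders, not my exact config):

```
primitive p_mysql ocf:heartbeat:mysql \
    op monitor interval=30s
clone cl_mysql p_mysql \
    meta clone-max=2 clone-node-max=1
primitive p_vip ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100
colocation vip_with_mysql inf: p_vip cl_mysql
```

The colocation constraint keeps the virtual IP on a node that has a running MySQL clone instance.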
To do this, I have created a clone resource to keep MySQL running on multiple machines, and then added an IP resource colocated with the MySQL clone. This works as I would expect: when Heartbeat starts, MySQL starts on both nodes, and if the MySQL resource on one node is made to fail, the IP resource moves to the other node, which still has a running MySQL instance.

After this has occurred, I would like to restart the failed MySQL server while leaving the newly active server untouched. However, as far as I can tell, there is no way via either the GUI or crm_resource to force a single instance of a clone back online after a failure. The only way I have found to restore the original node is to do a "resource cleanup," which does work, but causes *both* MySQL clone instances to restart. This bounces the daemon on the active node, which interrupts service.

So, I have a few questions:

1) Is it possible to manually restart a single instance of a clone resource?
2) Is there a way to have the "cleanup" bring the failed node back but *not* restart the active node?
3) Is there a better way to configure Heartbeat to deal with multi-master MySQL? I noticed another poster was running MySQL outside of Heartbeat and using Heartbeat only to monitor it - is that a preferred solution?

Thanks for any help.

Jeremy
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
