[Linux-HA] Heartbeat v1 and stonith/stonith_host ipmilan
Hi folks!

I have a 2-node setup for MySQL HA. The two nodes access a shared set of SAS disks over a backplane. The services/resources are: an IP address, an LVM VG, two mountpoints and MySQL. Using heartbeat 3.0.4 with a v1 configuration this is a piece of cake. Except, apparently, for stonith.

The documentation hints that stonith and stonith_host should be available when using v1 configurations, and a grep through current sources seems to indicate they are still supported. But the 'stonith' script/binary and the scripts that the old documentation mentions aren't there anymore (when I install on RHEL 6.4). I do have fenced (from cman) and fence_ipmilan (from fence-agents), which looks like it should work for these boxes.

Can heartbeat 3.x, using a v1 config, drive fence_ipmilan or some equivalent? Any hints how, or how to explore/discover/debug?

Please CC me. Thanks!

m

ps: yes, I know about more modern facilities. Trying really hard to keep this simple...

--
martin.langh...@gmail.com
- ask interesting questions
- don't get distracted with shiny stuff
- working code first ~ http://docs.moodle.org/en/User:Martin_Langhoff

___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
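[Editor's note: for readers unfamiliar with heartbeat v1 syntax, the pieces being discussed look roughly like the sketch below. All hostnames, IPs and credentials are placeholders, and the exact stonith plugin parameters vary by plugin; on a build that still ships the agents, 'stonith -L' lists plugin types and 'stonith -t <type> -h' documents each one's expected parameters.]

```
# /etc/ha.d/ha.cf (sketch; db1/db2, addresses and credentials are hypothetical)
node          db1 db2
bcast         eth1
auto_failback off

# v1 stonith directive: stonith_host <from-host> <plugin> <plugin params...>
# Parameter order is plugin-specific -- check 'stonith -t external/ipmi -h'.
stonith_host db1 external/ipmi db2 10.0.0.2 admin secret lan
stonith_host db2 external/ipmi db1 10.0.0.1 admin secret lan

# /etc/ha.d/haresources (sketch): IP + VG + two mounts + mysql, one resource group
db1 IPaddr::192.168.1.100/24 LVM::vg_mysql \
    Filesystem::/dev/vg_mysql/data::/var/lib/mysql::ext4 \
    Filesystem::/dev/vg_mysql/logs::/var/log/mysql::ext4 \
    mysqld
```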
Re: [Linux-HA] Heartbeat v1 and stonith/stonith_host ipmilan
On Wed, Jul 17, 2013 at 6:03 PM, Martin Langhoff <martin.langh...@gmail.com> wrote:
> But the 'stonith' script/binary and the scripts that the old
> documentation indicates aren't there anymore (when I install on RHEL 6.4).

Configuring 'stonith_host external foo bar baz' led me in the right direction. heartbeat knows what to do, but on RHEL/CentOS/SL 6.x cluster-glue no longer includes stonith agents. Some info at http://www.gossamer-threads.com/lists/linuxha/pacemaker/74487

So I rebuilt the RPMs for cluster-glue, reversing that removal. It is a dicey proposition, of course, to set up a cluster that I expect to be long-lived based on software that folks are rushing to deprecate.

But I have played with corosync + pacemaker extensively, and TBH they are way overkill for a simple setup. Is there a _simple_ setup guide for a two-node cluster? Y'know, LVM, a couple of mountpoints, one server daemon (mysql)?

I am not afraid of complexity; but I like to pick where to invest in complexity :-)

cheers,

m
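[Editor's note: whichever stack ends up driving it, fence_ipmilan can be exercised by hand first to confirm the BMCs respond. The address and credentials below are placeholders.]

```shell
# Query the peer BMC's power state (placeholders for IP, user, password)
fence_ipmilan -a 10.0.0.2 -l admin -p secret -o status

# Full fence test: power-cycle the peer (will really reboot it!)
fence_ipmilan -a 10.0.0.2 -l admin -p secret -o reboot
```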
Re: [Linux-HA] Heartbeat v1 and stonith/stonith_host ipmilan
On 17/07/13 20:43, Martin Langhoff wrote:
> But I have played with corosync + pacemaker extensively, and TBH they
> are way overkill for a simple setup. Is there a _simple_ setup guide
> for a two-node cluster? Y'know, LVM, a couple of mountpoints, one
> server daemon (mysql)?

The easiest, native way under RHEL/CentOS is to use corosync + cman + rgmanager. The configuration you are describing will be simple, will be properly supported until 2020 (at least), and will not need hacks.

If you're interested in this approach, I can help. Here or on #linux-cluster on freenode's IRC.

digimer

--
Digimer
Papers and Projects: https://alteeve.ca/w/
"What if the cure for cancer is trapped in the mind of a person without access to education?"
Re: [Linux-HA] Heartbeat v1 and stonith/stonith_host ipmilan
On Wed, Jul 17, 2013 at 9:34 PM, Digimer <li...@alteeve.ca> wrote:
> The easiest, native way under RHEL/CentOS is to use corosync + cman +
> rgmanager. [...] If you're interested in this approach, I can help.

Thanks for the offer to help. Is there any clear setup guide you can point me to?

My TZ is EDT, so midnight (bedtime!) now. I won't be awake and on email/irc until tomorrow morning.

m
Re: [Linux-HA] Heartbeat v1 and stonith/stonith_host ipmilan
On 18/07/13 00:12, Martin Langhoff wrote:
> Thanks for the offer to help. Is there any clear setup guide you can
> point me to? My TZ is EDT, so midnight (bedtime!) now.

Heh, same timezone, but I'm more of a night owl. :)

I have a tutorial that was written for people who want to host highly-available VMs on a two-node Red Hat cluster. It goes into a lot of detail that you may not be interested in, but I think it's pretty comprehensive (I tried to assume no prior knowledge of HA), so perhaps you can tease out the parts you're interested in:

https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial

Your configuration would basically need:

* Node definitions with fence methods defined
* A resource section covering your storage and daemon
* A failover domain to control which node is primary for a given service and which is the backup

The tutorial covers clustered LVM and uses the GFS2 clustered file system, so it anticipates a somewhat complex setup. If you are looking for simple failover, you can skip all of that. You could even drop LVM altogether, if your goal is simply to support MySQL's data storage.

So the config, in this case, would be:

* The cluster name is foo
* This is a two-node cluster (disable quorum)
** Node 1 is this, and here is how you fence it
** Node 2 is this, and here is how you fence it
* Resources:
** I have a file system resource called X mounted at Y
** I have a script resource that controls daemon Z
* Failover domain:
** I have an ordered domain that says run on node 1 when possible, node 2 otherwise; if you fail over to node 2, stay there when node 1 returns
* Service:
** Create an ordered service that follows the rules set in the failover domain. This service requires the FS to mount before the daemon starts; stop in the reverse order.

That's it. It might seem a little overwhelming at first, but it really is pretty simple. You already understand the concept of fencing, which trips up most people, so you're more than half-way there.

So long as your switch handles multicast, you're golden. If not, no big deal, just add the configuration option that forces unicast mode.

hope this helps

digimer
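[Editor's note: that outline maps onto a cluster.conf along these lines. This is a sketch only -- hostnames, IPMI addresses, devices and paths are placeholders, and a real config should be checked with ccs_config_validate before use.]

```xml
<?xml version="1.0"?>
<cluster name="foo" config_version="1">
  <!-- Two-node mode: quorum effectively disabled, as described above -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="ipmi">
          <device name="fence_n1" action="reboot"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.example.com" nodeid="2">
      <fence>
        <method name="ipmi">
          <device name="fence_n2" action="reboot"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="fence_n1" agent="fence_ipmilan" ipaddr="10.0.0.1" login="admin" passwd="secret"/>
    <fencedevice name="fence_n2" agent="fence_ipmilan" ipaddr="10.0.0.2" login="admin" passwd="secret"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <!-- ordered="1": prefer node1; nofailback="1": stay on node2 after failover -->
      <failoverdomain name="primary_n1" ordered="1" nofailback="1">
        <failoverdomainnode name="node1.example.com" priority="1"/>
        <failoverdomainnode name="node2.example.com" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <fs name="mysql_fs" device="/dev/sdb1" mountpoint="/var/lib/mysql" fstype="ext4"/>
      <script name="mysqld" file="/etc/init.d/mysqld"/>
    </resources>
    <service name="mysql" domain="primary_n1" recovery="relocate">
      <!-- Nesting orders startup: the fs mounts before the daemon; stop reverses -->
      <fs ref="mysql_fs">
        <script ref="mysqld"/>
      </fs>
    </service>
  </rm>
</cluster>
```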