On Sat, Feb 9, 2013 at 4:23 AM, Adrian Allen <[email protected]> wrote:
> Thanks for the reply, Lon.
>
> So it sounds like I may have misunderstood the purpose of this plugin to
> start with - the "Guest Fencing" page says this is the plugin to use for my
> situation. Is that page incorrect or am I still not grasping what we're
> talking about?
>
> I can use just about any software I want, I'm just trying to accomplish
> getting stonith working on a simple cluster where there will be two KVM
> hosts, each running some number of KVM guests on SL 6.2 or 6.3.  I want to
> be able to use corosync/pacemaker to manage HA services by shooting a
> misbehaving VM and moving services to another VM on a different host.
>
> Each guest is a single-purpose machine which will run an openvpn instance,
> or a jboss app, or a webserver, etc.  - so I don't really care whether the
> monitoring for health of services is done from inside the VM or from its
> host (i.e. the guest does not necessarily need to know it's part of a
> cluster).
>
> Is there a method you would suggest for accomplishing this?

Not quite yet.

I had a chat with Lon today, and the "checkpoint" backend is close to
what we want, but it currently requires cman and openais.
Those two dependencies need to be removed.

Once that's done, you'll essentially just be running corosync (instead
of qpid) on the hosts (no need for pacemaker).
If you happen to have some C skills, I'd be happy to point you in the
right direction.
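
To give a rough idea, the fence_virtd section on the hosts would then
look something like this (a sketch only, not a tested config - the
checkpoint backend isn't usable for your setup until the cman/openais
dependencies are gone):

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "checkpoint";
    listener = "multicast";
}

The multicast listener block and key_file would stay the same as in a
libvirt-qpid setup.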

>
> Since I haven't been able to get stonith working reliably, right now we are
> just using manual failover in the event of a problem, and only using HA
> clustering on services where stonith is not particularly important
> (example, openvpn servers where there is no danger of data corruption if
> the service is started in two locations by mistake).
>
>
>
>
> On Fri, Feb 8, 2013 at 9:46 AM, Lon Hohberger <[email protected]> wrote:
>
>> On 01/08/2013 06:47 PM, Adrian Allen wrote:
>> > I'm new to clustering, and I'm trying to get fencing working on a small
>> > cluster consisting of only two KVM hosts which are each running one
>> guest.
>> > The hosts are running many other guests, but only one guest on each host
>> is
>> > part of the cluster.
>> >
>> > Corosync/pacemaker work fine, the cluster operates normally and services
>> > fail back and forth correctly, etc. - but I can't get everything to show
>> up
>> > correctly with 'fence_xvm -o list'.
>> >
>> > If I do not use qmf/qpidd, then each host can see its guest, and the
>> guests
>> > can see themselves only (i.e. box1 sees box1 but not box2, box2 sees box2
>> > but not box1).  It is stated on the wiki that qpid should be used for
>> > guests running on different hosts anyway, so I'm assuming that this
>> problem
>> > comes from not using qpid.
>> >
>> > However, if I *do* use qpid, nothing shows up in 'fence_xvm -o list' at
>> > all, on any guest or host.
>> >
>> > I'm running SL 6.3 and I've installed fence-virtd-libvirt-qpid
>> libvirt-qmf.
>> > My fence_virt.conf is:
>> >
>> > backends {
>> >     libvirt-qpid {
>> >         uri = "qemu:///system";
>> >     }
>>
>> Sorry for the very delayed response.
>>
>> So - the libvirt-qpid plugin needs to have both hosts connected to the
>> same broker; it's not like the checkpoint plugin, which does
>> auto-sharing of VM states across a cluster.
>>
>> Alternatively, your qpid brokers could be routed through a central broker.
>>
>> (In theory, you could run a replicated broker using corosync as the
>> backing transport for two instances of qpid, but I've never tried this)
>>
>> You can use 'host' and 'port' to connect the fence-virt-libvirt-qmf
>> instances to other brokers; see fence_virt.conf man page.
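>>
>> For example, the backend section might look something like this
>> (hostname and port here are placeholders for your central broker;
>> see the fence_virt.conf man page for the exact options):
>>
>> backends {
>>     libvirt-qpid {
>>         uri = "qemu:///system";
>>         host = "broker.example.com";
>>         port = "5672";
>>     }
>> }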
>>
>> -- Lon
>>
>>
>> >
>> > }
>> >
>> > listeners {
>> >     multicast {
>> >         port = "1229";
>> >         family = "ipv4";
>> >         address = "225.0.0.12";
>> >         key_file = "/etc/cluster/fence_xvm.key";
>> >     }
>> > }
>> >
>> > fence_virtd {
>> >     module_path = "/usr/lib64/fence-virt";
>> >     backend = "libvirt-qpid";
>> >     listener = "multicast";
>> > }
>> >
>> >
>> > The key files match on all hosts, iptables is completely disabled, and
>> > fence_virtd is running only on the hosts, not guests.  libvirt-qmf is
>> > running on all guests and hosts.
>> >
>> > Has anyone gotten this configuration to work successfully?  Or
>> > alternatively, does anyone know of another method to manage stonith vs.
>> kvm
>> > guests on separate host machines?
>> > _______________________________________________
>> > Linux-HA mailing list
>> > [email protected]
>> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
>> > See also: http://linux-ha.org/ReportingProblems
>> >
>>
>>
