Hi everyone,
I'm getting started with Linux-HA, and to learn it in a sandbox I'm
setting up two Ubuntu Hardy 8.04 VMs configured as follows:
- 2 NICs: one for internet access and one for the heartbeat protocol
- Heartbeat 2.1.3 installed via apt-get
- hb_gui installed via apt-get
I minimally configured ha.cf, haresources and authkeys by following
Alan's screencast and got it up and running pretty quickly. I'm
finding it a pretty large leap to proceed beyond the screencast,
though. If there are any books that give a thorough rundown of
Linux-HA, I'd appreciate a pointer. I've looked at tons of FAQs and
wiki entries, but until I have the full framework in my head they
seem very fragmented, so pointers to something more complete would
be helpful.
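For reference, this is roughly the minimal configuration I ended up
with (the interface and node names are from my sandbox, so yours will
differ):

```
# /etc/ha.d/ha.cf
bcast eth1          # the dedicated heartbeat NIC
node ha1 ha2        # uname -n of the two VMs
crm yes             # enable the 2.x CRM so hb_gui can manage resources

# /etc/ha.d/authkeys (must be chmod 600)
auth 1
1 sha1 SomeSharedSecret
```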
After I got the basic cluster up, I started using hb_gui to learn the
behavior of the system and used it to set up a configuration like so:
a resource group with apache2, postgresql, and IPaddr2 resources. By
stopping heartbeat on one machine, I can watch the failover occur, the
IP address takeover happen, etc. Some basic actions were happening as
I would expect, so that's cool.
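In case it helps anyone reproduce this, I've been watching the
failover from a shell roughly like this (assuming the CRM tools are
on the PATH):

```
crm_mon -1                        # one-shot status: DC, node states, resources
sudo /etc/init.d/heartbeat stop   # on the active node, to force a failover
crm_mon -1                        # on the surviving node, to watch the takeover
```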
The questions I have at this stage of my learning are part practical,
part conceptual, part stupid:
1) After using hb_gui to configure the cluster, I found that none of
the configuration files I had initially edited had changed. Thinking
hb_gui might have created a cib.xml file, I did a global find for
cib.xml, but none exists. Question is: where is the configuration I
modified using hb_gui being stored?
2) I'm trying to understand the behavior of the various entities. In
hb_gui, you can create a native resource without selecting the
checkbox for clone or master/slave and then add it. Or you can choose
a master/slave or clone type, but neither of these can be added to a
group. Conceptually, I'm missing something. Why can't a clone or
master/slave resource be part of a group? And when a resource is not
set to be a clone or master/slave, what is it called?
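To make the question concrete, here's the shape of the CIB XML as I
understand it so far -- plain resources nested in a group versus a
standalone clone (the ids are made up):

```
<group id="mygroup">
  <primitive id="myip" class="ocf" provider="heartbeat" type="IPaddr2"/>
  <primitive id="myapache" class="ocf" provider="heartbeat" type="apache"/>
</group>

<clone id="mypostgres-clone">
  <primitive id="mypostgres" class="ocf" provider="heartbeat" type="pgsql"/>
</clone>
```

What I can't express is a clone sitting inside a group, and I don't
understand why.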
3) I'm expecting to evolve to a cluster that works something like this:
a) two-node, primary/secondary - all application requests go only
to the primary node unless failover occurs
b) apache, postgresql, mysql and some custom services are always
running on both machines to reduce startup times on failover
c) postgresql and mysql are continuously replicated from the
primary to the secondary
I plan to use IPaddr2 for IP address takeover (i.e., eth0:0), so I
will need to bind apache to that address, but the address won't exist
on the secondary machine until failover. I'm trying to avoid apache's
startup time by having it always running on both boxes. What's the
best way to dynamically bind apache to eth0:0? I'm not sure I'm
asking the right question. Maybe more generally: does it make sense
to have apache running on both machines when IP address takeover is
configured?
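One approach I'm considering (I don't know whether it's the
recommended one) is to have apache listen on the wildcard address, so
it can start before the cluster IP exists on the box and will answer
on that IP as soon as IPaddr2 brings it up, without a restart:

```
# /etc/apache2/ports.conf
# Bind to all local addresses instead of the (not-yet-present)
# cluster IP; connections to the takeover address are then accepted
# as soon as the address appears on the interface.
Listen 0.0.0.0:80
```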
Other questions I have are these:
1) Is it possible to force the DC node to switch to a different
cluster node?
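For context, I'm currently identifying the DC with crm_mon in
one-shot mode:

```
crm_mon -1 | grep "Current DC"
```

I haven't found any way to move it on purpose.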
2) If I want one instance of postgresql to always run on each cluster
node, but during failover I don't want a second instance to be
started elsewhere (the surviving node already has one), how is that
set up? Right now, if I run clones on all nodes, then during failover
new resources are shown as started on the working node - for example,
the postgresql2 resource running on the second node starts running on
the first node if node2 goes down. I don't want that behavior, but
I'm not sure how to describe the configuration to hb_gui, or even how
to describe it conceptually in Linux-HA terms.
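I suspect the clone attributes clone_max and clone_node_max are
involved -- something like the following, which I believe should cap
the clone at one instance per node so the survivor never starts a
second copy -- but I haven't been able to confirm this (ids are made
up):

```
<clone id="postgres-clone">
  <instance_attributes id="postgres-clone-attrs">
    <attributes>
      <!-- two instances total, at most one per node -->
      <nvpair id="pg-clone-max" name="clone_max" value="2"/>
      <nvpair id="pg-clone-node-max" name="clone_node_max" value="1"/>
    </attributes>
  </instance_attributes>
  <primitive id="postgres" class="ocf" provider="heartbeat" type="pgsql"/>
</clone>
```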
Well, that's probably enough questions right now - I figure if I can
get a grip on some of these, maybe other questions I have will answer
themselves.
Thanks for any help through the learning curve here,
Landon
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems