RH Cluster is a bad
joke.
I have used various HA
solutions, including VCS, SunCluster, HACMP, and even MSCS, and without
a doubt, RH Cluster sux. It lacks features, and its main defense
mechanism against split-brain is "Shoot The Other Node In The Head"
(STONITH) via a UPS, a Fibre Channel link, or the like (they call it
"fencing"). Instead of better logic (how do you detect split-brain?
How do you prevent it?), they use brute force in a way I didn't like.
In my simple tests
(I used httpd as a resource) the cluster was unable to recover from a
simple "pkill httpd" on the active node, and completely flunked my
tests.
I would recommend you
check Linux-HA. It looks OK, seems adjustable to your needs, and
would probably work better. It is a bit more complicated to set up
(although not too complicated), but it can be controlled via
simple scripts, which can probably do what you want it to do.
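For a flavor of how simple it can be, a minimal two-node Heartbeat (Linux-HA v1) setup looks roughly like this. The node names, interface, and virtual IP below are made up; adjust to your environment:

```
# /etc/ha.d/ha.cf -- basic cluster communication settings
keepalive 2        # seconds between heartbeats
deadtime 30        # declare a node dead after 30s of silence
bcast eth0         # send heartbeats via broadcast on eth0
auto_failback on
node node1
node node2

# /etc/ha.d/haresources -- resources, preferred owner listed first;
# node1 normally runs the virtual IP and httpd, node2 takes over on failure
node1 IPaddr::192.168.0.100 httpd
```

(There is also an /etc/ha.d/authkeys file for authenticating the heartbeat link, which I've left out here.)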
Ez.
Ira Abramov wrote:
Quoting Vitaly Karasik, from the post of Sun, 20 Aug:
so, is there a config error here, or should I dump the whole iSCSI
concept? is there a way to install a red-hat cluster of three
CENTOS3 machines with no common storage? I just need IP addresses
and processes moving around between the nodes, the application
vendor ONLY supports Red Hat 3 and its clustering, but won't supply
instructions or recommended procedures. arrrrggh!
As far as I remember, RHEL3 Cluster Manager cannot work without shared
storage and doesn't support an iSCSI device as shared storage (at
least, RH doesn't promise that this configuration will be stable).
It works just fine: RHEL Cluster with two common raw devices for the
quorum. I didn't bother setting up GFS in the end, since it was not
important.
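For the record, on RHEL3 the quorum partitions are bound to raw devices via /etc/sysconfig/rawdevices; the block devices below are made-up examples, use whatever partitions your shared storage exposes:

```
# /etc/sysconfig/rawdevices
# raw device     block device (shared quorum partitions)
/dev/raw/raw1    /dev/sdb1
/dev/raw/raw2    /dev/sdb2
```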
I was very disappointed with the RH cluster manager though. All it does
is move around a list of services with no dependencies on each other. That's
quite a lot, but it's missing some needed features, like defining a logical
link or block: service A and B must migrate to new nodes together, but not
to one that already runs service C, for instance. Nope, I can only define
which nodes each service migrates to, and that's it. For instance, my
client wanted a very simple case where three machines run two services.
If any of the three machines fails, the other two take over the two
services that need to run, but I can't have both services migrating to
the same node, and there is no way to prevent this using this tool. I'll
have to make funny improvisations in the startup files to get the service
to "fail" for the cluster manager and force it to migrate further to another
node if this one is busy. This is an ugly kludge, and the only "right"
solution, per RHEL, is to have 4 rather than 3 machines, each pair
taking care of one service, and that's it. Ridiculous :-(
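The improvisation described above might look something like this sketch: a start wrapper that exits non-zero on purpose when the other service is already on this node, so the cluster manager treats it as a failed start and migrates the service further. The pidfile path and messages are made up for illustration:

```shell
#!/bin/sh
# Hypothetical start wrapper (the "funny improvisation" above).
# If the *other* service's pidfile exists on this node, refuse to
# start; a non-zero exit makes the cluster manager count the start
# as failed and try the next node in the failover domain.

start_service() {
    other_pidfile="$1"
    if [ -f "$other_pidfile" ]; then
        echo "refusing: other service already on this node"
        return 1
    fi
    # ...here the real init script would be invoked, e.g.
    # /etc/init.d/httpd start
    echo "starting"
    return 0
}

# demo: no pidfile present, so the start goes ahead
start_service /tmp/no-such-pidfile.pid
```

It works, but you're lying to the cluster manager about why the start failed, which is exactly why it's a kludge and not a colocation constraint.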