On Tue, Jul 01, 2008 at 09:35:29AM -0400, John Cronin wrote:
> It is the last part of your post that I have found causes the most problems
> in my experience: the need to propagate changes to all affected zones, in
> order to keep them identical.

Check.  It does require rigour.  On the other hand, if you apply a patch
on one node and not on all others, a zone will generally fail to attach.
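As an illustration of where the mismatch bites (the zone name "appzone" is hypothetical, and exact messages vary by Solaris release), a sketch of the detach/attach cycle:

```shell
# On the node the zone is leaving:
zoneadm -z appzone detach

# On the target node: a plain attach validates that the zone's
# patch/package level matches the target's global zone, and will
# refuse to attach if the nodes have drifted apart.
zoneadm -z appzone attach

# "attach -u" can instead update the zone to the target node's
# patch level on attach, rather than failing outright.
zoneadm -z appzone attach -u
```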

> This same issue has led me to generally prefer putting a single copy of the
> application binaries and configuration on shared storage, and move it with
> the apps, rather having a local copy of the binaries and configuration on
> each node in a cluster.

Oh, we do that.  It's just the zone root that doesn't fail over;
application binaries and configuration live (as much as possible)
strictly on shared storage.
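For anyone setting this up, one common way to get shared-storage binaries visible inside the zone is a loopback mount configured on each node; a hypothetical zonecfg fragment (zone name and paths are made up):

```shell
# Map /shared/app (on shared storage) to /opt/app inside the zone,
# so the binaries follow the storage rather than living in the
# zone root on each node.
zonecfg -z appzone
zonecfg:appzone> add fs
zonecfg:appzone:fs> set dir=/opt/app
zonecfg:appzone:fs> set special=/shared/app
zonecfg:appzone:fs> set type=lofs
zonecfg:appzone:fs> end
zonecfg:appzone> commit
```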

> For those who don't know, full root zones take much longer to boot - similar
> to a regular server boot.  Sparse root zones generally take a few seconds to
> boot, at most.

We don't really see that - a full root zone on our system is normally up
within 20 seconds.  Either way, in my case I was more concerned with the
time it took to shut down a zone before failing it over - this often
seemed to take on the order of minutes (particularly under "problematic"
circumstances), which was unacceptable for our environment.
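For what it's worth, the slow path is usually the graceful shutdown inside the zone, which can block on a wedged application; a sketch of the two options (zone name hypothetical):

```shell
# Graceful: runs the zone's own shutdown sequence.  Clean, but can
# hang for minutes if an application inside the zone is stuck.
zlogin appzone shutdown -y -i5 -g0

# Hard halt: tears the zone down immediately, skipping in-zone
# shutdown scripts - useful when failover time is what matters.
zoneadm -z appzone halt
```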

> Generally, with Solaris zones, patching is the biggest problem.  When you
> are failing the zones around, it adds one more layer of complexity.

Agreed; see above note on patching.

Ceri
-- 
That must be wonderful!  I don't understand it at all.
                                                  -- Moliere

_______________________________________________
Veritas-ha maillist  -  Veritas-ha@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-ha
