I am constructing a fairly complex environment to provide high availability 
using zones (containers), IPMP, Veritas Cluster Server, Veritas Volume Manager, 
and Veritas File System. In this implementation, I am concerned with both the 
high availability aspects and the maintainability aspects. I believe I have 
solved most of the problems, but I just realized that I need some advice 
regarding zone creation for the clustering environment.

What I believe to be true based on discussions with both Sun and Symantec:
- IPMP has no known issues in this setup. Network HA requires the use of an 
IPMP-managed network interface in the zone definition (illustrated in the 
zonecfg sketch after this list).
- DNS is used in both the global and non-global zones. (site requirement)
- The Release 4.1 versions of Veritas Volume Manager, File System, and Cluster 
Server must be used on Solaris 10. 
- Zone roots must be set up on UFS filesystems so that they can be mounted for 
patching and/or OS upgrades.
- Zone roots can be set up on SAN LUNs as long as the HBA drivers are in the 
Solaris 10 DVD miniroot; this allows for OS upgrades. We are using the 
Sun-branded QLogic HBAs, which are handled by the Leadville driver that is part 
of the miniroot, so this criterion is covered.
- In order to have multipath failover on all of the filesystems, defined both 
on Solaris slices and on Veritas Volume Manager volumes, we need to turn on 
MPxIO and disable Veritas DMP; otherwise the zone roots would not be protected 
from a path failure. (The MPxIO enable command I expect to use appears after 
this list.)
- Identically named zones must be set up on each of the target nodes of a VCS 
HA cluster in order for the non-global-zone-based applications to fail over 
under VCS. (per Symantec)
- The zone roots must be on locally mounted disk on each node to allow them to 
be patched or upgraded while keeping the application(s) available on a 
different target node. (per Symantec)
- Performing patch or upgrade maintenance on a server hosting the zone requires 
that the zone and its application(s) be failed over to a different target node 
and that a cluster resource group freeze be implemented. Once the maintenance 
is complete and the patched/upgraded zone has been rebooted (and then shut back 
down), the cluster resource group can be unfrozen. This effort is then repeated 
on each of the target nodes until all of the servers have the same patches 
and/or OS upgrades installed. (per Symantec; a rough VCS command sequence for 
this follows the list)
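
For reference, here is roughly what I expect the zone creation to look like on 
each node. This is only a sketch; the zone name (appzone1), zonepath, address, 
and interface (ce0, assumed here to be a member of the IPMP group) are 
placeholders for our real values, and I am assuming autoboot=false since VCS 
will be responsible for bringing the zone up:

   # zonecfg -z appzone1
   zonecfg:appzone1> create
   zonecfg:appzone1> set zonepath=/zones/appzone1
   zonecfg:appzone1> set autoboot=false
   zonecfg:appzone1> add net
   zonecfg:appzone1:net> set physical=ce0
   zonecfg:appzone1:net> set address=10.1.1.50/24
   zonecfg:appzone1:net> end
   zonecfg:appzone1> verify
   zonecfg:appzone1> commit
   zonecfg:appzone1> exit
   # zoneadm -z appzone1 install

(physical=ce0 is whichever interface belongs to the IPMP group on that node.)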
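
As I understand it, turning on MPxIO for the FC paths is just the following 
(requires a reboot; the DMP side would then be disabled for those devices per 
the Veritas documentation):

   # stmsboot -e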
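
The maintenance sequence I have in mind, in VCS terms (group and node names are 
placeholders, and the persistent freeze requires the configuration to be 
writable):

   # hagrp -switch appzone1_grp -to node2
   # haconf -makerw
   # hagrp -freeze appzone1_grp -persistent
   ... patch/upgrade the global zone and zone root on node1 ...
   # hagrp -unfreeze appzone1_grp -persistent
   # haconf -dump -makero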

Here's where I need advice:

Each of the failover-capable zones must have the same name on each hardware 
node, which in turn implies that they must use the same IP address, at least 
when being set up initially. Once the zones are set up, is there any reason the 
individual zone configurations cannot be modified so that each zone has a 
different IP maintenance address on each hardware node? There would be a DNS 
entry for the failover zone name that matches a virtual address managed by VCS, 
plus maintenance names for the zone addresses on each target node to provide 
network accessibility for testing after patching/upgrades (rather like the 
maintenance addresses in an IPMP configuration). If this is workable, are the 
only places that would need the changes the zone xml file and the entries in 
the zone's /etc/hosts (and hostname.xxx) files? A sketch of the change I have 
in mind follows.
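
If the per-node maintenance address is acceptable, I am assuming the change on 
each node would be made through zonecfg rather than by editing the xml file 
directly, something like this (addresses are placeholders):

   # zonecfg -z appzone1
   zonecfg:appzone1> select net address=10.1.1.50/24
   zonecfg:appzone1:net> set address=10.1.1.61/24
   zonecfg:appzone1:net> end
   zonecfg:appzone1> commit
   zonecfg:appzone1> exit

zonecfg rewrites /etc/zones/appzone1.xml itself, so the only remaining change 
would be the matching entry in the zone's /etc/hosts.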

Advice?

TIA,
Phil

Phil Freund
Lead Systems and Storage Administrator
Kichler Lighting
 
 