Hi Phil,

Advice below...

Phil Freund wrote:
I am constructing a fairly complex environment to provide high availability 
using zones (containers), IPMP, Veritas Cluster Server, Veritas Volume Manager, 
and Veritas File System. In this implementation, I am concerned both about the 
high availability aspects and the maintainability aspects. I believe I have 
solved most of the problems but I just realized that I need some advice 
relative to the zone creation for the clustering environment.

What I believe to be true based on discussions with both Sun and Symantec:
- IPMP has no known issues. Network HA requires use of an IPMP managed network 
interface in the zone definition.
- DNS is used in both the global and non-global zones. (site requirement)
- The Release 4.1 versions of Veritas Volume Manager, File System, and Cluster 
Server must be used on Solaris 10.
- Zone roots must be set up on UFS filesystems to allow them to be mounted for 
patching and/or OS upgrades.
- Zone roots can be set up on SAN LUNs as long as the HBA drivers are in the 
Solaris 10 DVD miniroot. This allows for OS upgrades. We are using the 
Sun-branded QLogic HBAs, which are handled by the Leadville driver; that driver 
is part of the miniroot, so this criterion is covered.
- In order to have multi-path failover on all of the filesystems, defined both 
on Solaris slices and Veritas Volume Manager volumes, we need to turn on MPxIO 
and disable Veritas DMP. Otherwise the zone roots would not be protected from a 
path failure.
- Identically named zones must be set up on each of the target nodes of a VCS 
HA cluster in order for the nonglobal-zone-based applications to fail over 
under VCS. (per Symantec)
- The zone roots must be on locally mounted disk on each node to allow them to 
be patched or upgraded while keeping the application(s) available on a 
different target node. (per Symantec)
- Performing patch or upgrade maintenance on a server hosting the zone requires 
that the zone and its application(s) be failed over to a different target node 
and that the cluster resource group be frozen. Once the maintenance is 
complete and the patched/upgraded zone is rebooted (and then shut back down), 
the cluster resource group can be unfrozen. This effort is then repeated on 
each of the target nodes until all of the servers have the same patches and/or 
OS upgrades installed. (per Symantec)
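As a rough sketch, that maintenance sequence maps onto VCS commands roughly as 
follows. The service group name (zone_sg) and node name (node2) are invented 
placeholders, not from this environment:

```shell
# Move the zone's service group to the other node, then freeze it so
# VCS leaves it alone during maintenance. Names are examples only.
hagrp -switch zone_sg -to node2

# A persistent freeze requires the cluster config to be writable
haconf -makerw
hagrp -freeze zone_sg -persistent
haconf -dump -makero

# ... patch/upgrade this node, boot the zone once, shut it back down ...

haconf -makerw
hagrp -unfreeze zone_sg -persistent
haconf -dump -makero
```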

Here's where I need advice:

Each of the failover-capable zones must have the same name on each hardware node, which in turn implies that they must use the same IP address, at least when being set up initially. Once the zones are set up, is there any reason the individual zone configurations cannot be modified such that each of the zones has a different maintenance IP address on each hardware node? There would be both a DNS entry for the failover zone name that matches a virtual address managed by VCS and maintenance names for the zone addresses on each target node to provide network accessibility for testing after patching/upgrades. (Rather like the maintenance addresses in an IPMP configuration.) If this is true, are the only places that would need the changes implemented the zone xml file and the entries in the zone's /etc/hosts (and hostname.xxx) files?

Advice?

There is no reason that you cannot (or should not) add a second IP addr to each zone, as you describe. This IP addr would be 'static', i.e. it would not move around during failover (or any other operation). It can be on the same NIC as the virtual address or on another NIC - although IPMP restrictions might limit your choices; it's been a while since I've used IPMP.

I would do this with zonecfg, either when the zone is being created or afterward. If you can do that, the zones framework will take care of the rest.
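A minimal zonecfg sketch of adding that second, node-specific address (the zone 
name, address, and NIC below are made-up placeholders):

```shell
# Add a static maintenance address to an existing zone.
# myzone, 192.168.10.21, and bge0 are placeholders for your values.
zonecfg -z myzone
add net
set address=192.168.10.21/24
set physical=bge0
end
commit
exit
```

After a zone reboot, the zones framework plumbs the address itself; no manual 
ifconfig is needed inside the zone.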

I seem to recall that Symantec's zone-creation process requires manual editing of the zone xml file, which is unfortunate as it puts you in shaky Solaris-support territory. We have discussed this with them, and understand that there isn't a good alternative. If manual editing is required, it sounds like you know what you're doing - the xml file, /etc/inet/hosts and hostname.xxx files will need 'help.'
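For example, the zone's /etc/inet/hosts would carry both names; all names and 
addresses here are invented for illustration:

```shell
# /etc/inet/hosts inside the zone -- example entries only
192.168.10.100  appzone          # failover name, virtual IP managed by VCS
192.168.10.21   appzone-node1    # per-node maintenance name on this node
```

The hostname.xxx file on each node would then reference that node's maintenance 
name, so each node's copy of the zone differs only in those entries.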


--
--------------------------------------------------------------------------
Jeff VICTOR              Sun Microsystems            jeff.victor @ sun.com
OS Ambassador            Sr. Technical Specialist
Solaris 10 Zones FAQ:    http://www.opensolaris.org/os/community/zones/faq
--------------------------------------------------------------------------
_______________________________________________
zones-discuss mailing list
zones-discuss@opensolaris.org