Re: [zones-discuss] Suggestions for patching zones on high-availability systems

2007-07-10 Thread Enda O'Connor
Mike Gerdts wrote:
> On 7/10/07, Dave Bevans <[EMAIL PROTECTED]> wrote:
>   
>>  Hi,
>>
>>   IHAC with the following questions...
>>
>> Describe the problem: We would like to know if
>> you have any suggestions for patching zones on
>> high-availability systems, which wouldn't
>> necessarily have their LOFS mount points
>> available unless HA is up, or without mounting
>> them manually before patching.  For example, we
>> have Veritas Cluster Server mount multiple
>> volumes within a directory structure, and these
>> are provided by VCS when the cluster comes up.
>> These are then mounted in local zones via LOFS
>> when they boot.  If we try to patch and these
>> are not available, the patching fails.  Is
>> there any possible way to boot the local zone
>> and have it ignore application-specific LOFS
>> mounts without changing the configuration of
>> the zone?  Or is there a script to populate the
>> directory structure when the disk isn't
>> mounted, to fake it out?  I will write a script
>> if there is nothing available, but I'd rather
>> not reinvent the wheel if you already have a
>> solution. :)
>> 
>
> Instead of using lofs mounts in zonecfg, add a VCS Mount resource to
> perform the mount.  Make it a child of the zone resource.  On a
> related note, you want to be sure that VCS is controlling the IP
> address as well.  If you don't, you may end up with the same IP
> address plumbed on multiple machines at some point in the future when
> you boot the standby zone into single-user mode to make other
> non-patch changes.
>
> The following ASCII art roughly outlines how the dependency tree
> should look (each parent requires the children below it):
>
>              Application
>           (e.g. multi-user)
>            /       |      \
>       Mount      Mount     Mount ...
>       (NFS)      (lofs)    (lofs)
>         |           \       /
>         IP           \     /
>          \            \   /
>           `---------- Zone       Mount
>                                  (SAN)
>
> (each lofs Mount also requires the SAN Mount that provides its source)
>
>
> There are two paths you can take with the above layout.  If you are
> using SMF to start applications in the zone, you need to keep them
> from starting until the IP is up and all the mounts are complete.  To
> achieve that, set the default run level of the zone to single-user,
> then create an application agent (topmost resource) that runs "init
> 3".
>
> If you are using VCS to start applications, then the topmost
> Application resource (or Oracle, etc.) should work as is.
>
> With the above layout, you can even have "the same" (not really) zone
> booted all the time on multiple servers.
>
>   
>>  Any ideas?
>> 
>
> One other...
>
> You could create an empty directory hierarchy under the SAN mount
> point.  Suppose the SAN space mounts at /san and a zone needs stuff
> from /san/data/zone1.
>
> On the root file system of the server:
>
> mkdir -p /san/data/zone1
>
> If the zone is down, the SAN file system (that contains data/zone1)
> can be mounted at /san, and then the zone can boot and lofs mount
> /san/data/zone1 from the SAN.  If the zone is being booted only for
> maintenance (/san not mounted), the zone will lofs mount
> /san/data/zone1 from the root file system.
>
> In some respects, this seems easier.  In other respects, it invites
> confusion: /san/data/zone1 on the root file system could eventually
> get writes that it shouldn't.
>
> Mike
>
>   

Basically, if you use the latest rev of 119254/119255 (actually any
rev above 14), zones are not booted; they are put into a scratch-zone
state instead.  The patchadd output says it is booting the zones, but
that is slightly misleading: it actually uses a facility called
scratch zones.  I believe this will work for your case.

Enda


Re: [zones-discuss] Suggestions for patching zones on high-availability systems

2007-07-10 Thread Mike Gerdts
On 7/10/07, Dave Bevans <[EMAIL PROTECTED]> wrote:
>
>  Hi,
>
>   IHAC with the following questions...
>
> Describe the problem: We would like to know if
> you have any suggestions for patching zones on
> high-availability systems, which wouldn't
> necessarily have their LOFS mount points
> available unless HA is up, or without mounting
> them manually before patching.  For example, we
> have Veritas Cluster Server mount multiple
> volumes within a directory structure, and these
> are provided by VCS when the cluster comes up.
> These are then mounted in local zones via LOFS
> when they boot.  If we try to patch and these
> are not available, the patching fails.  Is
> there any possible way to boot the local zone
> and have it ignore application-specific LOFS
> mounts without changing the configuration of
> the zone?  Or is there a script to populate the
> directory structure when the disk isn't
> mounted, to fake it out?  I will write a script
> if there is nothing available, but I'd rather
> not reinvent the wheel if you already have a
> solution. :)

Instead of using lofs mounts in zonecfg, add a VCS Mount resource to
perform the mount.  Make it a child of the zone resource.  On a
related note, you want to be sure that VCS is controlling the IP
address as well.  If you don't, you may end up with the same IP
address plumbed on multiple machines at some point in the future when
you boot the standby zone into single-user mode to make other
non-patch changes.
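
As a rough sketch, and assuming your VCS version's Mount agent
supports FSType lofs, the zonecfg lofs entry could be replaced with
something like the following (the group, resource, and path names are
made up for illustration):

  # Hypothetical names: zone1_grp (service group), zone1_zone (Zone
  # resource), zone1_data_mnt (new Mount resource), san_data_mnt (the
  # Mount resource for the underlying SAN file system).
  hares -add zone1_data_mnt Mount zone1_grp
  hares -modify zone1_data_mnt MountPoint "/zones/zone1/root/data"
  hares -modify zone1_data_mnt BlockDevice "/san/data/zone1"
  hares -modify zone1_data_mnt FSType lofs
  hares -modify zone1_data_mnt FsckOpt %-n   # % escapes the leading dash
  hares -modify zone1_data_mnt Enabled 1
  # Parent requires child: the lofs mount waits for the zone and the
  # SAN mount to be online.
  hares -link zone1_data_mnt zone1_zone
  hares -link zone1_data_mnt san_data_mnt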

The following ASCII art roughly outlines how the dependency tree
should look (each parent requires the children below it):

             Application
          (e.g. multi-user)
           /       |      \
      Mount      Mount     Mount ...
      (NFS)      (lofs)    (lofs)
        |           \       /
        IP           \     /
         \            \   /
          `---------- Zone       Mount
                                 (SAN)

(each lofs Mount also requires the SAN Mount that provides its source)


There are two paths you can take with the above layout.  If you are
using SMF to start applications in the zone, you need to keep them
from starting until the IP is up and all the mounts are complete.  To
achieve that, set the default run level of the zone to single-user,
then create an application agent (topmost resource) that runs "init
3".

If you are using VCS to start applications, then the topmost
Application resource (or Oracle, etc.) should work as is.

With the above layout, you can even have "the same" (not really) zone
booted all the time on multiple servers.

>
>  Any ideas?

One other...

You could create an empty directory hierarchy under the SAN mount
point.  Suppose the SAN space mounts at /san and a zone needs stuff
from /san/data/zone1.

On the root file system of the server:

mkdir -p /san/data/zone1

If the zone is down, the SAN file system (that contains data/zone1)
can be mounted at /san, and then the zone can boot and lofs mount
/san/data/zone1 from the SAN.  If the zone is being booted only for
maintenance (/san not mounted), the zone will lofs mount
/san/data/zone1 from the root file system.
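
For reference, the lofs entry that both boots rely on could look like
this in zonecfg (a sketch reusing the hypothetical /san/data/zone1
path; /data is an arbitrary mount point inside the zone):

  zonecfg -z zone1
  add fs
  set dir=/data
  set special=/san/data/zone1
  set type=lofs
  end
  commit
  exit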

In some respects, this seems easier.  In other respects, it invites
confusion: /san/data/zone1 on the root file system could eventually
get writes that it shouldn't.

Mike

-- 
Mike Gerdts
http://mgerdts.blogspot.com/