Steve,

Thanks for looking this over, responses in-line.

Steve Lawrence wrote:
>> During the zone installation and after the zone is installed, the zone's ZBE1
>> dataset is explicitly mounted by the global zone onto the zone root.  (Note:
>> the dataset is a ZFS legacy mount, so the zones infrastructure itself must
>> manage the mounting.  It uses the dataset properties to determine which
>> dataset to mount, as described below.)  For example:
>>
>>      # mount -F zfs rpool/export/zones/z1/rpool/ZBE1 /export/zones/z1/root
>>
>> The rpool dataset (and by default, its child datasets) will be implicitly
>> delegated to the zone.  That is, the zonecfg for the zone does not need to
>> explicitly mention this as a delegated dataset.  The zones code must be
>> enhanced to delegate this automatically:
> 
> Is there any requirement to have a flag to disallow a zone from doing zfs/BE
> operations?  I'm not sure when an admin may want to make this restriction.

There has been no discussion about disallowing a zone from installing sw,
which is what I think you are asking for.  Would you want that to
be a general new feature or specific to ipkg branded zones?

>>      rpool/export/zones/z1/rpool
>>
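(To spell out what the implicit delegation amounts to: it should be roughly
the same as if the admin had added the dataset by hand in zonecfg, as in the
sketch below, except that no explicit dataset resource will appear in the
zone's configuration.  The exact mechanism in the zones code may differ.)

     # zonecfg -z z1
     zonecfg:z1> add dataset
     zonecfg:z1:dataset> set name=rpool/export/zones/z1/rpool
     zonecfg:z1:dataset> end
     zonecfg:z1> exit
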
>> Once the zone is booted, running a sw management operation within the zone
>> does the equivalent of the following sequence of commands:
>> 1) Create the snapshot and clone
>>      # zfs snapshot rpool/export/zones/z1/rpool/ZBE1@<snap>
>>      # zfs clone rpool/export/zones/z1/rpool/ZBE1@<snap> \
>>        rpool/export/zones/z1/rpool/ZBE2
>> 2) Mount the clone and install sw into ZBE2
>>      # mount -F zfs rpool/export/zones/z1/rpool/ZBE2 /a
>> 3) Install sw
>> 4) Finish
>>      # umount /a
>>
>> Within the zone, the admin then makes the new BE active by the equivalent of
>> the following sequence of commands:
>>
>>      # zfs set org.opensolaris.libbe:active=off \
>>        rpool/export/zones/z1/rpool/ZBE1
>>      # zfs set org.opensolaris.libbe:active=on \
>>        rpool/export/zones/z1/rpool/ZBE2
>>
>> Note that these commands will not need to be explicitly performed by the
>> zone admin.  Instead, a utility such as beadm does this work (see issue #2).
> 
> Inside a zone, beadm should "fix" this.

This is already noted here.
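
Right -- inside the zone this would presumably end up being the usual beadm
flow, something like the sketch below (the BE name the zone admin sees may
not match the dataset name, so "ZBE2" here is only illustrative):

     # beadm list
     # beadm activate ZBE2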

> From the global zone, beadm should be able to "fix" a (halted?) zone in this
> state so that it may be booted.

I am not sure that is possible.  Since a sysadmin made a deliberate
effort to get the zone into this state, it might be difficult for a
tool to undo it automatically in a reliable way.  I don't see this as
a requirement, since you can always manually undo whatever the sysadmin
did to set the properties incorrectly, assuming you can figure out
which ZBEs are which.
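
For example, from the global zone you could inspect the BE properties on the
zone's datasets and reset them by hand, along these lines (dataset names are
just the ones from the example above):

     # zfs get -H -r -o name,value org.opensolaris.libbe:active \
           rpool/export/zones/z1/rpool
     # zfs set org.opensolaris.libbe:active=off rpool/export/zones/z1/rpool/ZBE1
     # zfs set org.opensolaris.libbe:active=on rpool/export/zones/z1/rpool/ZBE2

That leaves exactly one ZBE marked active, so the zone can boot again.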

> I think this means that the global zone should be able to do some explicit
> beadm operations on a zone (perhaps only when it is halted?), in addition
> to the automatic ones that happen when the GBE is manipulated.
> 
>> When the zone boots, the zones infrastructure code in the global zone will
>> look for the zone's dataset that has the "org.opensolaris.libbe:active"
>> property set to "on" and explicitly mount it on the zone root, as with the
>> following commands to mount the new BE based on the sw management task just
>> performed within the zone:
>>
>> # umount /export/zones/z1/root
>> # mount -F zfs rpool/export/zones/z1/rpool/ZBE2 /export/zones/z1/root
>>
>> Note that the global zone is still running GBE1 but the non-global zone is
>> now using its own ZBE2.
>>
>> If there is more than one dataset with a matching
>> "org.opensolaris.libbe:parentbe" property and the
>> "org.opensolaris.libbe:active" property set to "on", the zone won't boot.
>> Likewise, if none of the datasets have this property set.
>>
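(To make the boot-time selection concrete: the check is roughly equivalent to
the sketch below.  This is illustrative only -- the real code also matches the
"org.opensolaris.libbe:parentbe" property against the current global BE, and
it won't literally be a shell pipeline.)

     # zfs get -H -r -o name,value org.opensolaris.libbe:active \
           rpool/export/zones/z1/rpool | awk '$2 == "on" { print $1 }'

If that yields exactly one dataset, it is mounted on the zone root; zero or
more than one matching dataset is the error case described above.
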
>> When global zone sw management takes place, the following will happen.
>>
>> Only the active zone BE will be cloned.  This is the equivalent of the
>> following commands:
>>
>>      # zfs snapshot -r rpool/export/zones/z1/rpool/ZBE2@<snap>
>>      # zfs clone rpool/export/zones/z1/rpool/ZBE2@<snap> \
>>        rpool/export/zones/z1/rpool/ZBE3
>>
>> (Note that this is using the zone's ZBE2 dataset created in the previous
>> example to create a zone ZBE3 dataset, even though the global zone is
>> going from GBE1 to GBE2.)
>>
>> When the new global zone BE is activated and the system reboots, the zone
>> root must be explicitly mounted by the zones code:
>>
>>      # mount -F zfs rpool/export/zones/z1/rpool/ZBE3 /export/zones/z1/root
>>
>> Note that the global zone and non-global zone BE names advance independently
>> as sw management operations are performed, and as the different BEs are
>> activated, in the global and non-global zones respectively.
>>
>> One concern with this design is that the zone has access to its datasets that
>> correspond to a global zone BE which is not active.  The zone admin could
>> delete the zone's inactive BE datasets which are associated with a non-active
>> global zone BE, causing the zone to be unusable if the global zone boots back
>> to an earlier global BE.
>>
>> One solution is for the global zone to turn off the "zoned" property on
>> the datasets that correspond to a non-active global zone BE.  However, there
>> seems to be a bug in ZFS, since these datasets can still be mounted within
>> the zone.  This is being looked at by the ZFS team.  If necessary, we can
>> work around this by setting the mountpoint and turning off the "canmount"
>> property, although a ZFS fix is the preferred solution.
>>
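(For reference, the workaround mentioned above would amount to something like
the following from the global zone, applied to the ZBE datasets tied to a
non-active global BE.  ZBE1 is only an example name here, and whether this is
actually enough to block mounts from inside the zone is part of what is being
checked with the ZFS team.)

     # zfs set canmount=off rpool/export/zones/z1/rpool/ZBE1
     # zfs set mountpoint=none rpool/export/zones/z1/rpool/ZBE1
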
>> Another concern is that the zone must be able to promote one of its datasets
>> that is associated with a non-active global zone BE.  This can occur if the
>> global zone boots back to one of its earlier BEs.  This would then cause an
>> earlier non-global zone BE to become the active BE for that zone.  If the
>> zone then wants to destroy one of its inactive zone BEs, it needs to be able
>> to promote any children of that dataset.  We must make sure that any
>> restrictions we use with the ZFS "zoned" attribute don't prevent this.  This
>> may require an enhancement in ZFS itself.
> 
> I think it would be generally useful if zfs had a "destroy and promote as
> necessary" operation.  Otherwise, this will just be re-implemented by various
> higher level software in annoyingly different ways.

Yes, we may need some zfs enhancements here.  We'll have to see as this
moves forward, but this sounds like a good idea.
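
Right now a consumer that wants to destroy a zone BE with a dependent clone
has to do the equivalent of the following by hand (dataset names are from the
earlier example and the snapshot name is illustrative):

     # zfs promote rpool/export/zones/z1/rpool/ZBE3
     # zfs destroy rpool/export/zones/z1/rpool/ZBE2
     # zfs destroy rpool/export/zones/z1/rpool/ZBE3@<snap>

A single "destroy and promote as necessary" option in zfs could wrap that up
so each consumer doesn't reinvent it.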

Thanks again,
Jerry