There are some interesting ordering issues with respect to the steps 
required for this configuration (a rough command sketch follows the list):

1. The dataset's mount point must be within the zone's root path for it 
to be mounted read-write within that zone (you can't use lofs).

2. The dataset must not be mounted by the global zone at the time the 
(restricted) zone is booted; otherwise the zone boot fails.

3. The default mount point should be changed after the dataset is 
created, but before assigning the dataset to the zone.

4. The mount point can be changed from within that zone after it is 
mounted (but only to a pathname within the zone).

5. When you specify that the dataset belongs to the zone (via zonecfg), 
it is mounted by the zone when the SMF service filesystem/local runs. 
This happens after the "zoneadm boot" command completes.

6. The sharing of the mounted filesystem must be done from the global 
zone since labeled zones can't be NFS servers.
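
To make the ordering concrete, here is a rough, untested sketch of the 
sequence as run from the global zone. The dataset name "zone/data", the 
zone name "restricted", and the paths are taken from Danny's example 
below; adjust them for your configuration:

    # Create the dataset and put its mount point under the zone's root
    # (steps 1 and 3 above).
    zfs create zone/data
    zfs set mountpoint=/zone/restricted/root/data zone/data

    # Make sure the global zone isn't holding it mounted when the zone
    # boots (step 2 above).
    zfs unmount zone/data

    # Delegate the dataset to the zone (step 5 above).
    zonecfg -z restricted 'add dataset; set name=zone/data; end'

    # Boot the zone; its filesystem/local service mounts the dataset.
    zoneadm -z restricted boot

    # Later, once the dataset is really mounted, share it from the
    # global zone (step 6 above).
    share -F nfs -o rw /zone/restricted/root/data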

When I looked at this more closely (after my second posting) I realized 
that it worked for me by accident (sorry). I did the share command by 
hand after I'd verified that the dataset was properly mounted in the 
restricted zone. But then I told you to edit the dfstab file without 
verifying that would work. As you have reported, that doesn't actually work.

The problem is that the share command in the dfstab file is processed by 
the zoneadm command (which is running in the global zone) and normally 
occurs after all filesystems are mounted (or so I thought). However, in 
the case of zfs datasets, they actually get mounted later (by the zone 
itself, not by zoneadm), so you wind up sharing the mount point before 
it is actually mounted. That makes the mount point "busy" and causes 
the SMF service for mounting local filesystems to fail. The result is 
that the zone is unusable.

The obvious workaround is to remove the entry from dfstab and do the 
share later in the global zone. I don't have a very elegant solution for 
automating this. All I can think of is a script which does something 
like this:

    # Poll until the zone has mounted the dataset, then share it.
    MP=your-global-zone-mount-path
    NOT_MOUNTED=1
    while [ $NOT_MOUNTED -ne 0 ]; do
        mount -p | grep "$MP" >/dev/null
        NOT_MOUNTED=$?
        sleep 1
    done
    share $MP

I haven't explored other solutions, but it may be possible to express 
interest in an SMF property to determine when the zone's local 
filesystem service has completed.
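
For example (an untested sketch, and simpler than registering for SMF 
notifications): from the global zone you could poll the state of the 
zone's filesystem/local service via zlogin, assuming the zone is named 
"restricted":

    # Wait until the zone's local-filesystem service is online, then
    # share the mount point from the global zone.
    while [ "`zlogin restricted svcs -H -o state filesystem/local`" != "online" ]; do
        sleep 1
    done
    share /zone/restricted/root/data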

It has been suggested that the share attribute could be specified via 
the zfs(1M) sharenfs property, but this won't work since it would be 
interpreted in the labeled zone instead of the global zone. Similarly, 
sharemgr doesn't seem to provide any special support for this case.

Another source of confusion is the specification of the mount point. If 
you are setting it from the global zone, you need to prefix it with the 
zone's root path. But once the zone is running, it can be set from 
within the zone; in that case, the zone's root path should not be 
specified. Otherwise you get that string repeated in the path, which is 
not what you want.
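
For example, using the names from Danny's example below (illustrative only):

    # From the global zone: prefix the path with the zone's root.
    zfs set mountpoint=/zone/restricted/root/data zone/data

    # From inside the running zone: no root-path prefix.
    zfs set mountpoint=/data zone/data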

I'm sorry this turned out to be a bit more complicated than I thought at 
first. But it can be made to work.

--Glenn

Danny Hayes wrote:
> - I set the mount point as follows.
>
> zfs set mountpoint=/zone/restricted/root/data zone/data
>
> - I then added the dataset to the restricted zone using zonecfg. The full 
> path to the dataset is now /zone/restricted/root/zone/restricted/root/data. I 
> am not sure if that is what you intended, but it is a result of adding it as 
> a dataset to the zone after setting the mountpoint.
>
> - I updated the /zone/restricted/etc/dfs/dfstab with the following line.
>
> /usr/bin/share -F nfs -o rw /zone/restricted/root/zone/data
>
> - During reboot I receive the following error.
>
> cannot mount 'zone/data': mountpoint or dataset is busy
> svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: 
> exit status 1
> Oct 31 14:43:08 svc.startd[19960]: svc:/system/filesystem/local:default: 
> Method "/lib/svc/method/fs-local" failed with exit status 95.
> Oct 31 14:43:08 svc.startd[19960]: system/filesystem/local:default failed 
> fatally: transitioned to maintenance (see 'svcs -xv' for details)
>
> - This is exactly the same problem that prompted the original message. 
> Services fail during boot, which prevents opening a console. This only occurs 
> when you try to share the dataset. If you remove the line from 
> /zone/restricted/etc/dfs/dfstab and reboot the zone everything works fine. 
> Any ideas what I am doing wrong?
