Jerry Jelinek wrote:
> Steffen Weiberle wrote:
>> I have a customer who is restricting NFS access to their data. Thus,
>> instead of authorizing each non-global zone to be a client, they would
>> like to authorize only the global zone and then lofs-mount the
>> NFS-mounted file system into the non-global zone.
>> This works manually, but fails when configured via zonecfg.
>> globalzone# zoneadm -z amp1 boot
>> cannot verify fs /net/nfsserver/export/rw/code: NFS mounted file-system.
>>          A local file-system must be used.
>> zoneadm: zone amp1 failed to verify
>> Per this old email, there is some reason for preventing this.
>> Does anybody know the details, and does doing this manually put the
>> non-global zones or the system at risk?
> Some of the comments from the following RFE apply to this case
> as well:
> 4963321 RFE: hosting root filesystems for zones on NFS servers

Hi Jerry,

Thanks. Without having looked at the RFE itself, some comments on the
excerpts below.

> Since the comments aren't public, I pasted the relevant stuff here:
>     They're primarily necessary due to (among other things) the fact
>     that an NFS operation may translate to an over-the-wire call, and
>     in fact may open a TCP/UDP connection to the server (if the mount
>     has been inactive for long enough, or there is a failover, or a
>     number of other reasons).
>     While the semantics of process P in zone A doing a read(2) that
>     causes zone B to do network activity on its behalf merely seemed
>     odd (recall that zones have distinct network identities), the
>     scenario of zone A performing a mount, and subsequently zone B
>     opening a new connection to the server such that the client appears
>     to have "migrated" from zone A to zone B seemed downright wrong.

I don't follow the A-to-B argument. Zone A (the global zone, or the 
kernel, which may be best identified with the global zone) is always 
making the requests, and hiding the fact that it is B, or C, or D... who 
may be consuming or creating the data.

That is similar to how lofs seems to work for local file systems. My 
understanding when loopback-mounting a file system into a zone is that, 
in the end, the global zone has the definitive view of the data. It does 
seem more direct with a local file system, where no other processes are 
involved.
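
For illustration, the supported local case would be configured something
like the following; the zone name matches the example above, and
/export/code is a hypothetical directory in the global zone:

globalzone# zonecfg -z amp1
zonecfg:amp1> add fs
zonecfg:amp1:fs> set dir=/code
zonecfg:amp1:fs> set special=/export/code
zonecfg:amp1:fs> set type=lofs
zonecfg:amp1:fs> end
zonecfg:amp1> exit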

In contrast, when I delegate a ZFS file system to a zone, only that zone 
can interact with the data. I have explicitly chosen to remove global 
knowledge of and access to that file system. That is similar to when I 
mount an NFS file system in a zone.
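
Again for illustration, with a hypothetical dataset name, delegation
looks like:

globalzone# zonecfg -z amp1
zonecfg:amp1> add dataset
zonecfg:amp1:dataset> set name=tank/amp1data
zonecfg:amp1:dataset> end
zonecfg:amp1> exit

Once the zone boots, the zone administrator manages tank/amp1data
(snapshots, child datasets, properties) directly.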

>     Besides this is the fact that NFS talks to various userland
>     entities (such as statd and lockd) in the course of normal
>     operation; a process in zone A using a mount belonging to zone B
>     would naturally be unable to communicate with these processes due
>     to our restrictions on inter-zone communication.

Yes, the global zone is acting as a proxy, intentionally, when I choose 
to make this mount available to a non-global zone. Thus all access to 
the origin server is from the perspective of the global zone.
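
For reference, the manual sequence I mean is roughly the following; the
zone path /zones/amp1 is an assumption on my part, and the /code
directory must already exist inside the zone's root:

globalzone# mount -F lofs /net/nfsserver/export/rw/code \
    /zones/amp1/root/code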

>     Also recall that zone A and B don't necessarily share the same
>     UID-to-user mappings, and have different host keys and principals
>     which would cause severe problems when attempting to use something
>     like secure or kerberized NFS.

Good point regarding secure NFS. But in all other cases, the mapping of 
IDs is already at risk of differing across zones on any loopback mount, 
whether the backing file system is local or NFS.

Regarding the behavior I saw, it seems to come down to where the 
difference between manual and automatic mounts should be handled. Should 
mount have tested for this and failed? mount does not directly know that 
the mount point is inside a zone. zoneadm[d] knows that it is acting on 
behalf of a zone, and denies an operation that I can perform manually. 
Is the operation of the system any different in the end whether the 
mount is made automatically or manually, or is it just easier to ignore 
the manual case? And if this were using secure NFS, would the manual 
loopback mount still work?
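
For reference, the zonecfg entry that triggers the verification failure
above is essentially this; dir=/code is my guess at the in-zone mount
point, and the special path is the one from the error message:

zonecfg:amp1> add fs
zonecfg:amp1:fs> set dir=/code
zonecfg:amp1:fs> set special=/net/nfsserver/export/rw/code
zonecfg:amp1:fs> set type=lofs
zonecfg:amp1:fs> end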

[Oddly, I just came across this difference again today, in some work I 
had done a while back. I had created the mount manually, and had also 
configured the zone to do it on its next reboot, which happened today; 
only now was I reminded that this operation should[?] not be done.]


> Jerry
