On 5/4/07, Ramesh Mudradi <[EMAIL PROTECTED]> wrote:
Can someone shed some light on the pros/cons of the various ways of mounting a file
system onto a local zone? I believe there are four different ways to mount a
device/file system on a local zone as per the below URL, but it is not very clear
how they differ.
1. Create a file system in the global zone and mount it in the local
zone as a loopback file system (lofs).
I would use this only if one or more of the following is true:
- The file system will be shared by multiple non-global zones
- The file system will be shared by the global zone and one or more
non-global zones
- The file system is not supported directly in a non-global zone (e.g.
Veritas Cluster File System)
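For reference, a lofs mount is configured with zonecfg roughly like this
(the zone name and paths here are just examples):

    # zonecfg -z myzone
    zonecfg:myzone> add fs
    zonecfg:myzone:fs> set dir=/export/data
    zonecfg:myzone:fs> set special=/export/data
    zonecfg:myzone:fs> set type=lofs
    zonecfg:myzone:fs> end
    zonecfg:myzone> exit

Here "special" is the path in the global zone and "dir" is where it shows
up inside the zone; the same global-zone directory can be lofs-mounted
into several zones.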
2. Create a file system in the global zone and mount it in the local
zone as UFS.
This is the way that I would do it by default. The one optimization
that I typically do is use an SVM soft partition that is as small as
possible so that space is available for other file systems,
potentially in other zones. Soft partitions and UFS file systems on
top of them can be grown online. They cannot shrink (without backup +
restore).
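The setup described above would look something like this (zone name,
metadevice, and mount point are made-up examples):

    # zonecfg -z myzone
    zonecfg:myzone> add fs
    zonecfg:myzone:fs> set dir=/opt/app
    zonecfg:myzone:fs> set special=/dev/md/dsk/d100
    zonecfg:myzone:fs> set raw=/dev/md/rdsk/d100
    zonecfg:myzone:fs> set type=ufs
    zonecfg:myzone:fs> end

and to grow the soft partition and file system online later, from the
global zone:

    # metattach d100 1g
    # growfs -M /zones/myzone/root/opt/app /dev/md/rdsk/d100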
3. Export a device from the global zone to a local zone and mount it.
I would only do this if I want the non-global zone administrator to be
able to have raw access to the device for some reason. Perhaps so
that the ngz admin can newfs it? I thought there were other problems
with this approach as well.
UFS snapshots would be another interesting use, but I am pretty sure
that this will not work because you would also need /dev/fssnapctl,
/dev/fssnap/, and would likely need the ability to create device nodes
in the non-global zone.
I've never found a production use for this configuration.
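For completeness, exporting the block and raw devices would look
roughly like this (device paths are examples):

    # zonecfg -z myzone
    zonecfg:myzone> add device
    zonecfg:myzone:device> set match=/dev/dsk/c1t0d0s6
    zonecfg:myzone:device> end
    zonecfg:myzone> add device
    zonecfg:myzone:device> set match=/dev/rdsk/c1t0d0s6
    zonecfg:myzone:device> end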
4. The device is already made available to the local zone. Mount a UFS
file system directly into the local zone's directory structure.
This is the same as 3, but part of the work was already done.
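In that case the ngz admin just does the newfs/mount from inside the
zone, e.g. (device path is an example):

    myzone# newfs /dev/rdsk/c1t0d0s6
    myzone# mount -F ufs /dev/dsk/c1t0d0s6 /data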
5. (not mentioned) Allocate a zfs dataset
If ZFS is being used on the server, a zfs dataset can be delegated to
the zone. This allows the ngz admin to do storage administration
(snapshots, creating zfs file systems within the dataset, etc.), and the
gz admin can shuffle disk allocation between datasets (shrink one, grow
another) without disruption, with the caveat that you can't take away
used space without deleting data.
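A delegated dataset is configured like this (the pool/dataset and zone
names are examples):

    # zfs create tank/zones/myzone-data
    # zonecfg -z myzone
    zonecfg:myzone> add dataset
    zonecfg:myzone:dataset> set name=tank/zones/myzone-data
    zonecfg:myzone:dataset> end

Inside the zone, the ngz admin can then create children and snapshots:

    myzone# zfs create tank/zones/myzone-data/home
    myzone# zfs snapshot tank/zones/myzone-data/home@backup

and the gz admin can move space between datasets by adjusting quotas,
e.g.:

    # zfs set quota=10g tank/zones/myzone-data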
zones-discuss mailing list