Glenn Faden wrote:
> Darren J Moffat wrote:
>> This sounds reasonable to me, I don't see any downsides of doing this.
>
> OK.
>
>> I've been thinking recently about a semi-related topic. I've been
>> wondering if there is value in having a set of ZFS filesystem (not
>> file) properties that give the min and max labels at which a file
>> system should be visible (maybe it is just one property, which is the
>> label). Since for ZFS we store mountpoints and share information as
>> properties, it seems to make sense that for TX we would also store
>> the mount label as well.
>
> The mountpoint associated with a ZFS dataset has an implied label based
> on the zone configuration databases. I think you are suggesting to make
> this more robust so that we could verify that the label of the
> mountpoint (as returned by getflabel before mounting the dataset) is
> the same as or dominates the label stored as an attribute of the
> dataset.
Not just more robust but scalable. I'm thinking of data filesystems
stored in ZFS where there are lots of them (e.g. every user has a ZFS
filesystem of their own at each label). Consider 1000 users with home
directories on a given system, each user having 4 labels in their
range. That is 4000 separate ZFS filesystems. Let's also say that
snapshot/clone/create are delegated in ZFS to the individual users. I
don't want any of that information in the zonecfg databases.

The idea is NOT to put the information about which zones see a given
filesystem into the zone configuration files, but to store it with the
filesystem itself.

I might be missing something here, but I'm thinking about big
deployments with lots and lots of ZFS filesystems (because they are
really cheap!), all out of a single pool.

--
Darren J Moffat
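The dominance check described above (the mountpoint's label must equal
or dominate the label stored on the dataset) can be sketched in a few
lines. This is a toy model only: the Label class, its fields, and the
ok_to_mount helper are illustrative assumptions, not the real Trusted
Extensions API, which would use getflabel and the system's label
encodings rather than this simplified level-plus-categories lattice.

```python
# Minimal sketch of an MLS label-dominance check, as a stand-in for
# the real Trusted Extensions comparison. All names here are
# hypothetical; a real implementation would call getflabel on the
# mountpoint and read the label property stored on the ZFS dataset.

from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    """A toy MLS label: a sensitivity level plus a set of categories."""
    level: int
    categories: frozenset

    def dominates(self, other: "Label") -> bool:
        # L1 dominates L2 iff its level is >= and its category set
        # is a superset. Equal labels dominate each other.
        return (self.level >= other.level
                and self.categories >= other.categories)

def ok_to_mount(mountpoint_label: Label, dataset_label: Label) -> bool:
    """Allow the mount only if the mountpoint's label is the same as
    or dominates the label recorded with the dataset."""
    return mountpoint_label.dominates(dataset_label)

confidential = Label(2, frozenset({"eng"}))
internal = Label(1, frozenset())

print(ok_to_mount(confidential, internal))   # higher label mounting lower data
print(ok_to_mount(internal, confidential))   # would be rejected
```

Storing the label with the dataset, as proposed, would let this check
run per filesystem at mount time with no per-dataset entries in the
zone configuration databases, which is what makes the 4000-filesystem
case manageable.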
