On Wed, Sep 16, 2009 at 04:34:06PM -0700, Edward Pilatowicz wrote:

> thanks for taking the time to look at this and sorry for the delay in
> replying.

Compared to /my/ delay...

> > That is, I think vdisks should just use path:/// and nfs:// not have
> > their own special schemes.
> this is easy enough to change.
> but would you mind explaining what the detection techniques are for
> the different vdisk formats?  are they files with well known extensions?
> all directories with well known extensions?  directories with certain
> contents?

Well, the format comes from the XML property file present in the vdisk.
At import time, it's a combination of sniffing the type from the file
contents and some static checks on the file name (namely the .raw and
.iso suffixes).
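
By way of illustration, the static suffix check could be as simple as the
C sketch below. The function and type names are made up, not the actual
vdiskadm code, and the content-sniffing / XML-property path is only
hinted at in a comment:

#include <string.h>

/* Hypothetical format tags, for illustration only. */
typedef enum { VD_UNKNOWN, VD_RAW, VD_ISO } vd_format_t;

/*
 * Guess a vdisk format from the file name suffix alone; anything we
 * don't recognize falls back to content sniffing / the XML properties.
 */
static vd_format_t
vd_format_from_suffix(const char *path)
{
    const char *dot = strrchr(path, '.');

    if (dot == NULL)
        return (VD_UNKNOWN);
    if (strcmp(dot, ".raw") == 0)
        return (VD_RAW);
    if (strcmp(dot, ".iso") == 0)
        return (VD_ISO);
    return (VD_UNKNOWN);
}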

> > Hmmph. I really don't want the 'xvm' user to be exposed any more than it
> > is. It was always intended as an internal detail of the Xen least
> > privilege implementation. Encoding it as the official UID to access
> > shared storage seems very problematic to me. Not least, it means xend,
> > qemu-dm, etc. can suddenly write to all the shared storage even if it's
> > nothing to do with Xen.
> >
> > Please make this be a 'user' option that the user can specify (with a
> > default of root or whatever). I'm pretty sure we'd agreed on that last
> > time we talked about this?
> i have no objections to adding a 'user' option.
> but i'd still like to avoid defaulting to root and being subject to
> root-squashing.

How about defaulting to the owner of the containing directory? If it's
root, you won't be able to write if you're root-squashed (or not the
root user) anyway.

Failing that, I'd indeed prefer a different user, especially one that's
configurable in terms of uid/gid.
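
Roughly what I have in mind (just a sketch with a made-up helper name,
not existing xVM code): stat() the directory the vdisk lives in and use
its owner, falling back to whatever the 'user' option was set to:

#include <sys/types.h>
#include <sys/stat.h>

/*
 * Pick the uid to access a vdisk as: the owner of its containing
 * directory if we can stat() it, otherwise the configured fallback
 * (e.g. the value of the 'user' option).
 */
static uid_t
vd_access_uid(const char *dir, uid_t fallback)
{
    struct stat st;

    if (stat(dir, &st) == 0)
        return (st.st_uid);
    return (fallback);
}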

> there wouldn't really be any problem with changing this from a lofi
> service to a vdisk service.  both services would do the same thing.
> each would have a daemon that keeps track of the current vdisks on the
> system and ensures that a vdisk utility remains running for each one.
> if you want smf to manage the vdisk utility processes directly, then
> we'll have to create a new smf service each time a vdisk is accessed
> and destroy that smf service each time the vdisk is taken down.

Ah, right, I see now. Yes, out of the two options, I'd prefer each vdisk
to have its own fault container (SMF service). You avoid the need for
another hierarchy of fault management processes (lofid), and get the
benefit of enhanced visibility:

# svcs
STATE          STIME    FMRI
online         15:33:19 svc:/system/lofi:dsk0
online         15:33:19 svc:/system/lofi:dsk1
maintenance    15:33:19 svc:/system/lofi:dsk2
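
For what it's worth, creating and enabling such an instance from whatever
manages the vdisks would only be a handful of libscf calls. A rough
sketch (the svc:/system/lofi name is just the one from the example above,
'dsk3' is a made-up new instance, and methods, property groups and most
error handling are omitted):

#include <stdio.h>
#include <libscf.h>

int
main(void)
{
    scf_handle_t *h = scf_handle_create(SCF_VERSION);
    scf_service_t *svc = scf_service_create(h);
    scf_instance_t *inst = scf_instance_create(h);

    /* Attach a new per-vdisk instance under the existing service. */
    if (scf_handle_bind(h) != 0 ||
        scf_handle_decode_fmri(h, "svc:/system/lofi", NULL, svc,
        NULL, NULL, NULL, 0) != 0 ||
        scf_service_add_instance(svc, "dsk3", inst) != 0) {
        (void) fprintf(stderr, "scf: %s\n",
            scf_strerror(scf_error()));
        return (1);
    }

    /* Hand the new instance over to the restarter. */
    (void) smf_enable_instance("svc:/system/lofi:dsk3", 0);

    scf_instance_destroy(inst);
    scf_service_destroy(svc);
    scf_handle_destroy(h);
    return (0);
}

Tearing the vdisk down would presumably just disable the instance and
scf_instance_delete() it again.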

Heck, if we ever do represent zones or domains as SMF instances, we
could even build dependencies on the lofi instances. (Presuming we
somehow rewhack xVM to start a service instead of an isolated vdisk
utility process.)
