On Thu, Sep 17, 2009 at 01:13:53AM +0100, John Levon wrote:
> On Wed, Sep 16, 2009 at 04:34:06PM -0700, Edward Pilatowicz wrote:
>
> > thanks for taking the time to look at this and sorry for the delay in
> > replying.
>
> Compared to /my/ delay...
>
> > > That is, I think vdisks should just use path:/// and nfs:// not have
> > > their own special schemes.
> >
> > this is easy enough to change.
> >
> > but would you mind explaining what the detection techniques are for
> > the different vdisk formats?  are they files with well known extensions?
> > all directories with well known extensions?  directories with certain
> > contents?
>
> Well, the format comes from the XML property file present in the vdisk.

thereby implying that the vdisk path is a directory.  ok, that's easy
enough to detect.

> At import time, it's a combination of sniffing the type from the file,
> and some static checks on file name (namely .raw and .iso suffixes).
>

well, as long as the suffixes above apply to directories and not to
files, then i think we'd be ok.  but if the extensions above apply to
files, then we have a problem.
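
for concreteness, here's roughly the classification order i'd expect if
the suffixes only apply to directories.  this is just a sketch; the
"vdisk.xml" property file name is a guess on my part since you didn't
say what it's actually called:

        # sketch: classify a path as a vdisk vs. a plain file
        if [ -d "$path" -a -f "$path/vdisk.xml" ]; then
                echo "vdisk: format comes from the XML property file"
        elif [ -f "$path" ]; then
                case "$path" in
                *.raw|*.iso) echo "image type from the filename suffix" ;;
                *)           echo "sniff the file contents for the type" ;;
                esac
        fi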

in the xvm world, you don't have any issues with accessing the files
above, since you know that every object exported to a domain contains
a virtual disk and therefore contains a label.

but with zones this isn't the case.  in my proposal there are two
access modes for files: raw file mode, where a zpool is created
directly inside the file, and vdisk mode, where we first create a
label within the device and then create a zpool inside one of the
partitions.

so previously if the user specified:
        file:///.../foo.raw
then we would create a zpool directly within the file, no label.

and if the user specified:
        vfile:///.../foo.raw

then we would use lofi with the newly proposed -l option to access the
file, then we'd put a label on it (via the lofi device), and then create
a zpool in one of the partitions (and once again, zfs would access the
file through the lofi device).
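
to make the two flows concrete, here's a sketch (the paths are made up,
and -l is only the option proposed in this thread, not something that
exists today):

        # raw file mode (file:///): zpool created directly in the
        # file, no label
        mkfile 10g /net/host/vol/foo.raw
        zpool create tank /net/host/vol/foo.raw

        # vdisk mode (vfile:///): attach via lofi with the proposed
        # -l option, label the lofi device, then build the zpool in
        # one of the partitions (zfs keeps going through lofi)
        lofiadm -l -a /net/host/vol/foo.raw
        # ... write a label via the resulting /dev/lofi node, then
        # create the zpool in one of its partitions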

so in these two cases, how can we make the access mode determination
without having the separate uri syntax?

> > > Hmmph. I really don't want the 'xvm' user to be exposed any more than it
> > > is. It was always intended as an internal detail of the Xen least
> > > privilege implementation. Encoding it as the official UID to access
> > > shared storage seems very problematic to me. Not least, it means xend,
> > > qemu-dm, etc. can suddenly write to all the shared storage even if it's
> > > nothing to do with Xen.
> > >
> > > Please make this be a 'user' option that the user can specify (with a
> > > default of root or whatever). I'm pretty sure we'd agreed on that last
> > > time we talked about this?
> >
> > i have no objections to adding a 'user' option.
> >
> > but i'd still like to avoid defaulting to root and being subject to
> > root-squashing.
>
> How about defaulting to the owner of the containing directory? If it's
> root, you won't be able to write if you're root-squashed (or not root
> user) anyway.
>
> Failing that, I'd indeed prefer a different user, especially one that's
> configurable in terms of uid/gid.
>

if a directory is owned by a non-root user and i want to create a file
there, i think it's a great idea to switch to the uid of the directory
owner to do my file operations.  i'll add that to the proposal.

but, say i'm on a host that is not subject to root squashing and i need
to create a file on a share that is only writable by root.  in that
case, should i go ahead and create a file owned by root?  imho, no.
instead, i'd rather create the file as some other user.  why?  because
if the administrator then tries to migrate that zone to a host that is
subject to root squashing from the server, then i'd lose access to that
file.  eliminating all file accesses as root lets us sidestep
root-squashing entirely and removes a whole class of potential failure
modes.

this would be my argument for adding a new non-root user that could be
used as a fallback for remote file access in cases that would otherwise
default to the root user.
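
so the uid selection i have in mind looks something like this (the
"vdisk" fallback user name is hypothetical; it would be whatever new
non-root user we agree on):

        # sketch: pick a uid for creating files on a remote share
        owner=`ls -ld /net/host/vol | awk '{print $3}'`
        if [ "$owner" != "root" ]; then
                # create the file as the directory owner
                su $owner -c "mkfile 10g /net/host/vol/foo.raw"
        else
                # never create files as root; fall back to a dedicated
                # non-root user so root-squashing can't bite us later
                su vdisk -c "mkfile 10g /net/host/vol/foo.raw"
        fi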

> > there wouldn't really be any problem with changing this from a lofi
> > service to be a vdisk service.  both services would do the same thing.
> > each would have a daemon that keeps track of the current vdisks on the
> > system and ensures that a vdisk utility remains running for each one.
> >
> > if you want smf to manage the vdisk utility processes directly, then
> > we'll have to create a new smf service each time a vdisk is accessed
> > and destroy that smf service each time the vdisk is taken down.
>
> Ah, right, I see now. Yes, out of the two options, I'd prefer each vdisk
> to have its own fault container (SMF service). You avoid the need for
> another hierarchy of fault management process (lofid), and get the
> benefit of enhanced visibility:
>
> # svcs
> ...
> online         15:33:19 svc:/system/lofi:dsk0
> online         15:33:19 svc:/system/lofi:dsk1
> maintenance    15:33:19 svc:/system/lofi:dsk2
>
> Heck, if we ever do represent zones or domains as SMF instances, we
> could even build dependencies on the lofi instances. (Presuming we
> somehow rewhack xVM to start a service instead of an isolated vdisk
> process.)
>

it's a little fine-grained for my tastes, but ok.

one other thing to consider is that all the services above will be
running the vdisk utility, which will be shuffling data between a lofi
device node and a vdisk file.  and since lofi nodes don't persist across
reboots, the services above shouldn't persist across a reboot either.  i
guess that the method script for the services above could delete the
service if it noticed that the corresponding device node associated with
the vdisk was missing.

i can write this into the proposal as well.
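
e.g., instantiating and tearing down a per-vdisk instance might look
something like this (the service name, property group, and device path
are all hypothetical):

        # create and enable an smf instance for one vdisk
        svccfg -s system/vdisk add dsk0
        svccfg -s system/vdisk:dsk0 addpg vdisk application
        svccfg -s system/vdisk:dsk0 setprop vdisk/path = astring: /vol/foo.vdisk
        svcadm enable system/vdisk:dsk0

        # in the start method: self-destruct after a reboot if the
        # backing lofi device node is gone
        lofi_dev=/dev/lofi/1    # would be derived from the instance's properties
        if [ ! -e "$lofi_dev" ]; then
                svccfg delete system/vdisk:dsk0
                exit 0
        fi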

ed