thanks for taking the time to look at this and sorry for the delay in
replying.  my comments are inline below.

On Sat, Sep 05, 2009 at 11:13:07PM +0100, John Levon wrote:
> On Thu, May 21, 2009 at 04:55:15PM +0800, Edward Pilatowicz wrote:
> > File storage objects:
> >
> >     path:///<file-absolute>
> >     nfs://<host>[:port]/<file-absolute>
> >
> > Vdisk storage objects:
> >
> >     vpath:///<file-absolute>
> >     vnfs://<host>[:port]/<file-absolute>
> This makes me uncomfortable. The fact it's a vdisk is derivable except
> in one case: creation. And when creating, we will already want some way
> to specify the underlying format of the vdisk, so we could easily hook
> the "make it a vdisk" option there.
> That is, I think vdisks should just use path:/// and nfs:// not have
> their own special schemes.

this is easy enough to change.
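
so, assuming we drop the vdisk-specific schemes, a vdisk would be
specified the same way as a plain file, for example (the paths here are
just examples):

        path:///export/xvm/vm1.vdisk
        nfs://hostname/export/xvm/vm1.vdisk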

but would you mind explaining what the detection techniques are for
the different vdisk formats?  are they files with well known extensions?
all directories with well known extensions?  directories with certain
contents?

> > In order to avoid root squashing, or requiring users to setup special
> > configurations on their NFS servers, whenever the zone framework
> > attempts to create a storage object file or vdisk, it will temporarily
> > change its uid and gid to the "xvm" user and group, and then create the
> > file with 0600 access permissions.
> Hmmph. I really don't want the 'xvm' user to be exposed any more than it
> is. It was always intended as an internal detail of the Xen least
> privilege implementation. Encoding it as the official UID to access
> shared storage seems very problematic to me. Not least, it means xend,
> qemu-dm, etc. can suddenly write to all the shared storage even if it's
> nothing to do with Xen.
> Please make this be a 'user' option that the user can specify (with a
> default of root or whatever). I'm pretty sure we'd agreed on that last
> time we talked about this?

i have no objections to adding a 'user' option.

but i'd still like to avoid defaulting to root and being subject to
root squashing.  the xvm user seems like a good way to do this, but if
you don't like that then i could always introduce a new user just for
this purpose, say a zonesnfs user.
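
for example, the option could look something like this (hypothetical
zonecfg syntax, the property names are just placeholders):

        add device
                set storage=nfs://hostname/export/xvm/vm1.disk
                set user=zonesnfs
        end

if no user is specified we'd fall back to whatever default we settle on.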

> > For RAS purposes, we will need to ensure that this vdisk utility is
> > always running.  Hence we will introduce a new lofi smf service
> > svc:/system/lofi:default, which will start a new /usr/lib/lofi/lofid
> > daemon, which will manage the starting, stopping, monitoring, and
> > possible re-start of the vdisk utility.  Re-starts of vdisk utility
> I'm confused by this bit: isn't startd what manages "starting, stopping,
> monitoring, and possible re-start" of daemons? Why isn't this
> svc:/system/vdisk:default ? What is lofid actually doing?

well, as specified in the proposal, the administrative interface for
accessing vdisks is via lofi:

Here are some examples of how this lofi functionality could be used
(outside of the zone framework).  If there are no lofi devices on
the system, and an admin runs the following command:
        lofiadm -a -l /export/xvm/vm1.disk

they would end up with the following devices:
        /dev/lofi/dsk0/p#               - for # == 0 - 4
        /dev/lofi/dsk0/s#               - for # == 0 - 15
        /dev/rlofi/dsk0/p#              - for # == 0 - 4
        /dev/rlofi/dsk0/s#              - for # == 0 - 15

so in this case, the lofi service would be started, and it would manage
starting and stopping the vdisk utility process that services the
backend for this lofi device.
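
likewise, when the admin tears down the device (assuming lofiadm -d is
extended to accept the new device names):

        lofiadm -d /dev/lofi/dsk0

the lofi service would stop the vdisk utility process backing it.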

i originally made this a lofi service because eventually it would also
be nice if we could persist lofi configuration across reboots, and a
lofi smf service would be a good way to do this.
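
for example, the service could keep its configuration in smf
properties, something like this (a sketch only, the property group and
property names are made up):

        svccfg -s svc:/system/lofi:default addpg vdisks application
        svccfg -s svc:/system/lofi:default setprop \
            vdisks/vm1 = astring: "/export/xvm/vm1.disk"

then at boot lofid could walk those properties and re-create each
device.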

there wouldn't really be any problem with changing this from a lofi
service to a vdisk service.  both services would do the same thing:
each would have a daemon that keeps track of the current vdisks on the
system and ensures that a vdisk utility remains running for each one.

if you want smf to manage the vdisk utility processes directly, then
we'll have to create a new smf service each time a vdisk is accessed
and destroy that smf service each time the vdisk is taken down.
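
i.e. something along these lines (a sketch only, the fmri is made up):

        # when the vdisk is brought up
        svccfg -s svc:/system/vdisk add vm1
        svcadm enable svc:/system/vdisk:vm1

        # when the vdisk is taken down
        svcadm disable svc:/system/vdisk:vm1
        svccfg -s svc:/system/vdisk delete vm1

so startd would do the monitoring and restarting, at the cost of
creating and deleting smf instances on the fly.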

i don't really have a strong opinion on how this gets managed, so if you
have a preference then let me know and i can update the proposal.