----- Original Message -----
> From: "Adam Litke" <a...@us.ibm.com>
> To: "Shu Ming" <shum...@linux.vnet.ibm.com>
> Cc: "engine-devel" <engine-de...@ovirt.org>, "VDSM Project Development" 
> <vdsm-devel@lists.fedorahosted.org>
> Sent: Tuesday, January 22, 2013 2:20:19 PM
> Subject: Re: [vdsm] [Engine-devel]  RFC: New Storage API
> On Tue, Jan 22, 2013 at 11:36:57PM +0800, Shu Ming wrote:
> > 2013-1-15 5:34, Ayal Baron:
> > >image and volume are overused everywhere and it would be extremely
> > >confusing to have multiple meanings to the same terms in the same
> > >system (we have image today which means virtual disk and volume
> > >which means a part of a virtual disk).
> > >Personally I don't like the distinction between image and volume
> > >done in ec2/openstack/etc seeing as they're treated as different
> > >types of entities there while the only real difference is
> > >mutability (images are read-only, volumes are read-write).
> > >To move to the industry terminology we would need to first change
> > >all references we have today to image and volume in the system (I
> > >would say also in ovirt-engine side) to align with the new
> > >meaning.
> > >Despite my personal dislike of the terms, I definitely see the
> > >value in converging on the same terminology as the rest of the
> > >industry but to do so would be an arduous task which is out of
> > >scope of this discussion imo (patches welcome though ;)
> > 
> > Another distinction between OpenStack and oVirt is how Nova and
> > ovirt-engine look upon storage systems. In OpenStack, a stand-alone
> > storage service (Cinder) exports the raw storage block device to Nova.
> > In oVirt, on the other hand, the storage system is tightly bound to the
> > cluster scheduling system, which integrates the storage sub-system, the
> > VM dispatching sub-system, and the ISO image sub-system. This
> > combination makes the sub-systems integrate into a whole that is easy
> > to deploy, but it also makes them more opaque and harder to reuse and
> > maintain. This new storage API proposal gives us an opportunity to
> > separate these sub-systems into new components that export better,
> > loosely coupled APIs to VDSM.
> A very good point and an important goal in my opinion.  I'd like to see
> ovirt-engine become more of a GUI for configuring the storage component
> (like it does for Gluster) rather than the centralized manager of
> storage.  The clustered storage should be able to take care of itself as
> long as the peer hosts can negotiate the SDM role.
> It would be cool if someone could actually dedicate a non-virtualization
> host where its only job is to handle SDM operations.  Such a host could
> choose to only deploy the standalone HSM service and not the complete
> vdsm package.

OpenStack and oVirt have different architectures and goals.

Even though they are both marketed as IaaS solutions, they are designed for
different purposes.

OpenStack is designed around the idea of simplifying the *development* and
*integration* of IaaS subsystems through standardization of interfaces. If you
design a system that requires access to some type of infrastructural resource,
you can develop against the OpenStack API for that specific resource and
consume different underlying implementations of the subsystem. Alternatively,
if you are creating a new subsystem implementation, one of your exposed APIs
can be the appropriate OpenStack API.

In short, OpenStack is a group of loosely coupled services meant to be
replicated and distributed across a cluster. Anyone can create their own
implementations of the APIs.
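To make "developing against a standardized API" concrete, here is a minimal
sketch of building a volume-creation request for the OpenStack Block Storage
(Cinder) v3 REST API. The endpoint, project ID, and token are placeholder
values; the URL path and body shape follow the published v3 API, but treat the
details as illustrative rather than authoritative.

```python
import json

def make_create_volume_request(endpoint, project_id, token, size_gb, name):
    """Build (url, headers, body) for a Cinder v3 volume-create call.

    Sketch only: the request is constructed but not sent, and the
    endpoint/project_id/token arguments are placeholders.
    """
    url = f"{endpoint}/v3/{project_id}/volumes"
    headers = {
        "X-Auth-Token": token,          # Keystone-issued token
        "Content-Type": "application/json",
    }
    body = json.dumps({"volume": {"size": size_gb, "name": name}})
    return url, headers, body

url, headers, body = make_create_volume_request(
    "http://cinder.example:8776", "proj-123", "secret-token", 10, "data-disk")
print(url)  # http://cinder.example:8776/v3/proj-123/volumes
```

Any backend that implements this same API shape can be swapped in underneath,
which is exactly the substitutability argument being made here.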

oVirt is designed around the idea of simplifying the *management* of said
IaaS subsystems.

The ovirt-engine is the cluster manager and VDSM is the host manager. Every
host in the cluster has a host manager installed on it (VDSM), and some
(currently only one) might have the cluster manager (ovirt-engine), which is
the effective brain. oVirt ideally contains only managing entities. VDSM's
APIs delegate tasks to the subsystems whose scope they fall in, and those
subsystems have their own APIs: for VMs you have libvirt, for networking you
have the Linux management tools and maybe netcf, for policy we now have MOM,
for iSCSI we have iscsiadm, and so on. The only odd one out is the image
provisioning subsystem, which I will get to, don't worry.
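The delegation pattern described above can be sketched as a thin facade whose
methods own no logic of their own and simply forward to per-subsystem
backends. The class and method names below are invented for illustration and
are not actual VDSM code:

```python
# Hypothetical sketch: a host-manager facade that forwards every call to
# the subsystem that covers it, keeping no business logic of its own.
class HostManager:
    def __init__(self, vm_backend, net_backend, iscsi_backend):
        self._vm = vm_backend        # e.g. a libvirt wrapper
        self._net = net_backend      # e.g. Linux networking tools / netcf
        self._iscsi = iscsi_backend  # e.g. an iscsiadm wrapper

    def start_vm(self, vm_id):
        return self._vm.start(vm_id)

    def add_bridge(self, name):
        return self._net.add_bridge(name)

    def login_target(self, portal, iqn):
        return self._iscsi.login(portal, iqn)
```

Since each backend keeps its own complete API, a cluster manager could in
principle bypass the facade and call the subsystems directly, which is the
redundancy argument made later in this message.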

This means, if you didn't already gather, that no host managed by oVirt can
exist without either VDSM or the ovirt-engine living on it. That being said, I
am a huge proponent of making all subsystems optional, meaning you could have
a VDSM that doesn't have the libvirt or networking glue bits and just has
storage and Gluster.

To put it simply, no host without a *managing* entity on it.

But, as you have all pointed out, VDSM is redundant. There is no reason why
the engine can't just directly ask libvirt to do things. There is no reason
why we can't make a general iSCSI management API and expose it on its own,
independent from other services. VDSM is a Frankensteinesque abomination of
misplaced business logic and pass-through APIs.
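A stand-alone iSCSI management API of the kind suggested here could start as
little more than a structured wrapper around iscsiadm. A hedged sketch: the
function names are hypothetical, while the flags shown are the standard
open-iscsi discovery and login invocations. The functions build the command
lines without executing them, so they can be inspected before being handed to
subprocess.run():

```python
# Hypothetical sketch of a minimal iSCSI management layer: each function
# returns the argv it would run rather than running it.
def discover_targets(portal):
    # open-iscsi sendtargets discovery against a portal (host[:port])
    return ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal]

def login_target(portal, iqn):
    # log in to one discovered target node
    return ["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"]

def logout_target(portal, iqn):
    # log out of a target node
    return ["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--logout"]
```

An independent service could expose operations like these over REST or
JSON-RPC, which is the "expose it on its own" idea above.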

This is why everyone is having a hard time figuring out what to do with it. I
have people on one side bothering me about putting more *management* logic in.
On the other side, people are calling for standardized APIs, which can't
coexist with engine-specific management logic.

To sum up this long rant: if you want to split up the bits that make up VDSM
into independent subsystems, just don't work on VDSM, because VDSM (excluding
the storage subsystem) is just glue anyway. Better to just write another
OpenStack back-end for something.

As for the storage core: it is the only thing in VDSM that is not glue and
that actually needs to sit on the host. The new storage code I'm writing is
being developed in the hope that it will be independent from the rest of
VDSM, with VDSM providing just the glue, like everything else.

> --
> Adam Litke <a...@us.ibm.com>
> IBM Linux Technology Center
> _______________________________________________
> vdsm-devel mailing list
> vdsm-devel@lists.fedorahosted.org
> https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel