Re: [openstack-dev] [nova] How should libvirt pools work with distributed storage drivers?

2014-12-17 Thread Peter Penchev
On Mon, Dec 1, 2014 at 9:52 PM, Solly Ross sr...@redhat.com wrote:
> Hi Peter,

>> Right.  So just one more question now - seeing as the plan is to
>> deprecate non-libvirt-pool drivers in Kilo and then drop them entirely
>> in L, would it still make sense for me to submit a spec today for a
>> driver that would keep the images in our own proprietary distributed
>> storage format?  It would certainly seem to make sense for us and for
>> our customers right now and in the upcoming months - a bird in the
>> hand and so on; and we would certainly prefer it to be upstreamed in
>> OpenStack, since subclassing imagebackend.Backend is a bit difficult
>> right now without modifying the installed imagebackend.py (and of
>> course I meant Backend when I spoke about subclassing DiskImage in my
>> previous message).  So is there any chance that such a spec would be
>> accepted for Kilo?

> It doesn't hurt to try submitting a spec.  On the one hand, the driver
> would come into life (so to speak) as deprecated, which seems kind
> of silly (if there's no libvirt support at all for your driver, you
> couldn't just subclass the libvirt storage pool backend).  On the
> other hand, it's preferable to have code be upstream, and since you
> don't have a libvirt storage driver yet, the only way to have support
> is to use a legacy-style driver.

Thanks for the understanding!

> Personally, I wouldn't mind having a new legacy driver as long as
> you're committed to getting your storage driver into libvirt, so that
> we don't have to do extra work when the time comes to remove the legacy
> drivers.

Yes, that's very reasonable, and we are indeed committed to getting
our work into libvirt proper.

> If you do end up submitting a spec, keep in mind that, for ease of
> migration to the libvirt storage pool driver, you should have volume names of
> '{instance_uuid}_{disk_name}' (similarly to the way that LVM does it).
>
> If you have a spec or some code, I'd be happy to give some feedback,
> if you'd like (post it on Gerrit as WIP, or something like that).

Well, I should probably have mentioned this earlier, seeing as the Kilo-1
spec deadline is almost upon us: the spec itself is at
https://review.openstack.org/137830/ - it would be great if you could
spare a minute to look at it.  Thanks in advance!

G'luck,
Peter



Re: [openstack-dev] [nova] How should libvirt pools work with distributed storage drivers?

2014-12-01 Thread Solly Ross
Hi Peter,

> Right.  So just one more question now - seeing as the plan is to
> deprecate non-libvirt-pool drivers in Kilo and then drop them entirely
> in L, would it still make sense for me to submit a spec today for a
> driver that would keep the images in our own proprietary distributed
> storage format?  It would certainly seem to make sense for us and for
> our customers right now and in the upcoming months - a bird in the
> hand and so on; and we would certainly prefer it to be upstreamed in
> OpenStack, since subclassing imagebackend.Backend is a bit difficult
> right now without modifying the installed imagebackend.py (and of
> course I meant Backend when I spoke about subclassing DiskImage in my
> previous message).  So is there any chance that such a spec would be
> accepted for Kilo?

It doesn't hurt to try submitting a spec.  On the one hand, the driver
would come into life (so to speak) as deprecated, which seems kind
of silly (if there's no libvirt support at all for your driver, you
couldn't just subclass the libvirt storage pool backend).  On the
other hand, it's preferable to have code be upstream, and since you
don't have a libvirt storage driver yet, the only way to have support
is to use a legacy-style driver.

Personally, I wouldn't mind having a new legacy driver as long as
you're committed to getting your storage driver into libvirt, so that
we don't have to do extra work when the time comes to remove the legacy
drivers.

If you do end up submitting a spec, keep in mind that, for ease of
migration to the libvirt storage pool driver, you should have volume names of
'{instance_uuid}_{disk_name}' (similarly to the way that LVM does it).
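
Purely as illustration - a hypothetical helper with made-up values, not
actual Nova code - the convention amounts to:

    # Hypothetical sketch of the volume naming convention; not Nova code.
    def volume_name(instance_uuid, disk_name):
        # e.g. volume_name('f2b6...', 'disk') -> 'f2b6..._disk'
        return '{0}_{1}'.format(instance_uuid, disk_name)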

If you have a spec or some code, I'd be happy to give some feedback,
if you'd like (post it on Gerrit as WIP, or something like that).

Best Regards,
Solly Ross




Re: [openstack-dev] [nova] How should libvirt pools work with distributed storage drivers?

2014-11-26 Thread Solly Ross
Hi!
 
> Some days ago, a bunch of Nova specs were approved for Kilo. Among them was
> https://blueprints.launchpad.net/nova/+spec/use-libvirt-storage-pools
>
> Now, while I do recognize the wisdom of using storage pools, I do see a
> couple of possible problems with this, especially in the light of my
> upcoming spec proposal for using StorPool distributed storage for the VM
> images.
>
> My main concern is with the explicit specification that the libvirt pools
> should be of the directory type, meaning that all the images should be
> visible as files in a single directory. Would it be possible to extend the
> specification to allow other libvirt pool types, or to allow other ways of
> pointing Nova at the filesystem path of the VM image?

The specification was never intended to restrict storage pools to being
file-based.  In fact, it was my intention that all different types of pools
be supported.  The specification dedicates several paragraphs to discussing
file-based pools, since transitioning from legacy file-based backends to
the storage pool backend requires a bit of work, while other backends, like
LVM, can simply be turned into a pool without any movement or renaming of the
underlying volumes.

In fact, LVM works excellently (it's one of the pool types I use frequently
in testing to make sure migration works regardless of source and destination
pool type).
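
To make that concrete, here is a rough, untested sketch using the libvirt
Python bindings of exposing an existing volume group as a 'logical' pool
(the 'nova-vg' name is made up):

    import libvirt

    conn = libvirt.open('qemu:///system')
    # Define a 'logical' (LVM) pool on top of an existing volume group;
    # its existing logical volumes simply show up as the pool's volumes.
    pool = conn.storagePoolDefineXML("""
    <pool type='logical'>
      <name>nova-vg</name>
      <source><name>nova-vg</name></source>
      <target><path>/dev/nova-vg</path></target>
    </pool>""", 0)
    pool.create(0)             # activate the pool
    print(pool.listVolumes())  # the pre-existing LVs appear as volumes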

 
> Where this is coming from is that StorPool volumes (which we intend to write
> a DiskImage subclass for) appear in the host filesystem as
> /dev/storpool/volumename special files (block devices). Thus, it would be...
> interesting... to find ways to make them show up under a specific directory
> (yes, we could do lots and lots of symlink magic, but we've been down that
> road before and it doesn't necessarily lead to Good Things(tm)). I see that
> the spec has several mentions of "yeah, we should special-case Ceph/RBD
> here, since they do things in a different way" - well, StorPool does things
> in a slightly different way, too :)

The reason that I wrote something about Ceph/RBD is that the Ceph storage driver
in libvirt is incomplete -- it doesn't yet have support for
virStorageVolCreateXMLFrom, so we need to work around that.
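
One possible shape for such a workaround (an untested sketch using the
libvirt Python bindings; the helper name is made up, and this is my
illustration rather than the actual Nova code):

    import libvirt

    def clone_volume(pool, new_vol_xml, src_vol):
        # Preferred path: let the libvirt storage driver clone the volume.
        try:
            return pool.createXMLFrom(new_vol_xml, src_vol, 0)
        except libvirt.libvirtError:
            # Drivers without virStorageVolCreateXMLFrom support (RBD, at
            # the time of writing): create an empty volume and have the
            # caller copy the data across at a higher layer.
            return pool.createXML(new_vol_xml, 0)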

 
> And yes, we do have work in progress to expose the StorPool cluster's volumes
> as a libvirt pool, but this might take a bit of time to complete and then it
> will most probably take much more time to get into the libvirt upstream
> *and* into the downstream distributions, so IMHO "okay, let's use different
> libvirt pool types" might not be entirely enough for us, although it would
> be a possible workaround.

The intention was that new storage pool types should try to get themselves in
as new libvirt storage pool drivers, and then they should just work in Nova
(there is one line that needs to be modified, but other than that, you
should just be able to start using them).
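
In other words, once a StorPool storage driver exists in libvirt, finding a
disk's host path would look the same as for any other pool - an untested
sketch, with made-up pool and volume names:

    import libvirt

    conn = libvirt.open('qemu:///system')
    pool = conn.storagePoolLookupByName('storpool')  # hypothetical pool name
    vol = pool.storageVolLookupByName('some-uuid_disk')
    # For block-device-backed pools this is the path on the host --
    # e.g. /dev/vg/lv for LVM, or /dev/storpool/volumename in your case.
    print(vol.path())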

 
> Of course, it's entirely possible that I have not completely understood the
> proposed mechanism; I do see some RBD patches in the previous incarnations
> of this blueprint, and if I read them right, it *might* be trivial to
> subclass the new libvirt storage pool support thing and provide the
> /dev/storpool/volumename paths to the upper layers. If this is so, feel free
> to let me know I've wasted your time in reading this e-mail, in strong terms
> if necessary :)

I dislike using strong terms ;-), but I do think you may have misread the spec.
If you are unclear, you can catch me next week on freenode as directxman12 and
we can discuss further (I'm out on PTO this week).

Best Regards,
Solly Ross



Re: [openstack-dev] [nova] How should libvirt pools work with distributed storage drivers?

2014-11-26 Thread Peter Penchev
On Wed, Nov 26, 2014 at 10:26 PM, Solly Ross sr...@redhat.com wrote:
> Hi!

>> Some days ago, a bunch of Nova specs were approved for Kilo. Among them was
>> https://blueprints.launchpad.net/nova/+spec/use-libvirt-storage-pools
>>
>> Now, while I do recognize the wisdom of using storage pools, I do see a
>> couple of possible problems with this, especially in the light of my
>> upcoming spec proposal for using StorPool distributed storage for the VM
>> images.
>>
>> My main concern is with the explicit specification that the libvirt pools
>> should be of the directory type, meaning that all the images should be
>> visible as files in a single directory. Would it be possible to extend the
>> specification to allow other libvirt pool types, or to allow other ways of
>> pointing Nova at the filesystem path of the VM image?

> The specification was never intended to restrict storage pools to being
> file-based.  In fact, it was my intention that all different types of pools
> be supported.  The specification dedicates several paragraphs to discussing
> file-based pools, since transitioning from legacy file-based backends to
> the storage pool backend requires a bit of work, while other backends, like
> LVM, can simply be turned into a pool without any movement or renaming of the
> underlying volumes.

Ah, thanks.  That makes sense.  Guess I'll have to actually read the
proposed patches then.

> In fact, LVM works excellently (it's one of the pool types I use frequently
> in testing to make sure migration works regardless of source and destination
> pool type).


>> Where this is coming from is that StorPool volumes (which we intend to write
>> a DiskImage subclass for) appear in the host filesystem as
>> /dev/storpool/volumename special files (block devices). Thus, it would be...
>> interesting... to find ways to make them show up under a specific directory
>> (yes, we could do lots and lots of symlink magic, but we've been down that
>> road before and it doesn't necessarily lead to Good Things(tm)). I see that
>> the spec has several mentions of "yeah, we should special-case Ceph/RBD
>> here, since they do things in a different way" - well, StorPool does things
>> in a slightly different way, too :)

> The reason that I wrote something about Ceph/RBD is that the Ceph storage
> driver in libvirt is incomplete -- it doesn't yet have support for
> virStorageVolCreateXMLFrom, so we need to work around that.

OK, so we'll make sure to implement it in our own libvirt storage driver :)

>> And yes, we do have work in progress to expose the StorPool cluster's volumes
>> as a libvirt pool, but this might take a bit of time to complete and then it
>> will most probably take much more time to get into the libvirt upstream
>> *and* into the downstream distributions, so IMHO "okay, let's use different
>> libvirt pool types" might not be entirely enough for us, although it would
>> be a possible workaround.

> The intention was that new storage pool types should try to get themselves in
> as new libvirt storage pool drivers, and then they should just work in Nova
> (there is one line that needs to be modified, but other than that, you
> should just be able to start using them).


>> Of course, it's entirely possible that I have not completely understood the
>> proposed mechanism; I do see some RBD patches in the previous incarnations
>> of this blueprint, and if I read them right, it *might* be trivial to
>> subclass the new libvirt storage pool support thing and provide the
>> /dev/storpool/volumename paths to the upper layers. If this is so, feel free
>> to let me know I've wasted your time in reading this e-mail, in strong terms
>> if necessary :)

> I dislike using strong terms ;-), but I do think you may have misread the
> spec.  If you are unclear, you can catch me next week on freenode as
> directxman12 and we can discuss further (I'm out on PTO this week).

Right.  So just one more question now - seeing as the plan is to
deprecate non-libvirt-pool drivers in Kilo and then drop them entirely
in L, would it still make sense for me to submit a spec today for a
driver that would keep the images in our own proprietary distributed
storage format?  It would certainly seem to make sense for us and for
our customers right now and in the upcoming months - a bird in the
hand and so on; and we would certainly prefer it to be upstreamed in
OpenStack, since subclassing imagebackend.Backend is a bit difficult
right now without modifying the installed imagebackend.py (and of
course I meant Backend when I spoke about subclassing DiskImage in my
previous message).  So is there any chance that such a spec would be
accepted for Kilo?

G'luck,
Peter
