On 2014-10-22 10:05 AM, John Griffith wrote:

Ideas started spreading from there to "Using a Read Only Cinder Volume
per image", to "A Glance owned Cinder Volume" that would behave pretty
much like the current local disk/file-system model (Create a Cinder Volume
for Glance, attach it to the Glance Server, partition, format and
mount... use as image store).

To add to John Griffith's explanation:

This is a feature we have wanted for *several* months and that we finally implemented in-house, in a different way, directly in Cinder.

Creating a volume from an image can take *several* minutes depending on the Cinder backend used. For anyone using boot-from-volume (BFV) as their main way to boot instances, this is a *HUGE* issue.

This causes several problems:

- When booting from volume, Nova thinks the volume creation failed because it took more than 2 minutes to create the volume from the image. Nova then "retries" the volume creation, still without success, and the instance goes into the ERROR state.

You now have 2 orphan volumes in Cinder, because Nova cannot clean up after itself properly: the volumes are still in the "creating" state when Nova attempts to delete them.
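A minimal simulation of this failure mode (the deadline, retry count and function names are illustrative, not Nova's actual code): Nova waits for the volume with a deadline, and a backend that needs longer than the deadline leaves the volume in "creating", so Nova's cleanup delete is rejected and the volume is orphaned.

```python
NOVA_DEADLINE = 120  # seconds Nova waits before giving up (illustrative)
RETRIES = 2          # Nova attempts the creation twice in this scenario


def boot_from_volume(create_seconds):
    """Return (instance_state, orphan_count) for a backend that needs
    `create_seconds` to build the volume from the image."""
    orphans = 0
    for _ in range(RETRIES):
        # Nova polls the volume until the deadline; if the backend is
        # still copying the image, the volume is stuck in 'creating'.
        status = "available" if create_seconds <= NOVA_DEADLINE else "creating"
        if status == "available":
            return "ACTIVE", orphans
        # Cleanup attempt: Cinder refuses to delete a volume that is
        # still in the 'creating' state, so it stays behind as an orphan.
        orphans += 1
    return "ERROR", orphans
```

With a fast backend the instance boots; with a slow one the instance ends up in ERROR with one orphan volume per attempt, which matches the two orphans described above.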

- So you try to create the volume yourself first and ask Nova to boot from it. Even when creating a volume from an image directly in Cinder (not through Nova), from a UX perspective this time is too long.

The time required adds up when using a SolidFire backend with QoS. You have time to get several coffees and a whole breakfast with your friends to talk about how creating a volume from an image is too damn slow.

What we did to fix the issue:

- We created a special tenant holding "golden volumes", which are in fact volumes created from images. These golden volumes are used to speed up volume creation.

The SolidFire driver has been modified so that when you create a volume from an image, it first checks whether there is a corresponding golden volume in that special tenant. If one is found, the volume is cloned into the appropriate tenant in a matter of seconds. If none is found, the normal creation process is used.
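The lookup-then-clone logic can be sketched as follows (this is an illustrative toy, not the actual SolidFire driver code; the tenant name, class and function names are invented for the example). The fast path clones an existing golden volume; the slow path falls back to downloading the image.

```python
class Backend:
    """Minimal fake backend standing in for the storage array API."""

    def __init__(self):
        # Maps (tenant, volume_name) -> source image_id.
        self.volumes = {}

    def find_volume_for_image(self, tenant, image_id):
        """Return the name of a volume in `tenant` built from `image_id`."""
        for (t, name), img in self.volumes.items():
            if t == tenant and img == image_id:
                return name
        return None

    def clone_volume(self, src_tenant, src_name, dest_tenant, dest_name):
        # A backend-side clone is near-instant: no image download involved.
        self.volumes[(dest_tenant, dest_name)] = self.volumes[(src_tenant, src_name)]
        return dest_name

    def create_from_image(self, tenant, image_id, name):
        # Slow path: download the whole image and write it to the volume.
        self.volumes[(tenant, name)] = image_id
        return name


GOLDEN_TENANT = "golden-volumes"  # hypothetical dedicated tenant


def create_volume_from_image(backend, tenant, image_id, name):
    """Try the golden-volume fast path first; fall back to normal creation."""
    golden = backend.find_volume_for_image(GOLDEN_TENANT, image_id)
    if golden is not None:
        return backend.clone_volume(GOLDEN_TENANT, golden, tenant, name), "cloned"
    return backend.create_from_image(tenant, image_id, name), "created"
```

The design point is that the expensive image download happens at most once per image (to populate the golden volume); every later request for the same image is a cheap array-side clone.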

AFAIK, some storage backends (like Ceph) addressed the issue by implementing support "themselves" in all the OpenStack services: Nova, Glance and Cinder. They now have the ability to optimize each step of the lifecycle of an instance/volume by simply cloning volumes instead of re-downloading a whole image, only to end up in the same backend the original image was stored in.

While this is cool for Ceph, other backends don't have this luxury and we are stuck in this "sorry state".


OpenStack-dev mailing list