Something to be aware of when planning an image transfer library is that
individual drivers might have optimized support for image transfer in
certain cases (especially when transferring between different formats,
like raw to qcow2, etc.).  This builds on what Christopher was saying --
there's actually a reason why we have code for each driver.  While having
a common image copying library would be nice, I think a better way to do
it would be to have some sort of library composed of building blocks, such
that each driver could make use of common functionality while still
tailoring the operation to the quirks of the particular driver.
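
To make that concrete, here's a rough sketch of the kind of thing I mean
(the names are purely illustrative -- this isn't an existing Nova or oslo
API, just one way the building blocks could compose):

    class TransferPipeline(object):
        """Runs a sequence of transfer steps, passing a context along."""

        def __init__(self, steps):
            self.steps = steps

        def run(self, context):
            for step in self.steps:
                context = step(context)
            return context

    def http_fetch(context):
        # Common block: stream the image bytes from the source URI to a
        # local path.  A real implementation would chunk the reads and
        # verify checksums along the way.
        return context

    def qcow2_convert(context):
        # Optional block: raw -> qcow2 conversion, only for drivers that
        # actually want the image in qcow2.
        return context

    # Each driver composes only the blocks it needs:
    libvirt_pipeline = TransferPipeline([http_fetch, qcow2_convert])
    xenapi_pipeline = TransferPipeline([http_fetch])

The shared library would own the common blocks (fetch, convert, verify),
and a driver would compose them -- or swap a single block out -- instead
of re-implementing the whole transfer path.  Something like Jay's proposed
nova.image.api.copy() could then just pick the right pipeline under the
hood.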

Best Regards,
Solly Ross

----- Original Message -----
From: "Christopher Lefelhocz" <christopher.lefel...@rackspace.com>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Sent: Thursday, April 24, 2014 11:17:41 AM
Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process 
of a number of vms via VMThunder

Apologies for coming to this discussion late...

On 4/22/14 6:21 PM, "Jay Pipes" <jaypi...@gmail.com> wrote:

>
>Right, a good solution would allow for some flexibility via multiple
>transfer drivers.

+1. In particular, I don't think this discussion should degenerate into
zero-copy vs. pre-caching.  I see both as possible solutions depending on
deployer/environment needs.

>
>> Jay Pipes has suggested we figure out a blueprint for a separate
>> library dedicated to the data (byte) transfer, which may be put in oslo
>> and used by any project in need (hoping Jay can come in :-)). Huiba,
>> Zhiyan, everyone else, do you think coming up with a blueprint for the
>> data transfer in oslo could work?
>
>Yes, so I believe the most appropriate solution is to create a library
>-- in oslo or a standalone library like taskflow -- that would offer
>simple byte streaming that nova.image could use to expose a neat and
>clean task-based API.
>
>Right now, there is a bunch of random image transfer code spread
>throughout nova.image, and each of the virt drivers seems to have its
>own re-implementation of similar functionality. I propose we clean all
>that up and have nova.image expose an API so that a virt driver could do
>something like this:
>
>from nova.image import api as image_api
>
>...
>
>task = image_api.copy(from_path_or_uri, to_path_or_uri)
># do some other work
>copy_task_result = task.wait()
>
>Within nova.image.api.copy(), we would use the aforementioned transfer
>library to move the image bits from the source to the destination using
>the most appropriate method.

If I understand correctly, we'll create some common library around this.
It would be good to understand the details a bit better.  I've thought a
bit about this issue.  The one area where I get stuck is providing a
common set of downloads that works effectively across drivers.  Part of
the reason there is a bunch of random image transfer code is historical,
but also because performance was already a problem.  Examples include
transferring to compute first and then copying to dom0 (causing
performance issues), the need in some drivers to download the image
completely in order to validate it before putting it in place, etc.

It may be easy to say we'll push most of this to the dom0, but I know that
for Xen our python stack is somewhat limited, so that may be an issue.

By the way, we've been working on proposing a simpler image pre-caching
system/strategy.  It focuses specifically on the image caching portion of
this discussion.  For those interested, see the nova-spec
https://review.openstack.org/#/c/85792.  We'd like to leverage whatever
optimized image download strategy is available.

Christopher 


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
