On 6/1/2018 12:44 AM, Chris Friesen wrote:
> On 05/31/2018 04:14 PM, Curt Moore wrote:
>> The challenge is that the Glance image transfer is _glacially slow_
>> when using the Glance HTTP API (~30 min for a 50GB Windows image; it's
>> Windows, so it's huge with all of the necessary tools installed). If
>> libvirt can instead perform an RBD export on the image using the image
>> download functionality, it is able to download the same image in ~30 sec.
> This seems oddly slow. I just downloaded a 1.6 GB image from glance in
> slightly under 10 seconds. That would map to about 5 minutes for a
> 50GB image.
Agreed.  There's nothing really special about the Glance API setup; we
have multiple load-balanced instances behind HAProxy.  However, our use
case is very sensitive to node spin-up time, so anything we can do to
reduce it helps.  If a VM lands on a compute node where the image isn't
yet locally cached, paying an additional 5 min penalty really hurts.
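
For reference, the fast path is conceptually just an RBD export of the
image Glance already keeps in Ceph.  A rough sketch of what the handler
does (pool name, snapshot name, and chunk size are placeholders for our
environment; Glance's rbd store does snapshot each image as "snap"):

    import rados
    import rbd

    def fetch_image_via_rbd(image_id, dst_path,
                            conf='/etc/ceph/ceph.conf',
                            pool='images', snap='snap'):
        """Stream a Glance-owned RBD image to a local file."""
        cluster = rados.Rados(conffile=conf)
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                # Open the image read-only at Glance's protected snapshot.
                with rbd.Image(ioctx, image_id, snapshot=snap) as image:
                    size = image.size()
                    chunk = 32 * 1024 * 1024  # read in 32 MiB pieces
                    with open(dst_path, 'wb') as dst:
                        offset = 0
                        while offset < size:
                            data = image.read(offset,
                                              min(chunk, size - offset))
                            dst.write(data)
                            offset += len(data)
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()

All reads stay on the Ceph cluster network, which is why it finishes in
seconds instead of being funneled through the Glance HTTP API.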
>> We could look at attaching an additional ephemeral disk to the
>> instance and have cloudbase-init use it as the pagefile, but it
>> appears that if libvirt is using rbd for its images_type, _all_ disks
>> must then come from Ceph; there is no way at present to allow the VM
>> image to run from Ceph and have an ephemeral disk mapped in from
>> node-local storage. Even so, this would still have the effect of
>> "wasting" Ceph IOPS for the VM disk itself, which could be better
>> used for other purposes. Based on what I have explained about our use
>> case, is there a better/different way to accomplish the same goal
>> without using the deprecated image download functionality? If not,
>> can we work to "un-deprecate" the download extension point? Should I
>> work to get the code for this RBD download into the upstream repository?
> Have you considered using compute nodes configured for local storage,
> but then booting from volume with both Cinder and Glance backed by
> Ceph? I *think* there's an optimization there such that the volume
> creation is fast. If it is, in this scenario you could then have a
> local ephemeral/swap disk for your pagefile. You'd still have your VM
> root disks on Ceph, though.
Understood. Booting directly from a Cinder volume would work, but as you
mention, we'd still have the VM root disks in Ceph, burning expensive
Ceph SSD IOPS for no good reason.  I'm trying to get the best of both
worlds: keep the Glance images in Ceph, but keep all VM I/O local to the
compute node.
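
To make the earlier question concrete, what we'd propose upstreaming is
essentially a handler for the old nova.image.download extension point,
modeled on nova's existing file handler.  A minimal sketch (module and
class names are ours, and it reuses the fetch_image_via_rbd helper
sketched above):

    # Sketch of an RBD handler for the (deprecated) nova.image.download
    # extension point.  Assumes the rbd:// direct_url layout
    # rbd://<fsid>/<pool>/<image>/<snap>.
    class RBDTransfer(object):
        def download(self, context, url_parts, dst_path, metadata,
                     **kwargs):
            # url_parts is a urlparse result for the image's direct_url;
            # netloc carries the cluster fsid, path the pool/image/snap.
            pool, image_id, snap = url_parts.path.lstrip('/').split('/')
            fetch_image_via_rbd(image_id, dst_path, pool=pool, snap=snap)

    # Module-level hooks nova's download-module loader expects.
    def get_download_handler(**kwargs):
        return RBDTransfer()

    def get_schemes():
        return ['rbd']

It gets registered under the nova.image.download.modules entry-point
namespace in setup.cfg, with rbd added to allowed_direct_url_schemes in
nova.conf, if memory serves.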

-Curt
