Hello Yongquan Fu,

My thoughts:
1. Nova already supports an image-caching mechanism. It caches the image on any compute host that has previously provisioned a VM from it, so the next provisioning (booting the same image) does not need to transfer it again, unless the cache manager has cleaned it up.

2. P2P transferring and prefetching are still based on a copy mechanism. IMHO, a zero-copy approach is better, even though transferring/prefetching can be optimized by such techniques. (I have not checked the "on-demand transferring" of VMThunder, but it is a kind of transferring as well, at least going by its literal meaning.)

And btw, IMO, there are two ways we can follow the zero-copy idea:

a. When Nova and Glance use the same backend storage, we could use the storage's own CoW/snapshot facility to prepare the VM disk, instead of copying/transferring image bits (over HTTP/network or via local copy).

b. Without such "unified" storage, we could attach a volume/LUN from the backend storage to the compute node as a base image, then do the CoW/snapshot on it to prepare the root/ephemeral disk of the VM. This is much like boot-from-volume, except that the CoW/snapshot happens on the Nova side instead of the Cinder/storage side.

For option (a), we have already made some progress:
https://blueprints.launchpad.net/nova/+spec/image-multiple-location
https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler
https://blueprints.launchpad.net/nova/+spec/vmware-clone-image-handler

For approach (b), my experience from a previous, similar cloud deployment (not OpenStack) was: with 2 PC-server storage nodes (plain *local SAS disks*, without any storage backend), 2-way/multi-path iSCSI, and 1G network bandwidth, we could provision 500 VMs in a minute.

As for the VMThunder topic, I think it sounds like a good idea; IMO P2P and prefetching are valuable optimizations for image transferring.
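To make option (b) concrete, here is a minimal, hypothetical sketch of the compute-side CoW step: once the base-image LUN is attached (e.g. at /dev/sdb — an illustrative path), a qcow2 overlay backed by it can serve as the VM's root disk. The helper name and the paths below are my own illustrations, not real Nova internals:

```python
# Hypothetical illustration of option (b): after attaching the base-image
# LUN to the compute node, build the qemu-img command that creates a
# copy-on-write qcow2 overlay on top of it, so preparing the VM's root
# disk needs no full image copy. Names and paths are illustrative only.

def cow_overlay_cmd(base_device, overlay_path):
    """Return the qemu-img argv that creates a qcow2 overlay whose
    backing file is the attached raw base device; only the VM's own
    writes consume local space."""
    return ["qemu-img", "create",
            "-f", "qcow2",       # format of the new overlay
            "-b", base_device,   # backing file: the attached LUN
            "-F", "raw",         # format of the backing device
            overlay_path]

print(" ".join(cow_overlay_cmd("/dev/sdb",
                               "/var/lib/nova/instances/i-0001/disk")))
# qemu-img create -f qcow2 -b /dev/sdb -F raw /var/lib/nova/instances/i-0001/disk
```

Each additional VM booted from the same base image only needs another cheap overlay, which is why this scales so much better than copying the full image per instance.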
zhiyan

On Wed, Apr 16, 2014 at 9:14 PM, yongquan Fu <quanyo...@gmail.com> wrote:
> Dear all,
>
> We would like to present an extension to the VM-booting functionality of
> Nova for when a number of homogeneous VMs need to be launched at the same
> time.
>
> The motivation for our work is to increase the speed of provisioning VMs
> for large-scale scientific computing and big data processing. In those
> cases, we often need to boot tens or hundreds of virtual machine instances
> at the same time.
>
> Currently, under OpenStack, we found that creating a large number of
> virtual machine instances is very time-consuming. The reason is that the
> booting procedure is a centralized operation that involves performance
> bottlenecks. Before a virtual machine can actually be started, OpenStack
> either copies the image file (Swift) or attaches the image volume (Cinder)
> from the storage server to the compute node via the network. Booting a
> single VM requires reading a large amount of image data from the image
> storage server, so creating a large number of virtual machine instances
> places a significant workload on the servers. The servers become quite
> busy, even unavailable, during the deployment phase, and it can take a
> very long time before the whole virtual machine cluster is usable.
>
> Our extension is based on our work on VMThunder, a novel mechanism for
> accelerating the deployment of large numbers of virtual machine instances.
> It is written in Python and can be integrated with OpenStack easily.
> VMThunder addresses the problem described above with the following
> improvements: on-demand transferring (network-attached storage),
> compute-node caching, P2P transferring, and prefetching. VMThunder is a
> scalable and cost-effective accelerator for bulk provisioning of virtual
> machines.
>
> We hope to receive your feedback. Any comments are extremely welcome.
> Thanks in advance.
> PS:
>
> VMThunder enhanced nova blueprint:
> https://blueprints.launchpad.net/nova/+spec/thunderboost
> VMThunder standalone project: https://launchpad.net/vmthunder
> VMThunder prototype: https://github.com/lihuiba/VMThunder
> VMThunder etherpad: https://etherpad.openstack.org/p/vmThunder
> VMThunder portal: http://www.vmthunder.org/
> VMThunder paper: http://www.computer.org/csdl/trans/td/preprint/06719385.pdf
>
> Regards
>
> vmThunder development group
> PDL
> National University of Defense Technology
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev