Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-30 Thread lihuiba
Sorry for being late. I was busy with something else these days.


It'll be great to have a dedicated image transferring library that provides 
both pre-copying
and zero-copying semantics, and we are glad to have VMThunder integrated into it. 
Before
that library is done, however, we plan to propose a blueprint that solely 
focuses on integrating
VMThunder into OpenStack, as a plug-in of course. Then we can move VMThunder 
into
the newly created transferring library through a refactoring process.


Does this plan make sense?






BTW, I won't be able to go to the summit. It's too far away. Pity.




At 2014-04-28 11:01:13,Sheng Bo Hou sb...@cn.ibm.com wrote:
Jay, Huiba, Chris, Solly, Zhiyan, and everybody else,

I am so excited that two of the proposals, Image Upload 
Plugin (http://summit.openstack.org/cfp/details/353) and Data Transfer Service 
Plugin (http://summit.openstack.org/cfp/details/352), have been merged together 
and scheduled for the coming design summit. If you show up in Atlanta, please 
come to this 
session (http://junodesignsummit.sched.org/event/c00119362c07e4cb203d1c4053add187)
and join our discussion on Wednesday, May 14, 11:50am - 12:30pm.

I will propose a common image transfer library for all the OpenStack projects 
to upload and download images. If it is approved, then with this library, 
Huiba, you will be free to implement whatever transfer protocols you like.
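For illustration, here is a minimal sketch of what the registration surface of
such a pluggable library might look like, so that each protocol (HTTP, FTP,
BitTorrent, P2P, ...) plugs in as its own driver. All names below are
hypothetical and are not taken from any existing OpenStack code.

# Hypothetical sketch only: a transfer library keyed by URI scheme, so new
# protocols can be added without touching the callers.

class TransferDriver(object):
    """Moves image bits for one URI scheme (http, ftp, bittorrent, ...)."""

    def download(self, uri, dest_path):
        """Fetch the image at `uri` into the local file `dest_path`."""
        raise NotImplementedError()

    def upload(self, src_path, uri):
        """Push the local file `src_path` to the remote `uri`."""
        raise NotImplementedError()

_DRIVERS = {}

def register_driver(scheme, driver):
    """Let a plug-in register its driver for a URI scheme, e.g. 'ftp'."""
    _DRIVERS[scheme] = driver

def driver_for(uri):
    """Pick the driver whose scheme matches the given URI."""
    scheme = uri.split('://', 1)[0]
    try:
        return _DRIVERS[scheme]
    except KeyError:
        raise ValueError('no transfer driver registered for %r' % scheme)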

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN    E-mail: sb...@cn.ibm.com
Address: 3F Ring, Building 28, Zhongguancun Software Park, 8 Dongbeiwang West 
Road, Haidian District, Beijing, P.R.C. 100193



From: Sheng Bo Hou/China/IBM@IBMCN
Date: 2014/04/27 22:33
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder



I have done a little test for the image download and upload. I created an API 
for image access, containing copyFrom and sendTo. I moved the image 
download and upload code from XenAPI into the HTTP implementation with some 
modifications, and the code worked for libvirt as well.
copyFrom downloads the image and returns the image data, and different 
hypervisors can choose to save it in a file or import it into the datastore; 
sendTo uploads the image, with the image data passed in as a 
parameter.
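
(A purely illustrative sketch of the copyFrom/sendTo shape described above;
the actual code is in the review cited later in this message, and none of the
signatures here are taken from it.)

# Illustrative only; not the code from the review.

class ImageTransfer(object):
    """Protocol-specific image access; HTTP is one implementation."""

    def copyFrom(self, context, image_id):
        """Download the image and return the image data.

        The hypervisor driver decides what to do with the returned bits:
        save them to a file (libvirt-style drivers) or import them into
        the datastore (vmwareapi).
        """
        raise NotImplementedError()

    def sendTo(self, context, image_id, image_data):
        """Upload the image; the image data is passed in as a parameter."""
        raise NotImplementedError()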

I also did an investigation about how each hypervisor is doing the image upload 
and download.

For the download:
libvirt, hyper-v and baremetal use image_service.download to download 
the image and save it into a file.
vmwareapi uses image_service.download to download the image and import 
it into the datastore.
XenAPI uses image_service.download to download VHD images.

For the upload:
They use image_service.upload to upload the image.

I think we can conclude that it is possible to have a common image transfer 
library with different implementations for different protocols.
This is a small code demo for the library: 
https://review.openstack.org/#/c/90601/ (Jay, is it close to the library you 
mentioned?). I just replaced the upload and download part with the HTTP 
implementation for the imageapi and it worked fine.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN    E-mail: sb...@cn.ibm.com
Address: 3F Ring, Building 28, Zhongguancun Software Park, 8 Dongbeiwang West 
Road, Haidian District, Beijing, P.R.C. 100193



From: Solly Ross sr...@redhat.com
Date: 2014/04/25 01:46
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder




Something to be aware of when planning an image transfer library is that 
individual drivers
might have optimized support for image transfer in certain cases (especially 
when dealing
with transferring between different formats, like raw to qcow2, etc).  This 
builds on what
Christopher was saying -- there's actually a reason why we have code for each 
driver.  While
having a common image copying library would be nice, I think a better way to do 
it would be to
have some sort of 

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-26 Thread lihuiba
Hmm, I totally see the value of doing this. Not sure that there could be
the same kinds of liveness guarantees with non-shared-storage, but I
am certainly happy to see a proof of concept in this area! :)
By liveness, if you mean the downtime during migration, our current results
show that liveness is guaranteed with non-shared storage. Some preliminary
work has been published at the SOSE 2014 conference and can be found at
http://www.vmthunder.org/dlsm_sose2014_final.pdf. We have since made
some improvements to it, and the work is still under development. We
are planning to write a new paper and submit it to another conference
this summer.




 how about zero-copying?

It would be an implementation detail within nova.image.api.copy()
function (and the aforementioned image bits mover library) :)

IMHO, (pre-)copying and zero-copying are different in nature, and it's
not necessary to mask such a difference behind a single interface. With two
sets of interfaces, programmers (users of the copying service) will be
reminded of the cost of (pre-)copying, or the risk of runtime network 
congestion with zero-copying.
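
As a hypothetical sketch, keeping the two semantics as separate entry points
could look like this (both function names are made up for illustration):

# Illustrative sketch only; these entry points do not exist anywhere yet.

def precopy(from_uri, to_path):
    """Transfer the whole image up front.

    Costs time and bandwidth at provisioning time, but all later reads
    are local.
    """
    raise NotImplementedError()

def zerocopy_attach(from_uri, to_device):
    """Expose the remote image locally without copying it first.

    Provisioning is near-instant, but runtime reads may go over the
    network, so the caller knowingly accepts the congestion risk.
    """
    raise NotImplementedError()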



At 2014-04-23 23:02:29,Jay Pipes jaypi...@gmail.com wrote:
On Wed, 2014-04-23 at 13:56 +0800, lihuiba wrote:
 For live migration, we use shared storage so I don't think it's quite
 the same as getting/putting image bits from/to arbitrary locations.
 With a good zero-copy transfer lib, live migration support can be 
 extended to non-shared storage, or cross-datacenter. It's a kind of
 value.

Hmm, I totally see the value of doing this. Not sure that there could be
the same kinds of liveness guarantees with non-shared-storage, but I
am certainly happy to see a proof of concept in this area! :)

 task = image_api.copy(from_path_or_uri, to_path_or_uri)
 # do some other work
 copy_task_result = task.wait()
 +1  looks cool!
 how about zero-copying?

It would be an implementation detail within nova.image.api.copy()
function (and the aforementioned image bits mover library) :)

The key here is to leak as little implementation detail out of the
nova.image.api module

Best,
-jay

 At 2014-04-23 07:21:27,Jay Pipes jaypi...@gmail.com wrote:
 Hi Vincent, Zhi, Huiba, sorry for delayed response. See comments inline.
 
 On Tue, 2014-04-22 at 10:59 +0800, Sheng Bo Hou wrote:
  I actually support the idea Huiba has proposed, and I am thinking of
  how to optimize the large data transfer(for example, 100G in a short
  time) as well. 
  I registered two blueprints in nova-specs, one is for an image upload
  plug-in to upload the image to
  glance(https://review.openstack.org/#/c/84671/), the other is a data
  transfer plug-in(https://review.openstack.org/#/c/87207/) for data
  migration among nova nodes. I would like to see other transfer
  protocols, like FTP, bitTorrent, p2p, etc, implemented for data
  transfer in OpenStack besides HTTP. 
  
  Data transfer may have many use cases. I summarize them into two
  categories. Please feel free to comment on them. 
  1. The machines are located in one network, e.g. one domain, one
  cluster, etc. The characteristic is the machines can access each other
  directly via the IP addresses(VPN is beyond consideration). In this
  case, data can be transferred via iSCSI, NFS, and definitive zero-copy
  as Zhiyan mentioned. 
  2. The machines are located in different networks, e.g. two data
  centers, two firewalls, etc. The characteristic is the machines can
  not access each other directly via the IP addresses(VPN is beyond
  consideration). The machines are isolated, so they can not be
  connected with iSCSI, NFS, etc. In this case, data have to go via the
  protocols, like HTTP, FTP, p2p, etc. I am not sure whether zero-copy
  can work for this case. Zhiyan, please help me with this doubt. 
  
  I guess for data transfer, including image downloading, image
  uploading, live migration, etc, OpenStack needs to take into account
  the above two categories for data transfer.
 
 For live migration, we use shared storage so I don't think it's quite
 the same as getting/putting image bits from/to arbitrary locations.
 
   It is hard to say that one protocol is better than another, or that one
  approach prevails over another (BitTorrent is very cool, but if there is
  only one source and only one target, it would not be much faster than
  a direct FTP). The key is the use
  case(FYI:http://amigotechnotes.wordpress.com/2013/12/23/file-transmission-with-different-sharing-solution-on-nas/).
 
 Right, a good solution would allow for some flexibility via multiple
 transfer drivers.
 
  Jay Pipes has suggested we figure out a blueprint for a separate
  library dedicated to the data(byte) transfer, which may be put in oslo
  and used by any projects in need (Hoping Jay can come in:-)). Huiba,
  Zhiyan, everyone else, do you think coming up with a blueprint about
  data transfer in oslo can work?
 
 Yes, so I believe the most appropriate solution is to create a library
 -- in oslo or a standalone library like taskflow

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-23 Thread lihuiba
For live migration, we use shared storage so I don't think it's quite
the same as getting/putting image bits from/to arbitrary locations.

With a good zero-copy transfer lib, live migration support can be extended 
to non-shared storage, or cross-datacenter. It's a kind of value.



task = image_api.copy(from_path_or_uri, to_path_or_uri)
# do some other work
copy_task_result = task.wait()

+1  looks cool!
how about zero-copying?






At 2014-04-23 07:21:27,Jay Pipes jaypi...@gmail.com wrote:
Hi Vincent, Zhi, Huiba, sorry for delayed response. See comments inline.

On Tue, 2014-04-22 at 10:59 +0800, Sheng Bo Hou wrote:
 I actually support the idea Huiba has proposed, and I am thinking of
 how to optimize the large data transfer(for example, 100G in a short
 time) as well. 
 I registered two blueprints in nova-specs, one is for an image upload
 plug-in to upload the image to
 glance(https://review.openstack.org/#/c/84671/), the other is a data
 transfer plug-in(https://review.openstack.org/#/c/87207/) for data
 migration among nova nodes. I would like to see other transfer
 protocols, like FTP, bitTorrent, p2p, etc, implemented for data
 transfer in OpenStack besides HTTP. 
 
 Data transfer may have many use cases. I summarize them into two
 categories. Please feel free to comment on them. 
 1. The machines are located in one network, e.g. one domain, one
 cluster, etc. The characteristic is the machines can access each other
 directly via the IP addresses(VPN is beyond consideration). In this
 case, data can be transferred via iSCSI, NFS, and definitive zero-copy
 as Zhiyan mentioned. 
 2. The machines are located in different networks, e.g. two data
 centers, two firewalls, etc. The characteristic is the machines can
 not access each other directly via the IP addresses(VPN is beyond
 consideration). The machines are isolated, so they can not be
 connected with iSCSI, NFS, etc. In this case, data have to go via the
 protocols, like HTTP, FTP, p2p, etc. I am not sure whether zero-copy
 can work for this case. Zhiyan, please help me with this doubt. 
 
 I guess for data transfer, including image downloading, image
 uploading, live migration, etc, OpenStack needs to take into account
 the above two categories for data transfer.

For live migration, we use shared storage so I don't think it's quite
the same as getting/putting image bits from/to arbitrary locations.

  It is hard to say that one protocol is better than another, or that one
 approach prevails over another (BitTorrent is very cool, but if there is
 only one source and only one target, it would not be much faster than
 a direct FTP). The key is the use
 case(FYI:http://amigotechnotes.wordpress.com/2013/12/23/file-transmission-with-different-sharing-solution-on-nas/).

Right, a good solution would allow for some flexibility via multiple
transfer drivers.

 Jay Pipes has suggested we figure out a blueprint for a separate
 library dedicated to the data(byte) transfer, which may be put in oslo
 and used by any projects in need (Hoping Jay can come in:-)). Huiba,
 Zhiyan, everyone else, do you think coming up with a blueprint about
 data transfer in oslo can work?

Yes, so I believe the most appropriate solution is to create a library
-- in oslo or a standalone library like taskflow -- that would offer a
simple byte streaming library that could be used by nova.image to expose
a neat and clean task-based API.

Right now, there is a bunch of random image transfer code spread
throughout nova.image and in each of the virt drivers there seems to be
different re-implementations of similar functionality. I propose we
clean all that up and have nova.image expose an API so that a virt
driver could do something like this:

from nova.image import api as image_api

...

task = image_api.copy(from_path_or_uri, to_path_or_uri)
# do some other work
copy_task_result = task.wait()

Within nova.image.api.copy(), we would use the aforementioned transfer
library to move the image bits from the source to the destination using
the most appropriate method.
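
A rough sketch (not Nova code) of how copy() could wrap the transfer library
in such a task object; the driver selection below is stubbed to a plain local
file copy, and all of these names are illustrative:

# Rough sketch only, not Nova code: copy() hands the byte moving to a
# transfer driver and wraps it in a task with wait() semantics.
import shutil
import threading

def _pick_driver(from_path_or_uri, to_path_or_uri):
    # Stand-in for the transfer library's "most appropriate method" choice;
    # here we only handle plain local paths with a file copy.
    return shutil.copyfile

class CopyTask(object):
    """Minimal task handle: start the transfer, let the caller wait()."""

    def __init__(self, func, args):
        self._result = None
        self._thread = threading.Thread(target=self._run, args=(func, args))
        self._thread.start()

    def _run(self, func, args):
        self._result = func(*args)

    def wait(self):
        self._thread.join()
        return self._result

def copy(from_path_or_uri, to_path_or_uri):
    driver = _pick_driver(from_path_or_uri, to_path_or_uri)
    return CopyTask(driver, (from_path_or_uri, to_path_or_uri))

A virt driver would then just call task = copy(src, dst), keep doing other
work, and call task.wait() when it needs the result, exactly as in the snippet
above.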

Best,
-jay




Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
 transferring (network attached storage), compute node caching, P2P
 transferring and prefetching. VMThunder is a scalable and cost-effective
 accelerator for bulk provisioning of virtual machines.



   We hope to receive your feedback. Any comments are extremely welcome.
 Thanks in advance.



 PS:



 VMThunder enhanced nova blueprint:
 https://blueprints.launchpad.net/nova/+spec/thunderboost

  VMThunder standalone project: https://launchpad.net/vmthunder;

  VMThunder prototype: https://github.com/lihuiba/VMThunder

  VMThunder etherpad: https://etherpad.openstack.org/p/vmThunder

  VMThunder portal: http://www.vmthunder.org/

 VMThunder paper: http://www.computer.org/csdl/trans/td/preprint/06719385.pdf



   Regards



   vmThunder development group

   PDL

   National University of Defense Technology











-- 

Yongquan Fu
PhD, Assistant Professor,
National Key Laboratory for Parallel and Distributed
Processing, College of Computer Science, National University of Defense
Technology, Changsha, Hunan Province, P.R. China
410073


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
IMO we'd better to use backend storage optimized approach to access
remote image from compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short of stability under heavy I/O
workload in product environment, it could causes either VM filesystem
to be marked as readonly or VM kernel panic.

Yes, in this situation, the problem lies in the backend storage, so no other 
protocol will perform better. However, P2P transferring will greatly reduce 
workload on the backend storage, so as to increase responsiveness.


As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.
Nova's image caching is file-level, while VMThunder's is block-level. And
VMThunder is designed to work in conjunction with Cinder, not Glance. VMThunder
currently uses Facebook's flashcache to realize caching; dm-cache and
bcache are also options in the future.



I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually for the area I'd like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
on-demand downloading image bits by zero-copy approach, on the other
hand we can prevent to reading data from remote image every time by
CoR.

Yes, on-demand transferring is what you mean by zero-copy, and caching
is something close to CoR. In fact, we are working on a kernel module called
foolcache that realizes a true CoR. See https://github.com/lihuiba/dm-foolcache.
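
For illustration only, here is a toy user-space version of the CoR idea
(dm-foolcache does this at the device-mapper level; everything below,
including the names and block size, is made up for explanation):

# Toy copy-on-read: serve a block from the local cache if we already have
# it, otherwise fetch it from the remote image once and keep it.
BLOCK_SIZE = 64 * 1024

class CopyOnReadImage(object):
    def __init__(self, remote_read, cache_path, image_size):
        # remote_read(offset, length) -> bytes is the on-demand (zero-copy
        # style) fetch from the remote image.
        self.remote_read = remote_read
        self.cached = set()  # block numbers already copied locally
        self.cache = open(cache_path, 'wb+')
        self.cache.truncate(image_size)

    def read_block(self, block_no):
        offset = block_no * BLOCK_SIZE
        if block_no not in self.cached:
            # First access: pull the block over the network and keep it,
            # so later reads never touch the remote image again.
            data = self.remote_read(offset, BLOCK_SIZE)
            self.cache.seek(offset)
            self.cache.write(data)
            self.cached.add(block_no)
        self.cache.seek(offset)
        return self.cache.read(BLOCK_SIZE)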







National Key Laboratory for Parallel and Distributed
Processing, College of Computer Science, National University of Defense
Technology, Changsha, Hunan Province, P.R. China
410073

At 2014-04-17 17:11:48,Zhi Yan Liu lzy@gmail.com wrote:
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba magazine.lihu...@163.com wrote:
IMHO, zero-copy approach is better
 VMThunder's on-demand transferring is the same thing as your zero-copy
 approach.
 VMThunder uses iSCSI as the transferring protocol, which is option #b of
 yours.


IMO we'd better to use backend storage optimized approach to access
remote image from compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short of stability under heavy I/O
workload in product environment, it could causes either VM filesystem
to be marked as readonly or VM kernel panic.


Under #b approach, my former experience from our previous similar
Cloud deployment (not OpenStack) was that: under 2 PC server storage
nodes (general *local SAS disk*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
VMs in a minute.
 suppose booting one instance requires reading 300MB of data, so 500 ones
 require 150GB.  Each of the storage server needs to send data at a rate of
 150GB/2/60 = 1.25GB/s on average. This is absolutely a heavy burden even
 for high-end storage appliances. In production  systems, this request
 (booting
 500 VMs in one shot) will significantly disturb  other running instances
 accessing the same storage nodes.

 VMThunder eliminates this problem by P2P transferring and on-compute-node
 caching. Even a pc server with one 1gb NIC (this is a true pc server!) can
 boot
 500 VMs in a minute with ease. For the first time, VMThunder makes bulk
 provisioning of VMs practical for production cloud systems. This is the
 essential
 value of VMThunder.


As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.

I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually for the area I'd like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
on-demand downloading image bits by zero-copy approach, on the other
hand we can prevent to reading data from remote image every time by
CoR.

zhiyan




 ===
 From: Zhi Yan Liu lzy@gmail.com
 Date: 2014-04-17 0:02 GMT+08:00
 Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting
 process of a number of vms via VMThunder
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org



 Hello Yongquan Fu,

 My thoughts:

 1. Currently Nova has already supported image caching mechanism. It
 could caches the image on compute host which VM had provisioning from
 it before, and next provisioning (boot same image) doesn't need to
 transfer it again only if cache-manger clear it up.
 2. P2P transferring and prefacing is something that still based on
 copy mechanism, IMHO, zero-copy approach is better, even
 transferring/prefacing could be optimized by such approach. (I have
 not check on-demand transferring of VMThunder, but it is a kind of
 transferring as well, at last from its literal meaning).
 And btw, IMO, we have two ways can go follow zero-copy idea:
 a. when Nova

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
glance-bittorrent-delivery and VMThunder have similar goals: fast provisioning
of large numbers of VMs. They share some ideas like P2P transferring, but they 
go with different techniques.


VMThunder only downloads the data blocks that are really used by VMs, so as to 
reduce the bandwidth and time required to provision. We have experiments showing
that only a few hundred MB of data is needed to boot a mainstream OS like
CentOS 6.x, Ubuntu 12.04, Windows 2008, etc., while the images are GBs or
even tens of GBs large.



National Key Laboratory for Parallel and Distributed
Processing, College of Computer Science, National University of Defense
Technology, Changsha, Hunan Province, P.R. China
410073



At 2014-04-17 19:06:27, Jesse Pretorius jesse.pretor...@gmail.com wrote:



This whole discussion reminded me of this:


https://blueprints.launchpad.net/glance/+spec/glance-bittorrent-delivery
http://tropicaldevel.wordpress.com/2013/01/11/an-image-transfers-service-for-openstack/


The general idea was that Glance would be able to serve images through 
torrents, enabling the capability for compute hosts to participate in image 
delivery. Well, the second part was where I thought it was going - I'm not sure 
if that was the intention.


It didn't seem to go anywhere, but I thought it was a nifty idea.


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
It's not 100% true, in my case at last. We fixed this problem by
network interface driver, it causes kernel panic and readonly issues
under heavy networking workload actually.
Network traffic control could help. The point is to ensure no instance
is starved to death. Traffic control can be done with tc.
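
For example (illustrative only: the device name and numbers are placeholders,
and a real setup would put transfer traffic into its own class rather than
capping the whole device), a simple token bucket cap via tc could look like:

# Illustrative sketch: cap bandwidth on a device with a tbf qdisc so image
# transfers cannot starve running instances.
import subprocess

def cap_bandwidth(dev='eth0', rate='800mbit'):
    # Leave headroom below line rate for instance traffic.
    subprocess.check_call([
        'tc', 'qdisc', 'add', 'dev', dev, 'root',
        'tbf', 'rate', rate, 'burst', '256kb', 'latency', '50ms',
    ])

def remove_cap(dev='eth0'):
    subprocess.check_call(['tc', 'qdisc', 'del', 'dev', dev, 'root'])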





btw, we are doing some works to make Glance to integrate Cinder as a
unified block storage backend.
That sounds interesting. Are there some more materials?



At 2014-04-18 06:05:23,Zhi Yan Liu lzy@gmail.com wrote:
Replied as inline comments.

On Thu, Apr 17, 2014 at 9:33 PM, lihuiba magazine.lihu...@163.com wrote:
IMO we'd better to use backend storage optimized approach to access
remote image from compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short of stability under heavy I/O
workload in product environment, it could causes either VM filesystem
to be marked as readonly or VM kernel panic.

 Yes, in this situation, the problem lies in the backend storage, so no other

 protocol will perform better. However, P2P transferring will greatly reduce

 workload on the backend storage, so as to increase responsiveness.


It's not 100% true, in my case at last. We fixed this problem by
network interface driver, it causes kernel panic and readonly issues
under heavy networking workload actually.



As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.

 Nova's image caching is file level, while VMThunder's is block-level. And

 VMThunder is for working in conjunction with Cinder, not Glance. VMThunder

 currently uses facebook's flashcache to realize caching, and dm-cache,

 bcache are also options in the future.


Hm if you say bcache, dm-cache and flashcache, I'm just thinking if
them could be leveraged by operation/best-practice level.

btw, we are doing some works to make Glance to integrate Cinder as a
unified block storage backend.


I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually for the area I'd like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
on-demand downloading image bits by zero-copy approach, on the other
hand we can prevent to reading data from remote image every time by
CoR.

 Yes, on-demand transferring is what you mean by zero-copy, and caching
 is something close to CoR. In fact, we are working on a kernel module called
 foolcache that realize a true CoR. See
 https://github.com/lihuiba/dm-foolcache.


Yup. And it's really interesting to me, will take a look, thanks for sharing.




 National Key Laboratory for Parallel and Distributed
 Processing, College of Computer Science, National University of Defense
 Technology, Changsha, Hunan Province, P.R. China
 410073


 At 2014-04-17 17:11:48,Zhi Yan Liu lzy@gmail.com wrote:
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba magazine.lihu...@163.com wrote:
IMHO, zero-copy approach is better
 VMThunder's on-demand transferring is the same thing as your zero-copy
 approach.
 VMThunder uses iSCSI as the transferring protocol, which is option #b
 of
 yours.


IMO we'd better to use backend storage optimized approach to access
remote image from compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short of stability under heavy I/O
workload in product environment, it could causes either VM filesystem
to be marked as readonly or VM kernel panic.


Under #b approach, my former experience from our previous similar
Cloud deployment (not OpenStack) was that: under 2 PC server storage
nodes (general *local SAS disk*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
VMs in a minute.
 suppose booting one instance requires reading 300MB of data, so 500 ones
 require 150GB.  Each of the storage server needs to send data at a rate
 of
 150GB/2/60 = 1.25GB/s on average. This is absolutely a heavy burden even
 for high-end storage appliances. In production  systems, this request
 (booting
 500 VMs in one shot) will significantly disturb  other running instances
 accessing the same storage nodes.


btw, I believe the case/numbers is not true as well, since remote
image bits could be loaded on-demand instead of load them all on boot
stage.

zhiyan

 VMThunder eliminates this problem by P2P transferring and on-compute-node
 caching. Even a pc server with one 1gb NIC (this is a true pc server!)
 can
 boot
 500 VMs in a minute with ease. For the first time, VMThunder makes bulk
 provisioning of VMs practical for production cloud systems. This is the
 essential
 value of VMThunder.


As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.

I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
 - _some_ libvirt drivers already have image caching. I am unsure if
all of them do, I'd have to check.
 '$instances_path/_base' is used to cache images downloaded from glance, at 
file level, while VMThunder employs fine-grained block-level caching for 
volumes served by cinder.

 - we already have blueprints for better support of glance multiple
image locations, it might be better to extend that work than to do
something completely separate.
Is there something like multiple volume locations for cinder? We are
considering supporting something like that.

 - the xen driver already does bittorrent image delivery IIRC, you
could take a look at how that do that.
We are trying to do bittorrent image delivery for libvirt, too.

 - pre-caching images has been proposed for libvirt for a long time,
but never implemented. I think that's definitely something of interest
to deployers.

What is pre-caching? Deploying images to compute nodes before they 
are used?



Huiba Li
National Key Laboratory for Parallel and Distributed
Processing, College of Computer Science, National University of Defense
Technology, Changsha, Hunan Province, P.R. China
410073





At 2014-04-18 05:19:23,Michael Still mi...@stillhq.com wrote:
If you'd like to have a go at implementing this in nova's Juno
release, then you need to create a new-style blueprint in the
nova-specs repository. You can find more details about that process at
https://wiki.openstack.org/wiki/Blueprints#Nova

Some initial thoughts though, some of which have already been brought up:

 - _some_ libvirt drivers already have image caching. I am unsure if
all of them do, I'd have to check.

 - we already have blueprints for better support of glance multiple
image locations, it might be better to extend that work than to do
something completely separate.

 - the xen driver already does bittorrent image delivery IIRC, you
could take a look at how that do that.

 - pre-caching images has been proposed for libvirt for a long time,
but never implemented. I think that's definitely something of interest
to deployers.

Cheers,
Michael

On Wed, Apr 16, 2014 at 11:14 PM, yongquan Fu quanyo...@gmail.com wrote:

 Dear all,



  We would like to present an extension to the vm-booting functionality of
 Nova when a number of homogeneous vms need to be launched at the same time.



 The motivation for our work is to increase the speed of provisioning VMs for
 large-scale scientific computing and big data processing. In such cases, we
 often need to boot tens or hundreds of virtual machine instances at the same
 time.


 Currently, under OpenStack, we found that creating a large number of
 virtual machine instances is very time-consuming. The reason is that the
 booting procedure is a centralized operation that involves performance
 bottlenecks. Before a virtual machine can actually be started, OpenStack
 either copies the image file (swift) or attaches the image volume (cinder)
 from the storage server to the compute node via the network. Booting a single
 VM needs to read a large amount of image data from the image storage server,
 so creating a large number of virtual machine instances causes a significant
 workload on the servers. The servers become quite busy, even unavailable,
 during the deployment phase, and it takes a very long time before the whole
 virtual machine cluster is usable.



   Our extension is based on our work on vmThunder, a novel mechanism for
 accelerating the deployment of large numbers of virtual machine instances. It
 is written in Python and can be integrated with OpenStack easily. VMThunder
 addresses the problem described above with the following improvements:
 on-demand transferring (network attached storage), compute node caching, P2P
 transferring and prefetching. VMThunder is a scalable and cost-effective
 accelerator for bulk provisioning of virtual machines.



   We hope to receive your feedback. Any comments are extremely welcome.
 Thanks in advance.



 PS:



 VMThunder enhanced nova blueprint:
 https://blueprints.launchpad.net/nova/+spec/thunderboost

  VMThunder standalone project: https://launchpad.net/vmthunder;

  VMThunder prototype: https://github.com/lihuiba/VMThunder

  VMThunder etherpad: https://etherpad.openstack.org/p/vmThunder

  VMThunder portal: http://www.vmthunder.org/

 VMThunder paper: http://www.computer.org/csdl/trans/td/preprint/06719385.pdf



   Regards



   vmThunder development group

   PDL

   National University of Defense Technology






-- 
Rackspace Australia
