On 15:06 Aug 11, Kuvaja, Erno wrote:
> > -----Original Message-----
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
<snip>
> > Having the image cache local to the compute nodes themselves gives
> > the best performance overall, and with glance_store, means that
> > glance-api isn't needed at all, and Glance can become just a metadata
> > repository, which would be awesome, IMHO.
>
> You have any figures to back this up at scale? We've heard similar
> claims for quite a while, and as soon as people start to actually look
> into how the environments behave, they quite quickly turn back. As
> you're not the first one, I'd like to make the same request as to
> everyone before: show your data to back this claim up! Until then it
> is, just like you say, an opinion. ;)

Here are the claims I can back up for Cinder doing caching on its own
versus just using Glance, measured with rally using an 8G image:

Creating/deleting 50 volumes w/ Cinder image cache:  324 seconds
Creating/deleting 50 volumes w/o Cinder image cache: 3952 seconds

http://thing.ee/x/cache_results/

Thanks to Patrick East for pulling these results together.

Keep in mind, this is using a block storage backend that is completely
separate from the OpenStack nodes. It's *not* a local-LVM, all-in-one
OpenStack contraption. This is important because even if you have Glance
caching enabled and there is no cache miss, you still have to dd the
bits to the block device, which still goes over the network. Unless
Glance is going to cache on the storage array itself, forget about it.

Glance should be focusing on other issues, rather than trying to make
copying image bits over the network and dd'ing them to a block device
faster.

--
Mike Perez

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
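For anyone wanting to reproduce the cached-path numbers above, the Cinder
image volume cache is enabled per backend in cinder.conf. A minimal sketch
follows; the section name `purestorage-1` and the size/count limits are
placeholder values, and the internal tenant IDs must be filled in with real
project/user IDs from your deployment:

```ini
[DEFAULT]
# The cache creates internal image-volumes owned by a dedicated tenant.
cinder_internal_tenant_project_id = <project-id>
cinder_internal_tenant_user_id = <user-id>

[purestorage-1]
# Turn on the image volume cache for this backend.
image_volume_cache_enabled = True
# Optional limits; oldest entries are evicted when either is exceeded.
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
```

With this enabled, the first create-from-image still pays the full
download/dd cost, but subsequent creates clone the cached image-volume on
the array instead of copying bits over the network again.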
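The rally side of the comparison would look roughly like the task below.
This is a sketch, not the exact task used for the numbers above: the
scenario name `CinderVolumes.create_and_delete_volume` is from rally's
Cinder plugin, but the image name and sizes here are made-up placeholders:

```json
{
  "CinderVolumes.create_and_delete_volume": [
    {
      "args": {
        "size": 10,
        "image": {"name": "my-8g-image"}
      },
      "runner": {
        "type": "constant",
        "times": 50,
        "concurrency": 1
      }
    }
  ]
}
```

Running the same task once with `image_volume_cache_enabled = True` and
once without it is what produces the two timings being compared.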