Re: [openstack-dev] [Ceph] Is it necessary to flatten Copy-on-write cloning for RBD-backed disks?
Hi Kun,

On 09 Jun 2015, at 05:34, Kun Feng <fengku...@gmail.com> wrote:

> Hi all,
>
> I'm using Ceph as the storage backend for Nova and Glance, and merged the
> rbd-ephemeral-clone patch into Nova. Since VM disks are copy-on-write
> clones of an image, I have some concerns:
>
> 1. With hundreds of VM disks based on one base image, are there any
> performance problems from the I/O load being concentrated on that
> particular base image?

This may be an issue, but as Clint mentioned, reads will be served from the OSDs' memory anyway, so this should be fine in practice.

> 2. Is it possible that the data of the base image gets damaged, or that a
> PG/OSD containing the base image's data goes out of service, causing all
> the VMs based on that base image to malfunction?

That assumption is correct: if the parent gets a corrupted block that hasn't yet been copied to the clone, the clone will read that corrupted block from the parent.

> If so, flattening the copy-on-write clones may help. Is it necessary to
> do it?

People have expressed that concern, but as far as I know no one has ever seen it happen, so it's really up to you whether to flatten the clones. There is a patch, which needs to be reworked, that will allow Nova to flatten all clones right after their creation. Hopefully this will land in Liberty.

Cheers.
Sébastien Han
Senior Cloud Architect

Always give 100%. Unless you're giving blood.

Mail: s...@redhat.com
Address: 11 bis, rue Roquépine - 75008 Paris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
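For reference, a minimal sketch of what flattening a clone by hand looks like with the `rbd` CLI — the pool and image names below are hypothetical placeholders, and this assumes the usual layering setup (a clone backed by a protected snapshot of the Glance image), not the exact objects Nova creates:

```shell
# Placeholders: "images" and "vms" pools, image/instance ids are made up.
# A copy-on-write clone is backed by a protected snapshot of the parent:
rbd snap create images/base-image@snap
rbd snap protect images/base-image@snap
rbd clone images/base-image@snap vms/instance-disk

# Flattening copies all remaining parent data into the clone and detaches
# it from the parent, trading extra space for independence from the base:
rbd flatten vms/instance-disk

# After flattening, "rbd info" no longer reports a parent for the clone,
# and the snapshot can be unprotected once nothing depends on it:
rbd info vms/instance-disk
```

Note that a flattened clone no longer benefits from the shared base, so you pay the full per-VM space cost that copy-on-write cloning was avoiding.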
[openstack-dev] [Ceph] Is it necessary to flatten Copy-on-write cloning for RBD-backed disks?
Hi all,

I'm using Ceph as the storage backend for Nova and Glance, and merged the rbd-ephemeral-clone patch into Nova. Since VM disks are copy-on-write clones of an image, I have some concerns:

1. With hundreds of VM disks based on one base image, are there any performance problems from the I/O load being concentrated on that particular base image?

2. Is it possible that the data of the base image gets damaged, or that a PG/OSD containing the base image's data goes out of service, causing all the VMs based on that base image to malfunction?

If so, flattening the copy-on-write clones may help. Is it necessary to do it?
Re: [openstack-dev] [Ceph] Is it necessary to flatten Copy-on-write cloning for RBD-backed disks?
Excerpts from Kun Feng's message of 2015-06-08 20:34:51 -0700:

> Hi all,
>
> I'm using Ceph as the storage backend for Nova and Glance, and merged the
> rbd-ephemeral-clone patch into Nova. Since VM disks are copy-on-write
> clones of an image, I have some concerns:
>
> 1. With hundreds of VM disks based on one base image, are there any
> performance problems from the I/O load being concentrated on that
> particular base image?

Unless you have no RAM available for the VFS cache on the OSDs, this is fine. Blocks are spread evenly across the OSDs, and since these are likely to be very popular blocks, they'll likely all be served from RAM most of the time.

> 2. Is it possible that the data of the base image gets damaged, or that a
> PG/OSD containing the base image's data goes out of service, causing all
> the VMs based on that base image to malfunction?

You'll probably want to read this and see whether it answers your question:

http://ceph.com/docs/master/architecture/#data-consistency
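As a complement to that link: if you do suspect corruption in the placement groups holding the base image, Ceph can verify replicas against each other on demand via deep scrubbing. A hedged sketch — the pool name, object name, and PG id below are placeholders you would substitute from your own cluster:

```shell
# Placeholders: "images" pool, object name and PG id are illustrative.
# Map an object of the base image to its PG and acting OSDs:
ceph osd map images rbd_data.someprefix.0000000000000000

# Deep scrub reads the object data on every replica and compares checksums,
# flagging the PG inconsistent if the copies disagree:
ceph pg deep-scrub 1.2f

# Inconsistent PGs show up in the health output and can be repaired
# from an authoritative copy:
ceph health detail
ceph pg repair 1.2f
```

Regular (automatic) scrubs already do this in the background, so silent corruption of a popular parent image should normally be caught without manual intervention.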