Re: [one-users] Ceph and thin provision
OK, perfect. I think we missed the original format 2 support from Bill's first contribution. Being able to pick either format would be ideal. We can schedule this for 4.6; I've filed a bug to track the progress: http://dev.opennebula.org/issues/2568

Cheers,
Ruben

On Thu, Dec 12, 2013 at 3:56 PM, Campbell, Bill bcampb...@axcess-financial.com wrote:

Yes, Dumpling supports format 2 images (I think Bobtail 0.56 was the first release that did). I'll be submitting my modified driver to the development team for inclusion/modification (ideally we should be able to select which format we want to use, so further modifications will be necessary), and hopefully it will be included in the next version. In the interim, I can share with you the drivers we are using, but be advised these would be UNSUPPORTED by the OpenNebula development/support team. They have been working rather well for us, though.

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

*NOTICE: Protect the information in this message in accordance with the company's security policies. If you received this message in error, immediately notify the sender and destroy all copies.*
Re: [one-users] Ceph and thin provision
In several virtualization systems you can have a virtual disk drive that is either:

- thick, so a thick disk of 100 GB uses 100 GB of space; or
- thin, so a thin disk of 100 GB uses 0 GB when empty and starts using space as the virtual machine fills it.

So I can have a real HDD of 250 GB holding ten virtual thin disks of 1000 GB each, if they are almost empty. I have checked again, and Ceph RBDs are thin.

BTW: thank you for your explanation of persistent/non-persistent; I was not able to find it in the docs. Can you also explain what a volatile disk is? Is a non-persistent image writeable? When you reboot a VM with a non-persistent image, do you lose all data written to it?

Thanks again,
Mario
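Mario's claim that Ceph RBDs are thin can be checked directly. A minimal sketch, assuming a pool named "one" (the image name here is illustrative): `rbd diff` lists only the extents that have actually been allocated, so summing their lengths gives the real space used, independent of the virtual size.

```shell
# Create a 100 GB image; RBD allocates objects lazily, so this is thin
rbd create --size 102400 one/thin-test

# Sum the allocated extents (offset, length, type per line); a freshly
# created, never-written image should report close to 0 MB used despite
# its 100 GB virtual size
rbd diff one/thin-test | awk '{sum += $2} END {print sum/1024/1024 " MB"}'
```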
Re: [one-users] Ceph and thin provision
I haven't tested much with non-persistent images, as I have no use for them except in experiments. Also, I haven't tried any volatile image, sorry.

_Is a non-persistent image writeable?_

Short answer: NO.
Long answer: Yes, sort of. When you instantiate a non-persistent image, nebula temporarily creates another disk in the background. You can check that when you issue "rbd ls -p one". You'll see something like this:

one-34      --- this is the non-persistent image disk
one-34-73-0 --- this is the temporary clone of the disk created when you instantiate a VM
one-34-80-0 --- another VM which uses the non-persistent image one-34

This is why you can instantiate two or more VMs using a non-persistent image. If I'm not mistaken, the temporary disk will be destroyed once you shut down the VM from nebula sunstone. But as long as the VM is running, the data is there. You can even reboot a VM with a non-persistent disk and still have the data. You lose the data once nebula destroys the VM disk, that is, when you SHUTDOWN or DELETE the VM from nebula sunstone.

As for thick and thin provisioning, all of my images in ceph are thick, because my base image is a 25 GB disk from a KVM template that I imported into ceph (it was converted from qcow2 to rbd). It consumes the whole 25 GB on my ceph storage. I just clone that template image every time I deploy a new VM. I haven't tried creating a thin or thick provisioned RBD in ceph from scratch. So basically, I can say that a 100 GB disk will consume a 100 GB RBD in ceph (of course it will be 200 GB in ceph storage, since ceph duplicates the disks by default).

---
Thanks,
Kenneth
Apollo Global Corp.
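The per-VM disks Kenneth lists can be inspected to see how they relate to the base image. A sketch, assuming the default OpenNebula pool "one" and the image names from his listing (which follow the `<source>-<vm_id>-<disk_id>` pattern):

```shell
# List all RBD images in OpenNebula's pool
rbd ls -p one

# Inspect one of the per-VM disks; if it is a format 2 copy-on-write
# clone, "rbd info" reports a "parent:" line pointing at the protected
# base snapshot it was cloned from
rbd info one/one-34-73-0
```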
Re: [one-users] Ceph and thin provision
This doesn't appear to be the case; I've 2 TB of images on Ceph and 380 GB of data reported by Ceph (760 GB after replication). All of these Ceph images were created through the OpenNebula Sunstone template GUI.

-Michael
Re: [one-users] Ceph and thin provision
Yes, that is possible. But as I said, all my images were preallocated, as I haven't created any image from sunstone.

---
Thanks,
Kenneth
Apollo Global Corp.
Re: [one-users] Ceph and thin provision
Ceph's RBD format 2 images support copy-on-write clones/snapshots for quick provisioning, where essentially the following happens:

Snapshot of image created -- snapshot protected from deletion -- clone image created from snapshot

The protected snapshot acts as a base image for the clone, where only the additional data is stored in the clone. See more here: http://ceph.com/docs/master/rbd/rbd-snapshot/#layering

For our environment here I have modified the included datastore/tm drivers for Ceph to take advantage of these format 2 images/layering for non-persistent images. It works rather well, and all image functions work appropriately for non-persistent images (save as, etc.). One note/requirement is to be using a newer Ceph release (I recommend Dumpling or newer) and newer versions of QEMU/libvirt (there were some bugs in older releases, but the versions from the Ubuntu Cloud Archive for 12.04 work fine). I did submit the drivers for improvement prior to the 4.0 release, but the simple format 1 images are currently the default for OpenNebula.

I think this would be a good question for the developers: would creating an option for format 2 images (either in the image template as a parameter or on the datastore as a configuration attribute) and then developing the DS/TM drivers further to accommodate this option be worth the effort? I can see use cases for both (separate images vs. cloned images having to rely on the base image), but cloned images are WAY faster to deploy. I have the basic code for format 2 images; I think the logic for looking up the parameter/attribute and then applying the appropriate action should be rather simple. I could collaborate/share if you'd like.
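The snapshot -> protect -> clone sequence Bill describes maps directly onto three `rbd` commands. A minimal sketch, assuming a pool named "one", a base image "one-34", and a clone named after the VM/disk (all names illustrative):

```shell
# Take a snapshot of the base image, then protect it so it cannot be
# deleted while clones still depend on it
rbd snap create one/one-34@base
rbd snap protect one/one-34@base

# Create a copy-on-write clone; only the blocks the VM writes are stored
# in the clone, so creation is near-instant regardless of image size
rbd clone one/one-34@base one/one-34-73-0
```

Tearing down goes in reverse: remove the clone, unprotect the snapshot, then delete it.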
Re: [one-users] Ceph and thin provision
Kenneth,

The allocation of images consuming the total image size looks to be a bug in Ceph: http://tracker.ceph.com/issues/6257

They've identified it, but it doesn't look like there's been any movement on it since the bug was opened.
Re: [one-users] Ceph and thin provision
This is all good news. And I think this will solve my problem of somewhat slow (a few minutes) VM deployment; that is, cloning is really time-consuming.

Although I really like this RBD format 2, I'm not quite adept yet at how to implement it in nebula. And my ceph version is dumpling 0.67; does it support RBD format 2? If you have any docs, I'd greatly appreciate them. Or rather, I'm willing to wait a little longer, maybe for the next release of nebula(?), for RBD format 2 to become the default format.

---
Thanks,
Kenneth
Apollo Global Corp.
Re: [one-users] Ceph and thin provision
Yes, Dumpling supports format 2 images (I think Bobtail 0.56 was the first release that did). I'll be submitting my modified driver to the development team for inclusion/modification (ideally we should be able to select which format we want to use, so further modifications will be necessary), and hopefully it will be included in the next version.

In the interim, I can share with you the drivers we are using, but be advised these would be UNSUPPORTED by the OpenNebula development/support team. They have been working rather well for us, though.
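Checking and choosing the image format from the CLI is straightforward. A sketch with an illustrative image name; note that on older Ceph releases the flag was spelled `--format 2` before it was renamed to avoid clashing with the output-format option:

```shell
# Create an image explicitly as format 2 (required for layering/cloning)
rbd create --size 10240 --image-format 2 one/test-img

# "rbd info" reports the image format; cloning requires "format: 2"
rbd info one/test-img
```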
[one-users] Ceph and thin provision
Hello,

I am using ceph with opennebula. I have created a 100 GB disk image and I do not understand whether it is thin or thick. I hope I can have thin provisioning.

Thanks,
Mario
Re: [one-users] Ceph and thin provision
Hi,

Can you elaborate on what you want to achieve? If you have a 100 GB image and it is set to persistent, you can instantiate that image immediately and deploy/live migrate it to any nebula node. Only one running VM instance of this image is allowed. If it is a 100 GB non-persistent image, you'll have to wait for ceph to create a copy of it once you deploy it. But you can use this image multiple times simultaneously.

---
Thanks,
Kenneth
Apollo Global Corp.