Kenneth, 
Images being allocated at their full image size looks to be a bug in 
Ceph: 

http://tracker.ceph.com/issues/6257 

They've identified it, but it doesn't look like there's been any movement since 
the bug was opened. 

----- Original Message -----

From: "Kenneth" <[email protected]> 
To: "Mario Giammarco" <[email protected]> 
Cc: [email protected] 
Sent: Thursday, December 12, 2013 4:11:17 AM 
Subject: Re: [one-users] Ceph and thin provision 



I haven't tested non-persistent images much, as I have no use for them except 
in experiments. Also, I haven't tried any volatile images, sorry. 

Is a non-persistent image writeable? 

Short answer: NO 

Long answer: Yes, sort of. When you instantiate a non-persistent image, nebula 
temporarily creates another disk in the background. You can check this when 
you issue "rbd ls -p one". You'll see something like this: 

one-34 -------> this is the non-persistent image disk 
one-34-73-0 --------> this is the "temporary clone" of the disk, created when 
you instantiate a VM 
one-34-80-0 ---------> another VM which uses the non-persistent image one-34 

This is why you can instantiate two or more VMs using a non-persistent image. 
If I'm not mistaken, the temporary disk will be destroyed once you shut down 
the VM from nebula Sunstone. But as long as the VM is running, the data is 
there. You can even reboot a VM with a non-persistent disk and still have the 
data. You lose the data once nebula destroys the VM disk, that is, when you 
SHUTDOWN or DELETE the VM from nebula Sunstone. 

As for thick and thin provisioning, all of my images in Ceph are thick, because 
my base image is a 25 GB disk from a KVM template that I imported into Ceph (it 
was converted from qcow2 to rbd). It consumes the whole 25 GB on my Ceph 
storage. I just clone that "template image" every time I deploy a new VM. 
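For reference, a hedged sketch of that qcow2-to-RBD conversion, assuming qemu-img was built with rbd support; the pool and file names here are illustrative, not the exact ones I used:

```shell
# Convert a qcow2 template and write it directly into the Ceph pool "one"
# (illustrative names; requires qemu-img with rbd support):
#   qemu-img convert -f qcow2 -O raw template.qcow2 rbd:one/template-image
```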

I haven't tried creating a thin or thick provisioned image in Ceph RBD from 
scratch. So basically, I can say that a 100GB disk will consume 100GB of RBD in 
Ceph (of course it will be 200GB in Ceph storage, since Ceph replicates disks 
twice by default). 
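One way to check how much space an image really consumes: `rbd diff` lists an image's allocated extents, and summing the length column gives the actual usage, which shows whether an image is effectively thin or thick. The pool and image names below come from the listing earlier in this thread; since the command needs a live cluster, the awk step is demonstrated on sample output:

```shell
# Actual usage of an image, run against a live cluster:
#   rbd diff -p one one-34 | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'
#
# The awk summing step alone, fed sample extent output (offset, length, type):
printf '0 4194304 data\n8388608 4194304 data\n' |
  awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'
# prints "8 MB"
```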
--- 
Thanks,
Kenneth 
Apollo Global Corp. 


On 12/12/2013 04:52 PM, Mario Giammarco wrote: 


In several virtualization systems you can have a virtual disk drive that is: 

-thick, so a thick disk of 100 GB uses 100 GB of space; 
-thin, so a thin disk of 100 GB uses 0 GB when empty and starts using space 
when the virtual machine fills it. 

So I can have a real HDD of 250 GB containing ten virtual thin disks of 
1000 GB each, if they are almost empty. 
I have checked again, and Ceph RBD images are "thin". 
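The thin-provisioning idea can be demonstrated in miniature with a sparse file on an ordinary filesystem; this is nothing Ceph-specific, just the same allocate-on-write behavior:

```shell
# Create a "100M disk" without allocating any blocks (a sparse file):
truncate -s 100M disk.img
ls -lh disk.img   # apparent size: 100M
du -m  disk.img   # actual usage: ~0M so far

# Write 10M of data; only now is real space consumed:
dd if=/dev/urandom of=disk.img bs=1M count=10 conv=notrunc status=none
du -m  disk.img   # actual usage: ~10M
rm disk.img
```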

BTW: thank you for your explanation of persistent/non-persistent; I was not 
able to find it in the docs. Can you also explain what a "volatile disk" is? 
Is a non-persistent image writeable? 
When you reboot a VM with a non-persistent image, do you lose all data written 
to it? 

Thanks again, 
Mario 


2013/12/12 Kenneth <[email protected]> 




Hi, 

Can you elaborate more on what you want to achieve? 

If you have a 100GB image and it is set to persistent, you can instantiate that 
image immediately and deploy/live-migrate it to any nebula node. Only one 
running VM instance of this image is allowed. 

If it is a 100GB non-persistent image, you'll have to wait for Ceph to "create 
a copy" of it once you deploy it. But you can use this image multiple times 
simultaneously. 
--- 
Thanks,
Kenneth 
Apollo Global Corp. 


On 12/11/2013 07:28 PM, Mario Giammarco wrote: 


Hello, 
I am using ceph with opennebula. 
I have created a 100 GB disk image and I do not understand whether it is thin 
or thick. 

I hope I can have thin provision. 

Thanks, 
Mario 
_______________________________________________
Users mailing list 
[email protected] 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







NOTICE: Protect the information in this message in accordance with the 
company's security policies. If you received this message in error, immediately 
notify the sender and destroy all copies.
