Depending on which hypervisor he's using, it may not be possible to mount the 
RBDs natively.

For instance, the elephant in the room... ESXi.

I've pondered several architectures for presenting Ceph to ESXi that may be 
relevant to this thread.

1) Large RBDs (2 TB minus 512 B), re-presented through an iSCSI gateway 
(hopefully an HA pair), with VMFS and VMDKs on top. A rough sketch of this 
follows the list.
        * Seems to have been done a couple of times already; I'm not sure how 
successful it was.
        * Only a small number of RBDs is required, so provisioning is an 
infrequent task and the dev time spent automating it can hopefully be kept low.

2) Large CephFS volumes (20+ TB), re-presented through NFS gateways, with 
VMDKs on top. A rough sketch of this also follows the list.
        * Fewer abstraction layers, so hopefully better pass-through of commands.
                * Any improvements to CephFS should become available to VMware 
(de-dupe, for instance).
        * Easy to manage from a VMware perspective; NFS datastores and large 
volumes are commonly deployed.
        * No multi-MDS support means this is not viable... yet.

3) Small RBDs (tens to hundreds of GB), re-presented through an iSCSI gateway 
and attached directly to VMs as RDMs.
        * Possibly a more natural fit for Ceph (lots of small RBDs).
        * Harder to manage; more automation will be required for provisioning.
        * Cloning of templates etc. may be harder.
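
For option 1, here is a rough sketch of what re-presenting a large RBD over 
iSCSI with the stock LIO/targetcli stack might look like on a gateway host. 
The pool, image, IQN and portal address are invented for illustration, and the 
HA pairing and initiator ACLs are left out entirely:

# Create and map a large RBD on the gateway host (rbd sizes are in MB):
rbd create vmware/esx-lun01 --size 2097152
rbd map vmware/esx-lun01                      # appears as e.g. /dev/rbd0

# Export the mapped device over iSCSI with LIO/targetcli:
targetcli /backstores/block create name=esx-lun01 dev=/dev/rbd0
targetcli /iscsi create iqn.2013-07.com.example:esx-lun01
targetcli /iscsi/iqn.2013-07.com.example:esx-lun01/tpg1/luns create /backstores/block/esx-lun01
targetcli /iscsi/iqn.2013-07.com.example:esx-lun01/tpg1/portals create 192.168.0.10
targetcli saveconfig
# ESXi then sees an ordinary iSCSI LUN and can lay VMFS + VMDKs on it.

For option 2, the gateway side is just CephFS mounted and exported over NFS; 
hostnames, paths and the export network below are again invented:

# Mount CephFS on the gateway and export it as an NFS datastore for ESXi:
mount -t ceph mon1:6789:/ /export/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
echo "/export/cephfs 10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra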

Just my 2c anyway....

Douglas Youd
Cloud Solution Architect
ZettaGrid



-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of McNamara, Bradley
Sent: Friday, 12 July 2013 8:19 AM
To: Alex Bligh; Gilles Mocellin
Cc: [email protected]
Subject: Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

Correct me if I'm wrong (I'm new to this), but I think the distinction between 
the two methods is that 'qemu-img create -f rbd' creates an RBD for a VM to 
boot from, or to mount within a VM, whereas the OP wants a single RBD, 
formatted with a cluster file system, as a place for multiple VM image files 
to reside.

I've often contemplated this same scenario, and would be quite interested in 
the different ways people have implemented their VM infrastructure using RBD.  
I guess one of the advantages of using 'qemu-img create -f rbd' is that a 
snapshot of a single RBD captures just the changed data for that one VM, 
whereas a snapshot of a larger RBD with OCFS2 and multiple VM images on it 
would capture changes to all of the VMs, not just one.  The former might 
provide more administrative agility.
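
For what it's worth, the per-VM case really is a one-liner against that VM's 
image; the pool and image names here are made up:

# Snapshot a single VM's image (illustrative names):
rbd snap create data/vm01-disk@before-upgrade
rbd snap ls data/vm01-disk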

Also, I guess another question would be: when an RBD is expanded, does the VM 
whose disk was created using 'qemu-img create -f rbd' need to be rebooted to 
"see" the additional space?  My guess would be yes.
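
For reference, growing the image on the Ceph side is simple; whether the guest 
then sees the extra space without a reboot depends on the qemu and virtio 
driver versions in use (newer qemu has a block_resize monitor command that can 
help). The image name and size here are illustrative:

# Grow the RBD to 20 GB (rbd sizes are given in MB):
rbd resize --size 20480 data/squeeze
# The guest's block layer still has to notice the new size; with older qemu a
# reboot (or detach/re-attach of the disk) may be the only reliable way.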

Brad

-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of Alex Bligh
Sent: Thursday, July 11, 2013 2:03 PM
To: Gilles Mocellin
Cc: [email protected]
Subject: Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?


On 11 Jul 2013, at 19:25, Gilles Mocellin wrote:

> Hello,
>
> Yes, you missed that qemu can use a RADOS volume directly.
> Look here:
> http://ceph.com/docs/master/rbd/qemu-rbd/
>
> Create:
> qemu-img create -f rbd rbd:data/squeeze 10G
>
> Use:
>
> qemu -m 1024 -drive format=raw,file=rbd:data/squeeze

I don't think he did. As I read it, he wants his VMs to all access the same 
file system, and doesn't want to use CephFS.

OCFS2 on RBD I suppose is a reasonable choice for that.
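
A minimal sketch of that approach, assuming the kernel rbd client and an 
already-configured o2cb cluster stack on every node (the pool, image and mount 
point are made up):

# On each hypervisor node, map the shared RBD with the kernel client:
rbd map data/shared-vmstore              # appears as e.g. /dev/rbd0

# Format it once, from a single node (4 node slots; match your cluster size):
mkfs.ocfs2 -N 4 -L vmstore /dev/rbd0

# Mount it on every node; the VM image files then live on the shared mount:
mount -t ocfs2 /dev/rbd0 /var/lib/vms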

--
Alex Bligh





_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
