Jon,
I think I understand what you're trying to do, but it doesn't quite work that
way. Let me try to explain (and please let me know if I don't explain it well
enough ;-))
I don't think you can use Ceph directly as a system datastore. For migrations,
the Ceph datastore driver leverages whatever transfer method you have
configured for the system datastore. For example, if you use the 'shared'
system datastore, it will use that transfer manager's pre- and post-migration
drivers; for 'ssh', the ssh drivers, and so on. The Ceph datastore is
implemented on top of Ceph block devices (RBD), so unfortunately there is no
way to use it as a simple shared volume.
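To make this concrete: the transfer manager drivers live under
/var/lib/one/remotes/tm/<TM_MAD>/ on the front-end, one script per operation.
On my 4.x install the listings look roughly like this (check your own tree,
the exact files vary by version):

    $ ls /var/lib/one/remotes/tm/shared
    clone  context  delete  ln  mkimage  mkswap  mv  mvds  postmigrate  premigrate
    $ ls /var/lib/one/remotes/tm/ceph
    clone  delete  ln  mv  mvds

The 'shared' TM ships premigrate, postmigrate, and mkswap scripts; the 'ceph'
TM doesn't, which is exactly why your VM creation failed looking for
tm/ceph/mkswap.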
There are two potential solutions for getting live migrations working for your
Ceph datastore VMs:
* Create a shared NFS volume (or another shareable filesystem such as GFS2 or
OCFS2, though those are much more complicated to configure and usually not
worth the hassle) and mount it at the same location on each hypervisor node.
In a previous test deployment, we just exported the /var/lib/one/vms directory
to the hypervisors. At that point, all of the hypervisors can see the
deployment files in the same location, and you should be able to perform a
migration. (A minimal NFS sketch follows after this list.)
* Use ssh as the transfer manager for your system datastore, and modify the
pre- and post-migrate scripts to copy the deployment files from the current VM
host to the target host. This is the method we currently use in our
deployment: it is one less piece of configuration to maintain on each node,
and it makes expanding our cluster much quicker and easier. (A sketch of the
scripts also follows below; I can share the real ones we use if you like.)
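For the first option, here is a minimal sketch using the system datastore path
from your onedatastore output; the subnet and the 'frontend' hostname are
placeholders to swap for your own:

    # On the front-end, /etc/exports:
    /var/lib/one/datastores/0  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

    # On each hypervisor node, /etc/fstab:
    frontend:/var/lib/one/datastores/0  /var/lib/one/datastores/0  nfs  defaults  0  0

After 'exportfs -ra' on the front-end and 'mount -a' on the nodes, every
hypervisor sees the deployment files at the same path, and migration should
just work.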
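For the second option, you first point the system datastore at the ssh TM
('onedatastore update 0' opens the template in an editor):

    TM_MAD="ssh"
    TYPE="SYSTEM_DS"

Then the migrate scripts only need to move the VM directory between hosts.
This is a stripped-down, untested sketch of what ours do; OpenNebula invokes
these scripts with SRC_HOST DST_HOST REMOTE_SYSTEM_DIR VMID DSID TEMPLATE,
but verify the argument order against your version's tm driver docs:

    #!/bin/bash
    # /var/lib/one/remotes/tm/ssh/premigrate (sketch)
    SRC=$1     # host the VM is currently running on
    DST=$2     # host the VM is migrating to
    VM_DIR=$3  # VM directory in the system DS, e.g. /var/lib/one/datastores/0/42

    # Recreate the directory on the target host, then stream the deployment
    # files over; both hops run as oneadmin from the front-end.
    ssh "$DST" "mkdir -p '$VM_DIR'"
    ssh "$SRC" "tar -C '$VM_DIR' -cf - ." | ssh "$DST" "tar -C '$VM_DIR' -xf -"

    #!/bin/bash
    # /var/lib/one/remotes/tm/ssh/postmigrate (sketch)
    SRC=$1
    VM_DIR=$3

    # Clean up the source host's copy once the migration has finished.
    ssh "$SRC" "rm -rf '$VM_DIR'"

Again, happy to send the real versions if they would help.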
Let me know if the above makes sense, and of course if you need any additional
help, please don't hesitate to bug me. I'm very familiar with the Ceph drivers
;-)
----- Original Message -----
From: "Jon" <[email protected]>
To: "Users OpenNebula" <[email protected]>
Sent: Tuesday, July 9, 2013 8:05:51 PM
Subject: [one-users] How to use Ceph/RBD for System Datastore
Hello All,
I am using Ceph as my storage back end and would like to know how to configure
the system datastore so that I can live-migrate VMs.
Following the directions, I thought I could create a datastore, format it, and
mount it at /var/lib/one/datastores/0; however, I discovered that isn't quite
how things work.
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-May/001913.html
You can read more about that at the above link, but long story short: for
multiple hosts to mount the same volume, it has to be a "clustered" filesystem
(I think CephFS is the "clustered filesystem" in this case).
I attempted to modify my system datastore config; however, I was unable to
change the DS_MAD parameter, and VM creation errors out telling me there is no
/var/lib/one/remotes/tm/ceph/mkswap driver (there isn't one).
>> oneadmin@red6:~$ onedatastore show 0
>> DATASTORE 0 INFORMATION
>> ID : 0
>> NAME : system
>> USER : oneadmin
>> GROUP : oneadmin
>> CLUSTER : -
>> TYPE : SYSTEM
>> DS_MAD : -
>> TM_MAD : ceph
>> BASE PATH : /var/lib/one/datastores/0
>> DISK_TYPE : FILE
>>
>> PERMISSIONS
>> OWNER : um-
>> GROUP : u--
>> OTHER : ---
>>
>> DATASTORE TEMPLATE
>> DISK_TYPE="rbd"
>> DS_MAD="-"
>> TM_MAD="ceph"
>> TYPE="SYSTEM_DS"
>>
>> IMAGES
Maybe I'm just confused. Can anyone provide some guidance on setting Ceph up
as the system datastore?
Thanks,
Jon A
_______________________________________________
Users mailing list
[email protected]
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org