What I've found to be the nicest way of handling this is to add all the mons to your ceph.conf file. The QEMU client will use these if you don't define any in the libvirt config.
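For example, something like this in /etc/ceph/ceph.conf on each hypervisor (the monitor addresses below are placeholders; substitute your own):

```ini
[global]
# List every monitor here; the QEMU RBD client reads this file
# automatically when no <host> elements appear in the domain XML.
mon_host = 192.168.0.1:6789, 192.168.0.2:6789, 192.168.0.3:6789
```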

Similarly, define a libvirt 'secret' and you can use that for auth, so you only have one place to change it. My entire libvirt config (for attaching an iso) looks like:

<disk type='network' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <auth username='my-cephx-username'>
    <secret type='ceph' uuid='4cca0052-a45e-4727-a926-829b135f1e19'/>
  </auth>
  <source protocol='rbd' name='some-pool/some-rbd'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
  <address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
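The secret itself only has to be registered once per hypervisor. Roughly, assuming the same UUID and cephx user as in the example above (adjust both to your environment):

```xml
<!-- secret.xml: register with `virsh secret-define secret.xml`, then set the
     key with:
       virsh secret-set-value --secret 4cca0052-a45e-4727-a926-829b135f1e19 \
         --base64 "$(ceph auth get-key client.my-cephx-username)" -->
<secret ephemeral='no' private='no'>
  <uuid>4cca0052-a45e-4727-a926-829b135f1e19</uuid>
  <usage type='ceph'>
    <name>client.my-cephx-username secret</name>
  </usage>
</secret>
```

After that, rotating the key is a single `virsh secret-set-value` on each hypervisor rather than a change to every VM definition.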


On 10/16/2014 9:21 AM, Dan Geist wrote:
Thanks Dan (Doctor, doctor...)

Correct. I'd like to abstract the details of the rbd storage from the VM 
definitions as much as possible (like not having the monitor IPs/ports 
defined). I plan on experimenting with monitors and so forth on ceph and would 
like to not have to touch every single VM when changes are made. Small mods to 
the storage pool on each hypervisor are not so bad...

In your example, do you still need the "host" definitions in both the disk (per-VM) 
and source (per-pool) stanzas? Also, do you not use cephx for authentication? I'd love to 
have that defined in the pool as well if possible, allowing per-hypervisor authentication 
instead of per-host (not necessarily for security, but for less complex manageability).

Dan


----- Original Message -----
From: "Dan Ryder (daryder)" <[email protected]>
To: "Dan Geist" <[email protected]>
Cc: [email protected]
Sent: Thursday, October 16, 2014 8:41:50 AM
Subject: RE: Ceph storage pool definition with KVM/libvirt

Hi Dan,



Maybe I misunderstand, but I think you are trying to add your Ceph RBD pool into 
libvirt as a storage pool?



If so, it's relatively straightforward - here's an example from my setup:



<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='volumes/volume-f3bcec3d-7daf-4eff-818e-0d8848c120d5'>
    <host name='xxx.18.116.67' port='6789'/>
    <host name='xxx.18.116.177' port='6789'/>
    <host name='xxx.18.116.178' port='6789'/>
  </source>
</disk>



Related libvirt storage pool definition is:

<pool type="rbd">
  <name>LibvirtStoragePoolName</name>
  <source>
    <name>volumes</name>
    <host name='xxx.18.116.177' port='6789'/>
    <host name='xxx.18.116.178' port='6789'/>
    <host name='xxx.18.116.67' port='6789'/>
  </source>
</pool>
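Once the pool XML is saved to a file (say rbd-pool.xml, the filename is arbitrary), registering and starting it is just a few virsh commands, roughly:

```shell
virsh pool-define rbd-pool.xml           # register the pool with libvirt
virsh pool-start LibvirtStoragePoolName
virsh pool-autostart LibvirtStoragePoolName
virsh vol-list LibvirtStoragePoolName    # lists the RBD images in 'volumes'
```

Note these need a running libvirtd on the hypervisor, and if your cluster has cephx enabled the pool definition will also need an <auth> element referencing a libvirt secret.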





Hope this helps,



Dan Ryder



-----Original Message-----
From: ceph-users [mailto:[email protected]] On Behalf Of Dan Geist
Sent: Wednesday, October 15, 2014 4:37 PM
To: [email protected]
Subject: [ceph-users] Ceph storage pool definition with KVM/libvirt



I'm leveraging Ceph in a vm prototyping environment currently and am having 
issues abstracting my VM definitions from the storage pool (to use a libvirt 
convention).



I'm able to use the rbd support within the disk configuration of individual VMs 
but am struggling to find a good reference for abstracting it to a storage 
pool. How do I pull the source definition from below to the pool definition?





<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='447aba2d-3507-4c1f-90c8-e60ea5ac92fb'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/Ubuntu_sample_host_disk'>
    <host name='xxx.175.240.174' port='6789'/>
    <host name='xxx.175.240.176' port='6789'/>
    <host name='xxx.175.240.178' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>





Thanks.

Dan



--

Dan Geist dan(@)polter.net

_______________________________________________

ceph-users mailing list

[email protected]

http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
