On Thu, Aug 22, 2013 at 03:32:35PM +0200, raj kumar wrote:
>    ceph cluster is running fine in centos6.4.
>
>    Now I would like to export the block device to client using rbd.
>
>    my question is,
>
>    1. I used to modprobe rbd in one of the monitor host. But I got error,
>
>       FATAL: Module rbd not found
>
>       I could not find rbd module. How can i do this?
>

# cat /etc/centos-release
CentOS release 6.4 (Final)

# updatedb
# locate rbd.ko
/lib/modules/3.8.13/kernel/drivers/block/rbd.ko

# locate virtio_blk.ko
/lib/modules/2.6.32-358.14.1.el6.x86_64/kernel/drivers/block/virtio_blk.ko
/lib/modules/2.6.32-358.el6.x86_64/kernel/drivers/block/virtio_blk.ko
/lib/modules/3.8.13/kernel/drivers/block/virtio_blk.ko

Well, the standard CentOS 6.4 kernel does not include 'rbd.ko'.
For some reason the 'Enterprise distros' (RHEL, SLES) disable the Ceph kernel
components by default, although CephFS (= ceph.ko) has been in the upstream
kernel since 2.6.34, and the block device (= rbd.ko) since 2.6.37.

We built our own kernel 3.8.13 (a good mixture of recent & mature) and put it
into CentOS 6.4.
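Once a kernel that ships rbd.ko is running, loading the module and mapping an
image looks roughly like this (a sketch only; the 'rbd' pool and 'myimage'
image name are placeholders, and the commands need a reachable cluster and a
valid /etc/ceph/ceph.conf on the client):

```shell
# Confirm the module exists for the running kernel, then load it
modinfo rbd
modprobe rbd

# Create a 10 GB image and map it (pool/image names are examples)
rbd create rbd/myimage --size 10240
rbd map rbd/myimage

# The mapped block device shows up under /dev/rbd*
rbd showmapped
```

After that the device can be partitioned, formatted, and mounted like any
other local block device.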

>    2. Once the rbd is created. Do we need to create iscsi target in one of a
>    monitor host and present the lun to client. If so what if the monitor host
>    goes down. so what is the best practice to provide a lun to clients.
>
>    thanks
>
This depends on your client.
Using
  "RADOS - Block-Layer - RBD-Driver - iSCSI-TGT // iSCSI-INI - Client"
adds a needless layer of stack overhead.
If the client is kvm-qemu you can use
  "RADOS // librbd - kvm-qemu"
or
  "RADOS // Block-Layer - RBD-Driver - Client"

The "//" symbolizes the border between server nodes and client nodes.
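For the librbd path, qemu talks to RADOS directly, with no kernel module and
no iSCSI in between. A minimal sketch (pool/image names and the 'admin' cephx
user are placeholders):

```shell
# Import an existing guest disk into RBD (names are examples)
qemu-img convert -O raw guest.img rbd:rbd/myimage

# Boot the guest straight off the RBD image via librbd
qemu-kvm -m 1024 \
  -drive file=rbd:rbd/myimage:id=admin,format=raw,if=virtio
```

Because every qemu host speaks to the whole cluster through librbd, there is
no single gateway box to fail over, unlike the iSCSI-target approach.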

-Dieter

>    Raj                                                                        
>   

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
