Hi,

I've seen and read a few things about ceph-crush-location and I think that's 
what I need.
What I need (want to try) is: a way to have SSDs in non-dedicated hosts, but 
also to put those SSDs in a dedicated ceph root.

From what I read, using ceph-crush-location, I could add a hostname with an SSD 
suffix in case the tool is called against an SSD... the thing is: I must make sure 
this really is an SSD, and this is where the coding and experimenting come in.

Hence, I'd like to know if someone has an already-working implementation 
that detects whether the OSD is an SSD and, if so, appends a string to the 
hostname?
I'm also wondering when this tool is called: is the OSD already 
mounted (or should it have been), and what happens at boot?

I know I can get the OSD mount point using something like this on a running OSD:
# ceph --format xml --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_data | sed -e 's|./osd_data.*||;s|.*osd_data.||'
/var/lib/ceph/osd/ceph-0

I know I can find out whether a device is a spinning disk or an SSD using, for instance, this:
[root@ceph0 ~]# cat /sys/block/sdy/queue/rotational
0
[root@ceph0 ~]# cat /sys/block/sda/queue/rotational
1

So I just have to associate the mount point with the device... provided the OSD is 
mounted when the tool is called.
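Putting those pieces together, here is a rough, untested sketch of what such a hook might look like. The `findmnt`/`lsblk` calls for resolving the backing device and the "-ssd" suffix convention are my own assumptions, not anything ceph-crush-location mandates:

```shell
#!/bin/sh
# Sketch of an SSD-aware crush-location hook (assumptions noted above).

# Append "-ssd" to a hostname when the rotational flag is "0".
ssd_hostname() {
    host=$1
    rotational=$2
    if [ "$rotational" = "0" ]; then
        echo "${host}-ssd"
    else
        echo "$host"
    fi
}

# In the real hook, the inputs would be derived roughly like this
# (left as comments since they depend on the running system):
#   osd_data=/var/lib/ceph/osd/ceph-0                      # from the admin socket, as above
#   dev=$(findmnt -n -o SOURCE --target "$osd_data")       # e.g. /dev/sdy1
#   disk=$(lsblk -n -o PKNAME "$dev")                      # parent disk, e.g. sdy
#   rot=$(cat "/sys/block/$disk/queue/rotational")
#   echo "host=$(ssd_hostname "$(hostname -s)" "$rot") root=default"
```

This obviously glosses over the boot-ordering question (whether the OSD is mounted when the hook runs), which is exactly what I'd like to hear about.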
Is anyone willing to share their experience with ceph-crush-location?

Thanks

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
