I've tried using ceph-deploy, but it wants to assign the same ID to each OSD, 
and I end up with a bunch of "prepared" ceph-disks and only one "active". If I 
use the manual "short form" method, the activate step fails and there are no 
XFS mount points on the ceph-disks. If I use the manual "long form", I seem to 
get closest to having active ceph-disks/OSDs, but the monitor always shows the 
OSDs as "down/in", and the ceph-disks don't persist across a reboot.

Does anyone know of a document that explains, step by step, how to bring up 
multiple OSDs per host, with one HDD plus one SSD journal partition per OSD?
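For reference, here is a rough sketch of the per-OSD prepare/activate sequence 
I've been attempting. The device names are just placeholders (/dev/sdb as one 
HDD, /dev/sda1 as its SSD journal partition), so please read it as an 
illustration of the approach rather than my exact commands:

```shell
# Prepare one OSD: whole HDD as data device, pre-made SSD partition as journal.
# ceph-disk prepare accepts [DATA] [JOURNAL] as positional arguments.
ceph-disk prepare --cluster ceph /dev/sdb /dev/sda1

# Activate the data partition that prepare created on the HDD.
ceph-disk activate /dev/sdb1
```

My understanding is that this would be repeated once per HDD/journal-partition 
pair, but as described above, the OSDs end up "down/in" and nothing survives a 
reboot, so I'm clearly missing a step somewhere.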
Thanks,
Bruce
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
