Hi all,
I set the crushmap by generating a crushmap file, compiling it with
crushtool, and installing it with 'ceph osd setcrushmap'.
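For reference, this is roughly the workflow I used (file names are
just examples):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt to add the -sas/-ssd buckets
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin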
For testing, I grouped the disks by type and use each group for
different pools. So I have 'root=default-sas' and 'root=default-ssd',
and host cephxxx-sas and cephxxx-ssd entries in my crushmap. When I
stop some OSDs, the crushmap still looks the same, and 'ceph osd tree'
still shows the OSDs in the right host bucket.
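To give an idea of the layout, a simplified excerpt of my crushmap
(names, ids and weights are just examples):

    host ceph002-sas {
            id -2
            alg straw
            hash 0
            item osd.38 weight 0.170
    }
    root default-sas {
            id -10
            alg straw
            hash 0
            item ceph002-sas weight 0.170
    }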
But when I try to restart an OSD, it fails, because by default it
tries to add itself to the default host bucket cephxxx and to
root=default (which does not exist):
failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.38
--keyring=/var/lib/ceph/osd/ceph-38/keyring osd crush create-or-move
-- 38 0.17 host=ceph002 root=default'
Is it possible to change this behaviour so that the info already in
the crushmap is used when nothing is set in the ceph.conf file? (Or is
this by design?)
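(If I understand the docs correctly, the alternative would be pinning
the location of each OSD in ceph.conf with the 'osd crush location'
option, something like:

    [osd.38]
            osd crush location = "root=default-sas host=ceph002-sas"

but I would rather not have to maintain that for every OSD.)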
Thanks!
Kenneth