Hi,

> I’m having a strange issue with OSD bucket persistence / affinity on my
> test cluster.
> 
> The cluster is a PoC / test setup, by no means production. It consists of a
> single OSD / MON host plus another MON running on a KVM VM.
> 
> Out of 12 OSDs, I’m trying to get osd.10 and osd.11 to be part of the ssd
> bucket in my CRUSH map. This works fine either when editing the CRUSH map by
> hand (export, decompile, edit, compile, import) or via the ceph osd crush
> set command:
> 
> "ceph osd crush set osd.11 0.140 root=ssd"
> 
> I’m able to verify that the OSD / MON host and the other MON I have running
> see the same CRUSH map.
> 
> After rebooting the OSD / MON host, both osd.10 and osd.11 become part of
> the default bucket. How can I ensure that OSDs persist in their configured
> buckets?

I guess you have "osd crush update on start = true" set (the default; see 
http://ceph.com/docs/master/rados/operations/crush-map/ ) and only the default 
"root" entry for the OSDs' CRUSH location, so on start they get moved back 
under root=default.

Either fix the "root" entry in ceph.conf (via "osd crush location"), or set 
"osd crush update on start = false".
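
For example, a minimal sketch of both options in ceph.conf (the per-OSD 
sections and the location value are assumptions based on your osd.10 / osd.11 
setup; adjust to your cluster):

    [osd.10]
    # place this OSD under the ssd root on every start
    osd crush location = root=ssd

    [osd.11]
    osd crush location = root=ssd

or, to stop the OSDs from updating their CRUSH location on start at all:

    [osd]
    osd crush update on start = false

With the first option the OSDs put themselves under root=ssd whenever they 
start; with the second, whatever you set with "ceph osd crush set" stays put 
across reboots. You can check the result with "ceph osd tree" after a restart.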

greetings

Johannes
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com