Took a little walk and figured it out.
I just added a dummy osd.20 with weight 0.000 to my CRUSH map and set it.
That alone was enough for the cluster to report only this osd.20 as
orphaned - the other three disappeared from the warning.
Then I just did
$ ceph osd crush remove osd.20
and now my cluster has no orphaned OSDs.
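
For anyone who hits the same thing, the full cycle was roughly as follows
(a sketch from memory - the exact device id, bucket and weight lines will
of course differ in your map):

$ ceph osd getcrushmap -o crm
$ crushtool -d crm -o crm.d
# edit crm.d: re-declare the missing device, e.g.
#   device 20 osd.20
# and reference it with zero weight inside a bucket, e.g.
#   item osd.20 weight 0.000
$ crushtool -c crm.d -o crm.new
$ ceph osd setcrushmap -i crm.new
$ ceph osd crush remove osd.20
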
Case closed.

2017-12-19 10:39 GMT+03:00 Vladimir Prokofev <[email protected]>:

> Hello.
>
> After some furious "ceph-deploy osd prepare/osd zap" cycles to figure out
> the correct ceph-deploy command to create a bluestore OSD on an HDD with
> its wal/db on an SSD, I now have orphaned OSDs, which are nowhere to be
> found in the CRUSH map!
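>
> (For context, the kind of invocation I was after was along these lines -
> if I recall the flags right; hostname and device paths are placeholders:
>
> $ ceph-deploy osd prepare --bluestore --block-db /dev/sdk \
>       --block-wal /dev/sdk node1:/dev/sdb
>
> - but that's beside the point here.)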
>
> $ ceph health detail
> HEALTH_WARN 4 osds exist in the crush map but not in the osdmap
> ....
> OSD_ORPHAN 4 osds exist in the crush map but not in the osdmap
>     osd.20 exists in crush map but not in osdmap
>     osd.30 exists in crush map but not in osdmap
>     osd.31 exists in crush map but not in osdmap
>     osd.32 exists in crush map but not in osdmap
>
> $ ceph osd crush remove osd.30
> device 'osd.30' does not appear in the crush map
> $ ceph osd crush remove 30
> device '30' does not appear in the crush map
>
> If I fetch and decompile the CRUSH map with
> $ ceph osd getcrushmap -o crm
> $ crushtool -d crm -o crm.d
> I don't see any mention of those OSDs there either.
>
> I don't see this affecting my cluster in any way (yet), so for now this
> is a cosmetic issue.
> But I'm worried it may somehow affect it in the future (not too worried,
> as I don't really see that happening), and what's worse, that the cluster
> will not return to a "healthy" state after it completes remapping/fixing
> the degraded PGs.
>
> Any ideas how to fix this?
>