Re: [ceph-users] Orphaned entries in Crush map

2018-02-16 Thread David Turner
First you stop the service, then make sure they're down, out, crush remove, auth del, and finally osd rm. You had it almost in the right order, but you were marking them down and out before you stopped them. That would allow them to mark themselves back up and in. The down and out commands don't
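For reference, the order described above maps roughly onto the following commands; osd.19 is just an example id here, and the systemd unit name may differ depending on how the OSDs were deployed:

  # on the OSD host: stop the daemon first, so it cannot mark itself back up/in
  systemctl stop ceph-osd@19
  # then from an admin/monitor node:
  ceph osd down 19               # mark it down (if not already)
  ceph osd out 19                # mark it out so data is rebalanced away
  ceph osd crush remove osd.19   # remove it from the CRUSH map
  ceph auth del osd.19           # delete its cephx key
  ceph osd rm 19                 # finally remove the OSD id from the cluster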

Re: [ceph-users] Orphaned entries in Crush map

2018-02-16 Thread Karsten Becker
Here is what I did - bash history:

> 1897  for n in 6 7 14 15 16 17 18 19 3 9 10 11 12 20; do ceph osd down osd.$n; done
> 1920  for n in 6 7 14 15 16 17 18 19 3 9 10 11 12 20; do ceph osd out osd.$n; done
> 1921  for n in 6 7 14 15 16 17 18 19 3 9 10 11 12 20; do ceph osd down osd.$n; done

Re: [ceph-users] Orphaned entries in Crush map

2018-02-16 Thread Karsten Becker
Hi David. So far everything else is fine.

> 46 osds: 46 up, 46 in; 1344 remapped pgs

And the rm gives:

> root@kong[/0]:~ # ceph osd rm 19
> osd.19 does not exist.
> root@kong[/0]:~ # ceph osd rm 20
> osd.20 does not exist.

The "devices" do NOT show up in "ceph osd tree" or "ceph osd df"...
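If the ids are already gone from the osdmap but still survive as device lines in the CRUSH map, one way to clear them (a sketch, not something confirmed in this thread) is to edit the decompiled map and inject it back:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # delete the stale "device 19 osd.19 class hdd" / "device 20 osd.20 class hdd" lines, then:
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin

Since injecting a CRUSH map can trigger data movement if anything else changed, it is worth diffing crushmap.txt against the original before injecting it.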

Re: [ceph-users] Orphaned entries in Crush map

2018-02-16 Thread David Turner
What is the output of `ceph osd stat`? My guess is that they are still considered to be part of the cluster, and going through the full process of removing OSDs from your cluster is what you need to do. In particular `ceph osd rm 19`. On Fri, Feb 16, 2018 at 2:31 PM Karsten Becker
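A minimal check along those lines (the ids and the sample output are only illustrative):

  ceph osd stat      # prints counts like "46 osds: 46 up, 46 in" plus the osdmap epoch
  ceph osd rm 19     # removes the id from the osdmap if it is still registered
  ceph osd rm 20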

[ceph-users] Orphaned entries in Crush map

2018-02-16 Thread Karsten Becker
Hi. During the reorganization of my cluster I removed some OSDs. Obviously something went wrong for 2 of them, osd.19 and osd.20. If I get my current Crush map, decompile and edit it, I see 2 orphaned/stale entries for the former OSDs:

> device 16 osd.16 class hdd
> device 17 osd.17 class hdd
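For anyone following along, the usual way to get at those entries (a sketch; the file names are arbitrary) is:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text
  grep '^device' crushmap.txt                 # stale ids show up as extra "device N osd.N ..." lines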