We are running Ceph Pacific 16.2.9 with the ceph orchestrator. We made a mistake adding a disk to the cluster and immediately issued "ceph orch osd rm ### --replace --force" to remove it.
The OSD had no data on it at the time and was removed after just a few minutes, yet "ceph orch osd rm status" still shows it as "draining", and "ceph osd df" reports -1 PGs for the OSD being removed. So: why is the simple act of removal taking so long, and can we abort it and remove that OSD manually somehow?

Note: the cluster is also doing a rebalance while this is going on, but the OSD being removed never had any data and should not be affected by the rebalance.

Thanks!

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
