I assume you also did ceph osd crush remove osd.<id>. When you removed the OSD
that was down/out and had already been rebalanced away from, you changed the
CRUSH weight of the host it was on, and that weight change triggers additional
backfilling to rebalance data across the CRUSH map.
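To make the effect concrete, here is a toy sketch of why lowering a host's
weight remaps placement groups. This is not Ceph's actual straw2 code; it uses
weighted rendezvous hashing with made-up host names and weights purely to
illustrate the principle: when one host's weight drops (e.g. an OSD is
crush-removed from it), only PGs that were mapped to that host can move.

```python
import hashlib
import math

def score(pg: int, host: str, weight: float) -> float:
    # Weighted rendezvous hashing: derive a uniform value in (0, 1) from
    # hash(pg, host), then scale by the host's weight. Higher score wins.
    digest = hashlib.sha256(f"{pg}:{host}".encode()).digest()
    u = (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 1)
    return -weight / math.log(u)

def place(pg: int, weights: dict) -> str:
    # A PG lands on the host with the highest score for it.
    return max(weights, key=lambda host: score(pg, host, weights[host]))

# Hypothetical cluster: three hosts, each holding two 1.0-weight OSDs.
before = {"hostA": 2.0, "hostB": 2.0, "hostC": 2.0}
# One OSD crush-removed from hostB, so hostB's weight drops to 1.0.
after = {"hostA": 2.0, "hostB": 1.0, "hostC": 2.0}

moved = [pg for pg in range(1000) if place(pg, before) != place(pg, after)]
print(f"{len(moved)} of 1000 PGs remapped after hostB's weight dropped")
```

Every remapped PG in this sketch was previously on hostB: the other hosts'
scores are untouched, so their PGs stay put. The backfilling you saw is the
real cluster doing the same kind of remapping after the host weight changed.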
Turner | Cloud Operations Engineer | StorageCraft Technology
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of M Ranga Swami
Sent: Thursday, December 01, 2016 3:03 AM
Subject: [ceph-users] node and its OSDs down...
One of my ceph nodes with 20 OSDs went down... After a couple of hours, ceph
health was back in OK state.
Now, I tried to remove those OSDs, which were in down state, from the ceph
cluster using "ceph osd remove osd.<id>".
Then the ceph cluster started rebalancing... which is strange, because those
OSDs had been down for a long time and health was also OK.
My question - why did recovery or rebalancing start when I removed the OSD
(which was already down)?