OK, I remember now: if I don't remove the OSD from the CRUSH map, ceph-disk
will allocate a new OSD ID and the old one will hang around as a zombie.
That changes the host/rack/etc. weights and causes a cluster-wide rebalance.
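For reference, the low-movement replacement discussed in this thread might look roughly like the sketch below. This is a hedged outline, not a definitive procedure: osd.12 is a placeholder for the failed OSD, the exact mkfs/activation steps depend on how the OSD was deployed, and paths may differ on your systems.

```shell
# Keep the cluster from marking OSDs out, so no rebalance starts:
ceph osd set noout

# Remove only the auth key and the osdmap entry. Leave the CRUSH entry
# in place so the host/rack weights do not change:
ceph auth del osd.12
ceph osd rm 12

# ... power off, physically replace the failed disk, power back on ...

# Recreate the OSD. With the old ID freed in the osdmap, the lowest
# free ID (12 here, in theory) should be reused, so the CRUSH entry
# still matches and only the new disk gets backfilled:
ceph osd create
ceph-osd -i 12 --mkfs --mkkey
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-12/keyring

# Start the daemon with your init system, wait for recovery, then:
ceph osd unset noout
```

The key design point, as discussed below, is that the CRUSH entry for osd.12 is never removed, so the CRUSH weights of the host and rack never change and CRUSH keeps mapping the same PGs to the same slot.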

On Tue, Apr 14, 2015 at 9:31 AM, Robert LeBlanc <[email protected]>
wrote:

> Hmmm... I've been deleting the OSD (ceph osd rm X; ceph osd crush rm
> osd.X) along with removing the auth key. This has caused data movement, but
> reading your reply and thinking it over makes me think it should be done
> differently: I should just remove the auth key and leave the OSD in the
> CRUSH map. That should work; I'll test it on my cluster.
>
> I'd still like to know the difference between norecover and nobackfill if
> anyone knows.
>
> On Mon, Apr 13, 2015 at 7:40 PM, Francois Lafont <[email protected]>
> wrote:
>
>> Hi,
>>
>> Robert LeBlanc wrote:
>>
>> > What I'm trying to achieve is minimal data movement when I have to
>> service
>> > a node to replace a failed drive. [...]
>>
>> I may be saying something stupid, but it seems to me that this is
>> exactly the goal of the "noout" flag, isn't it?
>>
>> 1. ceph osd set noout
>> 2. An old OSD disk fails; there is no rebalancing of data because noout
>> is set, the cluster is just degraded.
>> 3. You remove from the cluster the OSD daemon that used the old disk.
>> 4. You power off the host, replace the old disk with a new one, and
>> restart the host.
>> 5. You create a new OSD on the new disk.
>>
>> With these steps there will be no movement of data, except during step 5,
>> when the data is recreated on the new disk (but that is normal and
>> desired).
>>
>> Sorry in advance if there is something I'm missing in your problem.
>> Regards.
>>
>>
>> --
>> François Lafont
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
