>>>>>>> [...] is being balanced off of, set the crush weight for that osd to
>>>>>>> 0.0. Then when you fully remove the disk from the cluster it will not
>>>>>>> do any additional backfilling. Any change to the crush map will likely
>>>>>>> move data around, even if you're removing an already "removed" osd.
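
[A minimal sketch of the weight-to-zero approach described above; "osd.<id>"
is a hypothetical placeholder, not a value from the thread, and the commands
are the stock ceph CLI of that era:]

    # Drain the OSD with a single crush-map change: weight 0.0 makes all
    # of its data backfill elsewhere before the disk is touched again.
    ceph osd crush reweight osd.<id> 0.0

    # Watch backfill progress; wait for HEALTH_OK before removing the OSD.
    ceph -s
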
>>>>> Swami
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Dec 1, 2016 at 8:07 PM, David Turner <
>>>>> david.tur...@storagecraft.com> wrote:
>>>>>
>>>>>> I assume [...]
> *From:* M Ranga Swami Reddy [swamire...@gmail.com]
> *Sent:* Thursday, December 01, 2016 11:45 PM
> *To:* David Turner
> *Cc:* ceph-users
> *Subject:* Re: [ceph-users] node and its OSDs down...
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of M Ranga
Swami Reddy [swamire...@gmail.com]
Sent: Thursday, December 01, 2016 3:03 AM
To: ceph-users
Subject: [ceph-users] node and its OSDs down...
Hello,
One of my ceph nodes with 20 OSDs went down... After a couple of hours,
ceph health is in the OK state.
Now I tried to remove those OSDs, which were in the down state, from the
ceph cluster using "ceph osd rm osd.<id>".
Then the ceph cluster started rebalancing... which is strange, because
those OSDs are down.
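
[For reference, a sketch of the usual OSD decommission sequence from the
Ceph docs of that era; "<id>"/"osd.<id>" are placeholders. The crush-map
edit in the middle is what triggers rebalancing, even when the OSD is
already down and holds no data:]

    ceph osd out <id>               # mark it out (a no-op if already out)
    ceph osd crush remove osd.<id>  # remove it from the crush map; this
                                    # map change is what re-shuffles PGs
    ceph auth del osd.<id>          # delete its cephx key
    ceph osd rm <id>                # finally remove it from the osd map
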