Hi,

thanks for this clarification.

I'm running a 7-node cluster, so this risk should be manageable.

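For the archives, this is roughly how I intend to apply Wido's hint below. The
PG id 2.1f and the osd numbers are only placeholders for my own setup, so
please read it as a sketch rather than a recipe:

# upmap needs luminous or newer clients
ceph osd set-require-min-compat-client luminous

# find the nearfull OSDs (check the %USE column) and the PGs mapped to one of them
ceph osd df
ceph pg ls-by-osd osd.5

# check where a candidate PG currently lives (up/acting set) and how the
# hosts/racks are laid out, so the new mapping keeps one copy per failure domain
ceph pg map 2.1f
ceph osd tree

# remap PG 2.1f from the nearfull osd.5 to the emptier osd.12
ceph osd pg-upmap-items 2.1f 5 12

# remove the mapping again if it turns out to be a mistake
ceph osd rm-pg-upmap-items 2.1f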

On 16.03.2020 at 16:57, Anthony D'Atri wrote:
> He means that if, e.g., you enforce 1 copy of a PG per rack, any upmaps you
> enter must not result in 2 or 3 copies ending up in the same rack.  If your
> CRUSH policy is one copy per *host*, the danger is even higher that you could
> have data become unavailable or even lost in case of a failure.
>
>> On Mar 16, 2020, at 7:45 AM, Thomas Schneider <[email protected]> wrote:
>>
>> Hi Wido,
>>
>> can you please share some detailed instructions how to do this?
>> And what do you mean with "respect your failure domain"?
>>
>> THX
>>
>> On 04.03.2020 at 11:27, Wido den Hollander wrote:
>>> On 3/4/20 11:15 AM, Thomas Schneider wrote:
>>>> Hi,
>>>>
>>>> The Ceph balancer is not working correctly; there is an open bug report
>>>> <https://tracker.ceph.com/issues/43752> about it, too.
>>>>
>>>> As long as this issue is not solved, I need a workaround because I get more
>>>> and more warnings about "nearfull osd(s)".
>>>>
>>>> Therefore my question is:
>>>> How can I forcibly move PGs from a full OSD to an empty OSD?
>>> Yes, you could manually create upmap items to map PGs to a specific OSD
>>> and offload another one.
>>>
>>> This is what the balancer also does. Keep in mind though that you should
>>> respect your failure domain (host, rack, etc.) when creating these mappings.
>>>
>>> Wido
>>>
>>>> THX
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
