> From: huang jun
> To: Tarek Zegar
> Cc: Paul Emmerich, Ceph Users <ceph-users@lists.ceph.com>
> Date: 06/08/2019 05:27 AM
> Subject: [EXTERNAL] Re: [ceph-users] Reweight OSD to 0, why doesn't report degraded if UP set under Pool Size
I think writes will also go to osd.4 in this case, because osd.4 is not
down, so Ceph doesn't consider the PG to have any OSD down.
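Huang's point can be sketched with a toy model (hypothetical names, not Ceph source): clients write to every OSD in the PG's acting set, and an OSD that was reweighted to 0 but is still up remains in that set until backfill moves the data off it.

```python
# Toy sketch of the behaviour described above (not real Ceph code).

def write_targets(acting, down_osds):
    """Writes go to every acting-set OSD that is actually up."""
    return [o for o in acting if o not in down_osds]

acting = [6, 5, 4]            # osd.4 reweighted to 0 but still up
print(write_targets(acting, down_osds=set()))   # [6, 5, 4]
print(write_targets(acting, down_osds={4}))     # [6, 5] -- only now is a copy missing
```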
> From: Paul Emmerich
> To: Tarek Zegar
> Date: 06/07/2019 05:25 AM
> Subject: Re: [ceph-users] Reweight OSD to 0, why doesn't report degraded if UP set under Pool Size
remapped no longer triggers a health warning in nautilus.
Your data is still there, it's just on the wrong OSD if that OSD is still
up and running.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
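Paul's distinction can be sketched as a toy classifier (hypothetical, not Ceph source): a PG whose up and acting sets differ is merely remapped as long as all of its copies are still live, and in Nautilus that state alone no longer raises a health warning; degraded means copies are actually missing.

```python
# Toy classifier for the PG states discussed above (not real Ceph code).

def pg_state(up, acting, pool_size, down_osds):
    live = [o for o in acting if o not in down_osds]
    if len(live) < pool_size:
        return "degraded"        # copies actually missing
    if up != acting:
        return "remapped"        # data intact, just on the "wrong" OSDs
    return "active+clean"

# osd.4 reweighted to 0: CRUSH drops it from the up set, but the data
# is still there and being served, so the PG is only remapped:
print(pg_state(up=[6, 5], acting=[6, 5, 4], pool_size=3, down_osds=set()))  # remapped
```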
For testing purposes I set a bunch of OSDs to 0 weight; this correctly
forces Ceph not to use those OSDs. I took enough out that the UP set
only had the pool's min size number of OSDs (i.e. 2 OSDs).
Two questions:
1. Why doesn't the acting set eventually match the UP set and simply point
to [6,5] only?
2.
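The experiment above can be sketched with a toy placement model (hypothetical, a deliberate stand-in for CRUSH's pseudo-random placement, not the real algorithm): an OSD reweighted to 0 is never chosen for a PG's up set, so once enough OSDs are weighted out, the up set shrinks to the remaining eligible OSDs.

```python
# Toy stand-in for CRUSH placement (not the real algorithm): pick up
# to `size` OSDs with nonzero reweight, lowest id first.

def choose_up_set(weights, size):
    eligible = sorted(o for o, w in weights.items() if w > 0)
    return eligible[:size]

# osd.4 and osd.7 reweighted to 0 leave only two eligible OSDs for a
# size-3 pool, so the up set shrinks to [5, 6]:
weights = {4: 0.0, 5: 1.0, 6: 1.0, 7: 0.0}
print(choose_up_set(weights, 3))  # [5, 6]
```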