On Wed, Oct 24, 2018 at 1:43 AM Florent B <[email protected]> wrote:

> Hi,
>
> On a Luminous cluster that has some misplaced and degraded objects after an
> outage:
>
> health: HEALTH_WARN
>             22100/2496241 objects misplaced (0.885%)
>             Degraded data redundancy: 964/2496241 objects degraded
> (0.039%), 3 pgs degraded
>
> I can see that Ceph gives priority to moving misplaced objects instead of
> repairing degraded ones.
>
> The number of misplaced objects is decreasing, while the number of degraded
> objects is not.
>
> Is this expected?


In general it’s not, but since you only have 3 degraded PGs and a much larger
number of misplaced objects, you have more likely hit some kind of edge
condition where the OSD-local scheduling decisions aren’t working out quite
right at the global level. Or it may be that there’s a CRUSH error and Ceph
can’t figure out where to place new replicas of those PGs at all.
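
If it is the CRUSH case, one rough way to check (the PG ID below is just a
placeholder; take the real IDs from "ceph health detail"):

    # list the PGs currently stuck in the degraded state
    ceph pg dump_stuck degraded

    # query one of them and compare its "up" and "acting" sets; an "up"
    # set shorter than the pool's replica size means CRUSH isn't finding
    # a home for the missing copy
    ceph pg 1.2a query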

Can you provide a full osd and pg dump, in addition to the full output of
“ceph -s”?
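
For reference, the usual commands for that are below (output is large, so
attaching files is easier to read than pasting inline):

    ceph -s          # overall cluster status and health summary
    ceph osd dump    # full OSD map: OSDs, pools, flags, weights
    ceph pg dump     # per-PG state, up/acting sets, object counts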



>
> Florent
>