Hi guys.
I keep getting PG inconsistencies on a small, 3-node
lab cluster, e.g.:
-> $ ceph health detail
HEALTH_ERR 4 scrub errors; Possible data damage: 4 pgs
inconsistent
[ERR] OSD_SCRUB_ERRORS: 4 scrub errors
[ERR] PG_DAMAGED: Possible data damage: 4 pgs inconsistent
pg 5.15 is active+clean+inconsistent, acting [5,3,4]
pg 5.4f is active+clean+inconsistent, acting [5,4,3]
pg 5.64 is active+clean+inconsistent, acting [5,3,4]
pg 5.78 is active+clean+inconsistent, acting [3,4,5]
I searched but failed to find a doc/howto covering the
relevant tweaking/settings - could you point me to such a
doc, if one exists?
Also, are there different best practices for small vs. large
clusters, and for manual vs. automatic PG heal/repair?
Say - should PG repair always be delegated to auto-repair,
or never?
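For context, this is roughly how I have been dealing with them by hand so far (standard Ceph commands; the PG id 5.15 is just one of the four from the output above, and the auto-repair option at the end is the knob I am asking about):

```shell
# Inspect what deep scrub actually found on one of the inconsistent PGs
rados list-inconsistent-obj 5.15 --format=json-pretty

# Trigger a manual repair of that PG (Ceph picks an authoritative copy)
ceph pg repair 5.15

# The automatic alternative I am unsure about: let OSDs repair
# inconsistencies found during scrub on their own
ceph config set osd osd_scrub_auto_repair true
```

I am mainly wondering whether enabling osd_scrub_auto_repair is considered safe on a small replicated cluster like this, or whether inspecting each PG first is still the recommended practice.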
many thanks, L.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io