Hi,

You should look for the root cause of the inconsistency first:
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-pg/#pgs-inconsistent
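For reference, the checks that page walks through look roughly like this (a minimal sketch; <pgid> is a placeholder for the PG id reported as inconsistent, which is not shown in this thread):

    ceph health detail                                        # shows which PG is flagged, e.g. "pg <pgid> is active+clean+inconsistent"
    rados list-inconsistent-pg default.rgw.buckets.index      # list the inconsistent PGs in the affected pool
    rados list-inconsistent-obj <pgid> --format=json-pretty   # show the affected objects/shards and the error type

The error type in the list-inconsistent-obj output (e.g. an omap digest mismatch vs. a read error) usually points at whether the problem is in the object data/omap itself or in the device/OSD holding it.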
- Etienne Menguy
[email protected]

> On 20 Oct 2021, at 09:21, Szabo, Istvan (Agoda) <[email protected]> wrote:
>
> Have you tried to repair the pg?
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---------------------------------------------------
> Agoda Services Co., Ltd.
> e: [email protected]
> ---------------------------------------------------
>
> On 2021. Oct 20., at 9:04, Glaza <[email protected]> wrote:
>
> Hi Everyone,
>
> I am in the process of upgrading Nautilus (14.2.22) to Octopus (15.2.14) on CentOS 7 (Mon/Mgr were additionally migrated to CentOS 8 beforehand). Each day I upgraded one host, and after all of its OSDs were up I manually compacted them one by one.
>
> Today (8 hosts upgraded, 7 still to go) I started getting errors like "Possible data damage: 1 pg inconsistent". The first time the acting set was [56,58,62], and I thought "OK": the osd.62 log has many lines like "osd.62 39892 class rgw_gc open got (1) Operation not permitted", so maybe rgw did not clean up some omaps properly and Ceph did not notice it until a scrub happened. But now I have an inconsistent PG with acting set [56,57,58], and none of those OSDs has the rgw_gc errors in its logs.
>
> All affected OSDs are Octopus 15.2.14 on NVMe, hosting the default.rgw.buckets.index pool. Has anyone had experience with this problem? Any help appreciated.
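On the repair question above, the usual sequence for an inconsistent PG is roughly the following (again a sketch; <pgid> is a placeholder, the actual PG id is not given in the thread):

    ceph pg deep-scrub <pgid>   # re-run a deep scrub to confirm the inconsistency is still there
    ceph pg repair <pgid>       # then let the primary OSD repair the PG from the authoritative copy

Keep in mind that repair only clears the symptom; if the index pool keeps turning up inconsistent after scrubs, the root cause (for example whatever produced the rgw_gc "Operation not permitted" messages during the mixed-version upgrade) still needs to be tracked down, as suggested above.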
