[ceph-users] Re: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools (bug 53663)

2022-02-10 Thread Christian Rohmann
Hey Stefan, thanks for getting back to me! On 10/02/2022 10:05, Stefan Schueffler wrote: since my last mail in December, we changed our ceph setup like this: we added one SSD OSD on each Ceph host (which were pure HDD before). Then, we moved the problematic pool "de-dus5.rgw.buckets.index"
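Moving a pool onto the new SSD OSDs, as described above, is typically done with a device-class CRUSH rule. A minimal sketch, assuming the default CRUSH root and host failure domain (the rule name is hypothetical; the pool name is taken from the message):

```shell
# Create a replicated CRUSH rule restricted to OSDs with device class "ssd".
ceph osd crush rule create-replicated replicated-ssd default host ssd

# Point the bucket index pool at the new rule; Ceph rebalances the
# pool's PGs onto the SSD OSDs automatically.
ceph osd pool set de-dus5.rgw.buckets.index crush_rule replicated-ssd
```

Note the data migration this triggers can itself generate significant backfill load on a small cluster.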

[ceph-users] Re: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools (bug 53663)

2022-02-08 Thread Christian Rohmann
Hey there again, there is now a question from Neha Ojha in https://tracker.ceph.com/issues/53663 about providing OSD debug logs for a manual deep-scrub on (inconsistent) PGs. I already provided the logs of two of those deep-scrubs via ceph-post-file. But since data inconsistencies are
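Capturing OSD debug logs for a manual deep-scrub, as asked for in the tracker issue, might look like the following sketch (the PG id and log path are placeholders, not taken from the thread):

```shell
# Raise OSD log verbosity so the scrub's omap comparison is logged.
ceph tell osd.* injectargs --debug_osd 20 --debug_ms 1

# Trigger a deep-scrub on the inconsistent PG (placeholder id).
ceph pg deep-scrub 7.1a

# After the scrub completes, restore the default log levels.
ceph tell osd.* injectargs --debug_osd 1/5 --debug_ms 0/5

# Upload the resulting log to the Ceph developers; ceph-post-file
# prints an id to reference in the tracker issue.
ceph-post-file /var/log/ceph/ceph-osd.12.log
```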

[ceph-users] Re: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools (bug 53663)

2022-02-07 Thread Christian Rohmann
Hello Ceph-Users! On 22/12/2021 00:38, Stefan Schueffler wrote: The other problem, regarding the OSD scrub errors, is this: ceph health detail shows "PG_DAMAGED: Possible data damage: x pgs inconsistent." Every now and then new PGs become inconsistent. All inconsistent PGs belong to the
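For readers hitting the same symptom, a sketch of the usual inspection steps (the PG id is a placeholder):

```shell
# List the PGs currently flagged inconsistent.
ceph health detail | grep inconsistent

# Show per-object detail for one PG; for bug 53663 the shards differ
# in their omap_digest.
rados list-inconsistent-obj 7.1a --format=json-pretty

# Ask the primary OSD to repair the PG from an authoritative copy.
ceph pg repair 7.1a
```

Whether `pg repair` is safe depends on which replica is authoritative, so inspecting the list-inconsistent-obj output first is the prudent order.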

[ceph-users] Re: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools (bug 53663)

2021-12-21 Thread Christian Rohmann
Thanks for your response Stefan, On 21/12/2021 10:07, Stefan Schueffler wrote: Even without adding a lot of rgw objects (only a few PUTs per minute), we have thousands and thousands of rgw bucket.sync log entries in the rgw log pool (this seems to be a separate problem), and as such we
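To gauge the log-pool growth Stefan describes, one could inspect the omap entries directly. A sketch, assuming the zone's log pool follows the usual `<zone>.rgw.log` naming (pool and object names here are placeholders):

```shell
# List objects in the RGW log pool.
rados -p de-dus5.rgw.log ls | head

# Count omap entries on one log shard object (placeholder name).
rados -p de-dus5.rgw.log listomapkeys data_log.0 | wc -l

# Check whether multisite sync is actually configured, since sync
# logs should not accumulate on a single-zone setup.
radosgw-admin sync status
```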

[ceph-users] Re: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools (bug 53663)

2021-12-21 Thread Christian Rohmann
Hello Eugen, On 20/12/2021 22:02, Eugen Block wrote: you wrote that this cluster was initially installed with Octopus, so no Ceph upgrade? Are all RGW daemons on the exact same Ceph (minor) version? I remember one of our customers reporting inconsistent objects on a regular basis

[ceph-users] Re: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools (bug 53663)

2021-12-20 Thread Eugen Block
Hi, you wrote that this cluster was initially installed with Octopus, so no Ceph upgrade? Are all RGW daemons on the exact same Ceph (minor) version? I remember one of our customers reporting inconsistent objects on a regular basis although no hardware issues were detectable. They
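Answering Eugen's version question is quick to check on the cluster itself:

```shell
# Aggregated release counts per daemon type (mon, mgr, osd, rgw, ...).
ceph versions

# If the aggregate shows a mix, query each OSD individually.
ceph tell osd.* version
```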