Hey Stefan,
thanks for getting back to me!
On 10/02/2022 10:05, Stefan Schueffler wrote:
since my last mail in December, we changed our ceph setup like this:
we added one SSD OSD on each ceph host (which were pure HDD before). Then we moved
the problematic pool "de-dus5.rgw.buckets.index"
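(For reference, such a move is usually done with a device-class CRUSH rule plus
re-pointing the pool; a minimal sketch, where "replicated-ssd" is only an example
rule name and the default root with a host failure domain is assumed:

  ceph osd crush rule create-replicated replicated-ssd default host ssd
  ceph osd pool set de-dus5.rgw.buckets.index crush_rule replicated-ssd

After that the index pool's PGs backfill onto the new SSD OSDs.)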
Hey there again,
Neha Ojha has now asked in
https://tracker.ceph.com/issues/53663
about providing OSD debug logs for a manual deep-scrub on (inconsistent)
PGs.
I have already provided the logs of two of those deep-scrubs via
ceph-post-file.
But since data inconsistencies are
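(For anyone who wants to collect the same kind of logs, a sketch of how that can
be done, with OSD id, PG id and log path as placeholders and the debug level only
as an example:

  ceph tell osd.<id> config set debug_osd 20
  ceph pg deep-scrub <pgid>
  ceph tell osd.<id> config set debug_osd 1/5
  ceph-post-file /var/log/ceph/ceph-osd.<id>.log

ceph-post-file prints a tag that can then be referenced in the tracker issue.)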
Hello Ceph-Users!
On 22/12/2021 00:38, Stefan Schueffler wrote:
The other problem, regarding the OSD scrub errors, is this:
ceph health detail shows "PG_DAMAGED: Possible data damage: x pgs
inconsistent."
Every now and then new PGs become inconsistent. All inconsistent PGs
belong to the
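(To see what exactly is inconsistent, the affected PGs and objects can be listed
like this, with pool name and PG id as placeholders:

  rados list-inconsistent-pg <pool>
  rados list-inconsistent-obj <pgid> --format=json-pretty
  ceph pg repair <pgid>

The per-shard errors in the JSON output usually show whether it is an omap or
data digest mismatch before deciding on a repair.)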
Thanks for your response, Stefan,
On 21/12/2021 10:07, Stefan Schueffler wrote:
Even without adding a lot of rgw objects (only a few PUTs per minute), we have
thousands and thousands of rgw bucket.sync log entries in the rgw log pool
(this seems to be a separate problem), and as such we
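(To get a feel for what is accumulating there, the log pool can be inspected
directly; a sketch, assuming the log pool follows the same naming as the index
pool, i.e. de-dus5.rgw.log, and with the bucket name as a placeholder:

  rados -p de-dus5.rgw.log ls | wc -l
  rados -p de-dus5.rgw.log ls | head
  radosgw-admin bilog list --bucket=<bucket>

That at least shows which kind of log objects dominate the pool.)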
Hello Eugen,
On 20/12/2021 22:02, Eugen Block wrote:
you wrote that this cluster was initially installed with Octopus, so
there has been no ceph upgrade? Are all RGW daemons on exactly the same
ceph (minor) version?
I remember one of our customers reporting inconsistent objects on a
regular basis
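(A quick way to verify that is ceph versions, which summarizes the running
version for each daemon type, RGW included:

  ceph versions

Any mixed minor versions within a daemon type show up directly in that output.)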
Hi,
you wrote that this cluster was initially installed with Octopus, so
there has been no ceph upgrade? Are all RGW daemons on exactly the same
ceph (minor) version?
I remember one of our customers reporting inconsistent objects on a
regular basis although no hardware issues were detectable. They