[ceph-users] Re: PG damaged "failed_repair"

2024-03-25 Thread Romain Lebbadi-Breteau
flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm snappy compression_mode aggressive application rbd The error seems to come from a software error in Ceph. I see this error in the logs: "FAILED ceph_assert(clone_overlap.count(clone))". Thanks, Romain Lebbadi-Breteau
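
One way to pull the full backtrace behind this assert is the mgr crash module; a minimal sketch, assuming the module is enabled and that the crashing daemon is osd.3 as described later in the thread (on a cephadm host the systemd unit would be ceph-&lt;fsid&gt;@osd.3 instead):

    ceph crash ls                      # list recent daemon crashes with their crash ids
    ceph crash info <crash-id>         # full backtrace; look for clone_overlap.count(clone)
    journalctl -u ceph-osd@3 | grep -A20 'FAILED ceph_assert'   # same trace from the journal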

[ceph-users] Re: PG damaged "failed_repair"

2024-03-10 Thread Romain Lebbadi-Breteau
…stripe_width 0 compression_algorithm snappy compression_mode aggressive application rbd The error seems to come from a software error in Ceph. In the logs, I get the message "FAILED ceph_assert(clone_overlap.count(clone))". Thanks, Romain Lebbadi-Breteau
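
Since clone_overlap is snapshot-clone metadata, the snapset consistency report is a natural next thing to check; a sketch, assuming the damaged PG is 2.1b as reported below in the thread:

    rados list-inconsistent-snapset 2.1b --format=json-pretty   # per-clone snapset errors
    ceph pg 2.1b query                                          # peering and recovery state of the PG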

[ceph-users] Re: PG damaged "failed_repair"

2024-03-08 Thread Romain Lebbadi-Breteau
…1 crashes
23:48: I start osd.3, it crashes in less than a minute
23:49: After I mark osd.3 "in" and start it again, it comes back online with osd.0 and osd.11 soon after
Best regards, Romain Lebbadi-Breteau
On 2024-03-08 3:17 a.m., Eugen Block wrote: Hi, can you share more details
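
The mark-"in"-and-restart sequence from that timeline would look roughly like this, assuming a package-based install with the ceph-osd@3 systemd unit (cephadm hosts use ceph-&lt;fsid&gt;@osd.3):

    ceph osd in osd.3                  # clear the "out" flag so the OSD takes PGs again
    systemctl start ceph-osd@3         # restart the daemon after the crash
    ceph osd tree | grep -E 'osd\.(0|3|11)'   # confirm the three OSDs report "up"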

[ceph-users] PG damaged "failed_repair"

2024-03-06 Thread Romain Lebbadi-Breteau
 …                         osd.0   up   1.0  1.0
 5   hdd   9.09520         osd.5   up   1.0  1.0
 9   hdd   9.09520         osd.9   up   1.0  1.0
romain:step@alpha-cen ~  $ sudo rados list-inconsistent-obj 2.1b
{"epoch":9787,"inconsistents"…
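
A more readable way to inspect that report, and the repair attempt that the subject line says ends in "failed_repair"; a sketch, assuming jq is available on the host:

    sudo rados list-inconsistent-obj 2.1b --format=json-pretty              # pretty-print the full report
    sudo rados list-inconsistent-obj 2.1b | jq '.inconsistents[].errors'    # just the per-object error flags
    sudo ceph pg repair 2.1b                                                # ask the primary OSD to repair the PG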