[ceph-users] Re: PG damaged "failed_repair"

2024-03-25 Thread Romain Lebbadi-Breteau
compression_algorithm snappy compression_mode aggressive application rbd The error seems to come from a software error in Ceph. I see this error in the logs: "FAILED ceph_assert(clone_overlap.count(clone))" Thanks, Romain Lebbadi-Breteau
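
For reference, pool settings like the ones quoted above can be read back with the standard Ceph CLI; a minimal sketch, assuming the pool is named "rbd" (the pool name is an assumption):

    # query the compression settings of the pool (pool name assumed)
    ceph osd pool get rbd compression_algorithm
    ceph osd pool get rbd compression_mode
    # show which application (here rbd) is enabled on the pool
    ceph osd pool application get rbd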

[ceph-users] Re: PG damaged "failed_repair"

2024-03-10 Thread Romain Lebbadi-Breteau
aggressive application rbd The error seems to come from a software error in Ceph. In the logs, I get the message "FAILED ceph_assert(clone_overlap.count(clone))". Thanks, Romain Lebbadi-Breteau
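
To confirm which OSD daemons are hitting this assertion, grepping the OSD logs is the usual first step; a sketch assuming a package-based deployment with default log paths and osd.3 from this thread (both assumptions):

    # search one OSD's log for the failed assertion (path and id assumed)
    grep 'FAILED ceph_assert(clone_overlap.count(clone))' /var/log/ceph/ceph-osd.3.log
    # on systemd-managed daemons the message may be in the journal instead
    journalctl -u ceph-osd@3 | grep clone_overlap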

[ceph-users] Re: PG damaged "failed_repair"

2024-03-08 Thread Romain Lebbadi-Breteau
23:49: After I mark osd.3 "in" and start it again, it comes back online with osd.0 and osd.11 soon after Best regards, Romain Lebbadi-Breteau On 2024-03-08 3:17 a.m., Eugen Block wrote: Hi, can you share more details? Which OSD are you trying to get out, the primary osd.3? Can y
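
Marking an OSD out and back in uses the standard CLI; a minimal sketch assuming osd.3 from this thread and a systemd unit named ceph-osd@3 (the unit name is an assumption and differs on cephadm deployments):

    # take the OSD out of data placement, then stop the daemon
    ceph osd out osd.3
    sudo systemctl stop ceph-osd@3
    # bring it back: start the daemon again and mark it in
    sudo systemctl start ceph-osd@3
    ceph osd in osd.3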

[ceph-users] PG damaged "failed_repair"

2024-03-06 Thread Romain Lebbadi-Breteau
  1.0
  5    hdd   9.09520          osd.5      up   1.0  1.0
  9    hdd   9.09520          osd.9      up   1.0  1.0
romain:step@alpha-cen ~ $ sudo rados list-inconsistent-obj 2.1b
{"epoch":9787,"inconsistents":[]}
romain:step@alpha-ce
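
The usual inspection and repair sequence for a PG in this state is shown below; a sketch assuming PG 2.1b from the thread. Note that list-inconsistent-obj can legitimately return an empty list, as above, until a fresh deep scrub has recorded the errors:

    # re-run a deep scrub so the inconsistency report is current
    ceph pg deep-scrub 2.1b
    # inspect the inconsistent objects once the scrub completes
    sudo rados list-inconsistent-obj 2.1b --format=json-pretty
    # attempt an automatic repair of the PG
    ceph pg repair 2.1b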