Hi,

our Ceph cluster reported an inconsistent PG, so we started a repair on it:
# ceph pg repair 4.b10

# ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
[ERR] OSD_SCRUB_ERRORS: 1 scrub errors
[ERR] PG_DAMAGED: Possible data damage: 1 pg inconsistent
    pg 4.b10 is active+clean+scrubbing+deep+inconsistent+repair, acting [109,10,214,148,60,58,199,129,326,165,35]
# ceph pg stat
6401 pgs: 1 active+clean+scrubbing+deep+inconsistent+repair, 147 active+clean+scrubbing, 146 active+clean+scrubbing+deep, 6107 active+clean; 1.1 PiB data, 1.5 PiB used, 2.4 PiB / 3.9 PiB avail; 19 MiB/s rd, 11 MiB/s wr, 14 op/s
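For reference, my understanding is that the per-object details behind the scrub error can be listed with the standard rados command (only the PG id is specific to our cluster):

# rados list-inconsistent-obj 4.b10 --format=json-pretty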
When checking the OSD log on the server hosting osd.109, we just see the following messages repeating every couple of seconds (which I don't recall having seen when we used pg repair on previous occasions):
[...]
2025-07-04T09:08:05.076+0000 7f3a063a7700 0 log_channel(cluster) log [INF] : osd.109 pg 4.b10s0 Deep scrub errors, upgrading scrub to deep-scrub
2025-07-04T09:08:05.076+0000 7f39f1b7e700 0 log_channel(cluster) log [DBG] : 4.b10 repair starts
2025-07-04T09:08:06.123+0000 7f3a063a7700 0 log_channel(cluster) log [INF] : osd.109 pg 4.b10s0 Deep scrub errors, upgrading scrub to deep-scrub
2025-07-04T09:08:06.123+0000 7f39f1b7e700 0 log_channel(cluster) log [DBG] : 4.b10 repair starts
2025-07-04T09:08:10.173+0000 7f3a063a7700 0 log_channel(cluster) log [INF] : osd.109 pg 4.b10s0 Deep scrub errors, upgrading scrub to deep-scrub
2025-07-04T09:08:10.196+0000 7f39f1b7e700 0 log_channel(cluster) log [DBG] : 4.b10 repair starts
2025-07-04T09:08:12.162+0000 7f3a063a7700 0 log_channel(cluster) log [INF] : osd.109 pg 4.b10s0 Deep scrub errors, upgrading scrub to deep-scrub
2025-07-04T09:08:12.162+0000 7f39f1b7e700 0 log_channel(cluster) log [DBG] : 4.b10 repair starts
[...]
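If it helps with diagnosis: I assume the PG's current scrub state and the OSD's scrub-related settings could be checked with the standard commands below (osd.109 is the primary in our case):

# ceph pg 4.b10 query | grep -A8 '"scrubber"'
# ceph config show osd.109 | grep scrub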
Is this something to worry about?

Best
Dietmar