https://tracker.ceph.com/issues/41255 is probably reporting the same issue.
On Thu, Aug 22, 2019 at 6:31 PM Lars Täuber wrote:
Hi there!
We also experience this behaviour on our cluster while it is moving PGs.
# ceph health detail
HEALTH_ERR 1 MDSs report slow metadata IOs; Reduced data availability: 2 pgs
inactive; Degraded data redundancy (low space): 1 pg backfill_toofull
MDS_SLOW_METADATA_IO 1 MDSs report slow metadata IOs
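For what it's worth, the stuck-PG query lists the inactive PGs directly, along
with the up/acting OSD sets they are waiting on (standard CLI; the output
format varies by release):

# ceph pg dump_stuck inactive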
Just chiming in to say that I too had some issues with backfill_toofull PGs,
despite no OSDs being in the backfillfull state, although there were some
nearfull OSDs.
I was able to get through it by reweighting down the OSD that was reported as
the backfill target by ceph pg dump | grep 'backfill_toofull'.
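In case it helps anyone hitting the same thing, the rough sequence was the
following; osd.12 and the 0.95 weight are placeholders, not values from my
cluster. First find the stuck PG and its target OSD:

# ceph pg dump | grep 'backfill_toofull'

then check per-OSD utilization to see which OSDs are nearfull:

# ceph osd df

and temporarily reweight the target OSD down so the data maps elsewhere:

# ceph osd reweight 12 0.95

Once the PG goes clean, the reweight can be raised back to 1.0.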
Hello
After increasing the number of PGs in a pool, ceph status is reporting
"Degraded data redundancy (low space): 1 pg backfill_toofull", but I
don't understand why, because all OSDs seem to have enough space.
ceph health detail says:
pg 40.155 is active+remapped+backfill_toofull, acting
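A quick way to compare per-OSD usage against the thresholds that drive
backfill_toofull (the ratio values below are the Ceph defaults; a given
cluster may override them):

# ceph osd df
# ceph osd dump | grep ratio
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85

A PG is normally flagged backfill_toofull only when an OSD in its backfill
target set is over backfillfull_ratio, which is why the reports above, with
only nearfull OSDs, are surprising.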