On 21/08/2023 12:38, Frank Schilder wrote:
Hi Angelo,

was this cluster upgraded (major version upgrade) before these issues started?
We observed similar problems after certain major-version upgrade paths, and the
only way to fix them was to re-deploy all OSDs step by step.

You can try a RocksDB compaction first. If that doesn't help, rebuilding the
OSDs might be the only way out.
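For reference, compaction can be triggered per OSD with the standard admin
commands below; osd.0 is just a placeholder id, and the offline variant assumes
the default data path for that OSD:

    # Online: ask a running OSD to compact its RocksDB (repeat per OSD, or use osd.*)
    ceph tell osd.0 compact

    # Offline alternative: stop the OSD first, then compact its store directly
    systemctl stop ceph-osd@0
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact
    systemctl start ceph-osd@0

Doing this one OSD at a time keeps the cluster serving I/O while each OSD is
compacted.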

You should also confirm that all Ceph daemons are on the same version and that
require-osd-release reports the same major version as well:

ceph report | jq '.osdmap.require_osd_release'
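The daemon versions themselves can be checked alongside that, e.g.:

    # Summarize the running version of every daemon type in the cluster
    ceph versions

    # If require-osd-release lags behind after an upgrade, it can be raised
    # (only do this once all OSDs actually run that release):
    ceph osd require-osd-release quincy

If `ceph versions` shows a mix of releases, finish the upgrade before anything
else.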


Hey Frank,

No, this cluster was a clean install of 17.2.6 — all Quincy.

Angelo.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
