For anyone who encounters this in the future, I was able to resolve
the issue by finding the three OSDs that the object is on. One by one
I stopped each OSD, flushed the journal, and used the objectstore tool to
remove the data (sudo ceph-objectstore-tool --data-path
/var/lib/ceph/osd/ceph-19
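The per-OSD procedure described above might look roughly like this. This is a sketch, not the exact commands from the thread: the object name is a placeholder, OSD id 19 is taken from the path above, and the service commands depend on your init system.

```shell
# Repeat on each OSD holding a replica, one at a time.
# <object-name> is a placeholder -- substitute the object from the scrub errors.

sudo systemctl stop ceph-osd@19          # or "sudo stop ceph-osd id=19" on upstart
sudo ceph-osd -i 19 --flush-journal      # flush the journal before touching the store
sudo ceph-objectstore-tool \
    --data-path /var/lib/ceph/osd/ceph-19 \
    --journal-path /var/lib/ceph/osd/ceph-19/journal \
    '<object-name>' remove
sudo systemctl start ceph-osd@19
```

After all three replicas were cleaned up, a deep-scrub of the PG should confirm consistency.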
What I'm thinking about trying is using ceph-objectstore-tool to
remove the offending clone metadata. From the help, the syntax is:
ceph-objectstore-tool ... remove-clone-metadata
i.e. something like this for my object and expected clone from the log message:
ceph-objectstore-tool
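A concrete invocation might look like the following sketch. The object name, clone id, and OSD path are placeholders standing in for the values from the scrub log, and the tool must be run against a stopped OSD.

```shell
# Placeholders throughout -- substitute your object, clone id, and OSD path.
# The OSD must be stopped (and its journal flushed) before running this.
sudo ceph-objectstore-tool \
    --data-path /var/lib/ceph/osd/ceph-19 \
    --journal-path /var/lib/ceph/osd/ceph-19/journal \
    '<object-name>' remove-clone-metadata <clone-id>
```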
Hi Lincoln,
Yes, the object is 0 bytes on all OSDs. It has the same filesystem
date/time too. Before I removed the RBD image (migrated the disk to a
different pool) it was 4 MB on all the OSDs and the md5 checksum was the
same on all of them, so it seems that only the metadata is inconsistent.
Thanks for your suggestion, I
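For reference, the size/checksum comparison described above can be done directly on each OSD's filestore. This is a hypothetical sketch assuming the filestore on-disk layout; the PG id and object-name fragment are placeholders.

```shell
# Run on each OSD host holding a replica; compare sizes, timestamps, and
# checksums across replicas. <pgid> and <object-name> are placeholders.
find /var/lib/ceph/osd/ceph-19/current/<pgid>_head/ -name '*<object-name>*' \
    -exec ls -l {} \; -exec md5sum {} \;
```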
Hi Rich,
Is the object inconsistent and 0 bytes on all OSDs?
We ran into a similar issue on Jewel, where an object was empty across the
board but had inconsistent metadata. Ultimately it was resolved by doing a
"rados get" and then a "rados put" on the object. *However* that was a last
ditch
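The "rados get" / "rados put" workaround mentioned above might be sketched as follows. Pool, object, and PG names are placeholders, and it's prudent to keep the downloaded copy as a backup.

```shell
# Last-resort sketch: read the object out and write it back, then re-repair.
# <pool>, <object-name>, and <pgid> are placeholders.
rados -p <pool> get <object-name> /tmp/object.bak
rados -p <pool> put <object-name> /tmp/object.bak
ceph pg repair <pgid>
```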
Hi Everyone,
In our cluster running 0.94.10, we had a PG pop up as inconsistent
during a scrub. Previously when this has happened, running ceph pg repair
[pg_num] has resolved the problem. This time the repair runs but the PG
remains inconsistent.
~$ ceph health detail
HEALTH_ERR 1 pgs inconsistent; 2
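On Hammer (0.94) there is no "rados list-inconsistent-obj" (that arrived in Jewel), so the usual way to find the offending object is to grep the scrub errors out of the primary OSD's log. A sketch, with the PG id as a placeholder:

```shell
# Find which object the scrub flagged; run on the host of the PG's primary OSD.
# <pgid> is a placeholder for the inconsistent PG from "ceph health detail".
sudo grep -H 'ERR' /var/log/ceph/ceph-osd.*.log | grep '<pgid>'
```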