Oh, sorry, forgot to mention - this cluster is running jewel :(

On 13/02/18 12:10, John Spray wrote:
On Tue, Feb 13, 2018 at 10:38 AM, Josef Zelenka
<josef.zele...@cloudevelops.com> wrote:
Hi everyone, one of the clusters we are running for a client recently had a
power outage. It's currently in a working state, however 3 PGs were left
inconsistent, with this type of error in the log (when I attempt to
`ceph pg repair` it):

2018-02-13 09:47:17.534912 7f3735626700 -1 log_channel(cluster) log [ERR] :
repair 15.1e32 15:4c7eed31:::10002110e12.0000004b:head on disk size (0) does
not match object info size (4194304) adjusted for ondisk to (4194304)
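For what it's worth, the object name in that error already encodes the CephFS inode: the part before the dot is the inode number in hex, and the suffix is the stripe index (with the default layout, each object is a 4 MiB chunk of the file). A minimal sketch of decoding it, and of the inode-based lookup demonstrated on a scratch directory (on the real cluster the search root would be the CephFS mountpoint, e.g. /mnt/cephfs, a hypothetical path here):

```shell
#!/bin/sh
# Decode a CephFS data-pool object name: <inode-hex>.<stripe-index-hex>
obj="10002110e12.0000004b"
ino_hex="${obj%%.*}"                 # "10002110e12"
stripe_hex="${obj##*.}"              # "0000004b"
ino_dec=$(printf '%d' "0x${ino_hex}")
stripe=$(printf '%d' "0x${stripe_hex}")
echo "inode ${ino_dec}, stripe ${stripe}"   # stripe 75 -> bytes 75*4MiB..76*4MiB

# Demonstrate the inode->path lookup on a temp dir; on the cluster you
# would run: find /mnt/cephfs -inum "${ino_dec}"  (slow on a large tree)
tmp=$(mktemp -d)
touch "${tmp}/victim"
ino=$(stat -c %i "${tmp}/victim")    # decimal inode number of the test file
find "${tmp}" -inum "${ino}"         # prints the path back
rm -r "${tmp}"
```

Note that a full `find -inum` walk over 230TB of files is exactly the slow scan you want to avoid; it's shown only to illustrate the mapping, not as the recommended tool.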

I know this can be fixed by truncating the on-disk object to the expected
size, but it clearly means we've lost some data. This cluster is used for
CephFS only, so I'd like to find which files on the CephFS were affected. I
know the OSDs for that PG, and I know which PG and which object were affected,
so I hope it's possible. I found a 2015 entry in the mailing list that does
the reverse thing, as in: map a file to its PG/object. I have 230TB of data
in that cluster across a lot of files, so mapping them all would take a long
time. I hope there is a way to do this; if people here have any
idea/experience with this, it'd be much appreciated.
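The reverse (file to PG/object) direction mentioned above can be sketched like this, assuming the default 4 MiB object size and a data pool named "cephfs_data" (hypothetical pool name; the inode value below is the one decoded from the error above):

```shell
#!/bin/sh
# Build the data-pool object name for the first 4 MiB chunk of a file.
# On the real filesystem the inode would come from:
#   stat -c %i /mnt/cephfs/some/file     (hypothetical path)
ino=1099546299922
obj=$(printf '%x.%08x' "${ino}" 0)       # object 0 of this inode
echo "${obj}"                            # 10002110e12.00000000
# Then map the object to its PG and acting OSDs:
#   ceph osd map cephfs_data "${obj}"
```

Doing this per file is cheap, but enumerating all files first is what makes a full scan of 230TB impractical.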
We added a tool in luminous that does this:



Josef Zelenka

ceph-users mailing list
