Thank you, this is extremely helpful!

Unfortunately, none of the inodes mentioned in the `stray/<inode>` journal
entries appear in the output of `rados -p cephfs_data ls`.
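
For reference, the check was roughly along these lines (file names are just
illustrative):

# inodes.txt holds one hex inode number per line, taken from the unlink events
rados -p cephfs_data ls > all_objects.txt
grep -f inodes.txt all_objects.txt    # no matches for any of the deleted inodes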

Am I correct in assuming that this means they’re gone for good?

I did shut down the MDS with `ceph fs fail cephfs` when we noticed the issue,
but it appears that was too late.

-Mike

From: Gregory Farnum <gfar...@redhat.com>
Date: Tuesday, June 14, 2022 at 12:15 PM
To: Michael Sherman <sherm...@uchicago.edu>
Cc: ceph-users@ceph.io <ceph-users@ceph.io>
Subject: Re: [ceph-users] Possible to recover deleted files from CephFS?
On Tue, Jun 14, 2022 at 8:50 AM Michael Sherman <sherm...@uchicago.edu> wrote:
>
> Hi,
>
> We discovered that a number of files were deleted from our cephfs filesystem, 
> and haven’t been able to find current backups or snapshots.
>
> Is it possible to “undelete” a file by modifying metadata? Using 
> `cephfs-journal-tool`, I am able to find the `unlink` event for each file, 
> looking like the following:
>
> $ cephfs-journal-tool --rank cephfs:all event get 
> --path="images/060862a9-a648-4e7e-96e3-5ba3dea29eab" list
> …
> 2022-06-09 17:09:20.123155 0x170da7fc UPDATE:  (unlink_local)
>   stray5/10000001fee
>   images/060862a9-a648-4e7e-96e3-5ba3dea29eab
>
> I saw the disaster-recovery-tools mentioned here
> (https://docs.ceph.com/en/nautilus/cephfs/disaster-recovery-experts/#disaster-recovery-experts),
> but didn't know if they would be helpful in the case of a deletion.
>
> Thank you in advance for any help.

Once files are unlinked, they get moved into the stray directory, and
then into the purge queue once they are truly unused (no remaining hard
links, snapshots, or open file handles).

The purge queue processes them and deletes the backing objects.
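
(If it helps to see whether anything is still waiting to be purged, the MDS
exposes purge-queue perf counters; on the MDS host, something like the
following should show them, though the exact counter names vary by release:)

ceph daemon mds.<name> perf dump purge_queue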

So the first thing you should do is turn off the MDS, as that is what
performs the actual deletions.
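
For example (the fs name is a placeholder; how you stop the daemons depends
on your deployment):

ceph fs fail <fs name>            # marks the fs not joinable and fails its MDS ranks
# or stop the daemons directly on the MDS hosts, e.g. with systemd packaging:
systemctl stop ceph-mds.target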

If you've already found the unlink events, you know the inode numbers
you want. You can look in rados for the backing objects and just copy
them out (and reassemble them if the file was >4MB). CephFS files are
stored in RADOS with the pattern <inode number in hex>.<object number
in file>. If your cluster isn't too big, you can just:
rados -p <cephfs data pool> ls | grep 10000001fee
for the example file you referenced above. (Or more probably, dump the
listing into a file and search that for the inode numbers).
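
A rough sketch of pulling one file back out that way (pool and output names
are placeholders; it assumes the object-number suffix is zero-padded so a
lexical sort matches object order, which you should confirm against a real
object name):

rados -p <cephfs data pool> ls > objects.txt
grep '^10000001fee\.' objects.txt | sort > file_objects.txt
# fetch each piece, then concatenate in order (only matters for files >4MB)
while read -r obj; do
  rados -p <cephfs data pool> get "$obj" "part.$obj"
done < file_objects.txt
cat part.10000001fee.* > recovered_file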

If listing all the objects takes too long, you can construct the
object names in the other direction, which is simple enough, but I
can't recall offhand how many digits the <object number in file>
portion of the object name starts out with, so you'll have to look at
one and figure that out yourself. ;)
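
Something like this would do it (a sketch only: it assumes an 8-digit
zero-padded hex suffix, so confirm that against a real object name first,
and it stops at the first missing object, which can lose pieces of sparse
files):

ino=10000001fee
i=0
while rados -p <cephfs data pool> stat "$(printf '%s.%08x' "$ino" "$i")" >/dev/null 2>&1; do
  rados -p <cephfs data pool> get "$(printf '%s.%08x' "$ino" "$i")" "part.$(printf '%08x' "$i")"
  i=$((i+1))
done
cat part.* > recovered_file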


The disaster recovery tooling is really meant to recover a broken
filesystem; massaging it to get erroneously deleted files back into
the tree would be rough. The only way I can think of doing that is
using the procedure to recover into a new metadata pool and
performing just the cephfs-data-scan bits (because recovering the
metadata would obviously delete all the files again). But then your
tree (while self-consistent) would look strange, with files showing up
in their old locations and so on, so I wouldn't recommend it.
-Greg

> -Mike Sherman
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
