Re: [ceph-users] MDS corruption

2019-08-14 Thread ☣Adam
I was able to get this resolved, thanks again to Pierre Dittes! The reason the recovery did not work the first time I tried it was that I still had the filesystem mounted (or at least clients were still trying to mount it), which kept client sessions active. After rebooting all the machines
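
For anyone hitting the same thing, it is worth confirming that no client still holds the filesystem before re-running the recovery tools. A rough sketch, assuming a CephFS mount at /mnt/cephfs and an MDS daemon named mds.a (both placeholders for your own names):

  # on every client machine, unmount CephFS (kernel or FUSE mount)
  umount /mnt/cephfs

  # ask the MDS whether any client sessions remain
  ceph tell mds.a session ls

  # a leftover stale session can be evicted by its id
  ceph tell mds.a client evict id=<session-id>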

Re: [ceph-users] MDS corruption

2019-08-13 Thread Yan, Zheng
The nautilus version (14.2.2) of ‘cephfs-data-scan scan_links’ can fix the snaptable; hopefully it will fix your issue. You don't need to upgrade the whole cluster. Just install nautilus on a temp machine or compile ceph from source. On Tue, Aug 13, 2019 at 2:35 PM Adam wrote: > > Pierre Dittes helped
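
A sketch of how that could look in practice, assuming a throwaway host with network access to the cluster plus a copy of ceph.conf and an admin keyring, and a filesystem named cephfs (placeholder); exact repo and package steps depend on the distro:

  # on the temp machine, install only the 14.2.2 (nautilus) packages
  # (whichever package ships the cephfs-data-scan binary on your distro)

  # with the MDS daemons stopped, rebuild the snapshot-related links;
  # the --filesystem argument picks the fs by name and can be dropped
  # if there is only one filesystem
  cephfs-data-scan scan_links --filesystem cephfs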

Re: [ceph-users] MDS corruption

2019-08-13 Thread ☣Adam
Pierre Dittes helped me by pointing out that I needed to add --rank=yourfsname:all, and I ran the following steps from the disaster recovery page: journal export, dentry recovery, journal truncation, MDS table wipes (session, snap and inode), scan_extents, scan_inodes, scan_links, and cleanup (sketched below). Now all three of my MDS
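
For reference, a sketch of that sequence as it appears on the disaster recovery page, with yourfsname and cephfs_data as placeholders for the filesystem and data pool names (run with the MDS daemons stopped, and keep the exported journal as a backup):

  # back up the journal, then recover dentries from it
  cephfs-journal-tool --rank=yourfsname:all journal export backup.bin
  cephfs-journal-tool --rank=yourfsname:all event recover_dentries summary

  # truncate the journal and wipe the MDS tables
  cephfs-journal-tool --rank=yourfsname:all journal reset
  cephfs-table-tool yourfsname:all reset session
  cephfs-table-tool yourfsname:all reset snap
  cephfs-table-tool yourfsname:all reset inode

  # rebuild metadata from the objects in the data pool
  cephfs-data-scan scan_extents cephfs_data
  cephfs-data-scan scan_inodes cephfs_data
  cephfs-data-scan scan_links
  cephfs-data-scan cleanup cephfs_data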

[ceph-users] MDS corruption

2019-08-08 Thread ☣Adam
I had a machine with insufficient memory, and it seems to have corrupted data on my MDS. The filesystem seems to be working fine except when accessing specific files. The ceph-mds logs include entries like: mds.0.1596621 unhandled write error (2) No such file or directory, force