Hello Ceph users and readers,

I am facing a situation that is completely new to me (I'm fairly new to Ceph) with a CephFS 
cluster (Squid 19.2.2, deployed with cephadm): the data pool is intact (7.2 TB used, 
4.9 TB stored), inode objects exist in the rebuilt metadata pool, but when I 
try to mount the filesystem, it is empty.

What initially happened: I had two groups of drives, NVMe and spinning disks, 
with separate pools for each. At some point I deleted the NVMe pool and then 
removed the NVMe drives from the machine. I never realized that those drives 
also held the metadata for the spinning-rust pool, but here we are.


Version: 19.2.2 (Squid)
Metadata pool: ID 1 (cephfs.cephfs.meta), size=1 (no replicas :( , yes I know I 
messed up).
Data pool: ID 6 (ec_data), erasure coded with k=2, m=1.
All OSDs are up. 256 PGs are active+clean; quite a few more are inconsistent, 
degraded, undersized, remapped, and so on.

Here's what I've done since losing the metadata pool to the OSD purges. I tried 
to rebuild the metadata from the data pool by running (full invocations spelled 
out below):
1.  cephfs-data-scan init --force-init
2.  cephfs-data-scan scan_extents
3.  cephfs-data-scan scan_inodes
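
For completeness, with my names filled in (cephfs for the filesystem, ec_data 
for the data pool), what I ran looked roughly like the following; the exact 
arguments are from my reading of the disaster-recovery docs, so please tell me 
if I got them wrong:

  # recreate the base metadata objects; --force-init overwrites existing ones
  cephfs-data-scan init --force-init --filesystem cephfs

  # pass 1: scan the data pool objects to work out file sizes and layouts
  cephfs-data-scan scan_extents --filesystem cephfs ec_data

  # pass 2: inject inodes recovered from the data pool into the metadata pool
  cephfs-data-scan scan_inodes --filesystem cephfs ec_data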



ceph df shows 7.2 TiB stored in pool ec_data (which is the spinning-rust pool).
Inode objects do exist: running rados -p cephfs.cephfs.meta ls | grep "100." 
returns tons of objects (e.g. 1000005f1f3.00000000).
If I try to mount via ceph-fuse, it shows an empty root. There is no 
lost+found folder either.
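
In case it helps with diagnosis: my understanding is that the root directory's 
entries live as omap keys on object 1.00000000 in the metadata pool, and that 
lost+found would be inode 0x4. The object names here are my assumption from 
what I've read about the on-disk layout, but I can run these checks and paste 
the output if that would be useful:

  # dentries of the root directory (dirfrag object of inode 0x1)
  rados -p cephfs.cephfs.meta listomapkeys 1.00000000

  # same for lost+found (inode 0x4), assuming it had been created
  rados -p cephfs.cephfs.meta listomapkeys 4.00000000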

A big problem is that when I try to link the orphaned inodes into lost+found, 
the command finishes instantly with no output, as if it believes the filesystem 
is already clean.

I run:

  cephfs-data-scan scan_links --filesystem cephfs

and it returns exit code 0 with no output in under a second.
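
I'm happy to re-run it with more logging if that would help. I'm assuming the 
tool honours the usual Ceph debug switches (that part is a guess on my side), 
so something like:

  # re-run with verbose logging sent to a file (assuming --debug-mds applies here)
  cephfs-data-scan scan_links --filesystem cephfs --debug-mds=10 2> scan_links.log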

So my final question: given that scan_inodes has populated the metadata pool 
with objects (100.xxxx), but scan_links seems to ignore them, how do I force a 
re-scan, or just do SOMETHING, so that scan_links actually links these files 
into lost+found?
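
For what it's worth, the commands listed above are everything I have run so 
far; I have not touched the journal or the session table, and I have not done 
a fs reset. If those post-scan steps from the disaster-recovery docs are a 
prerequisite for the MDS showing anything after a rebuild, that could well be 
my problem. As I understand them (filesystem name filled in; please correct me 
if any of this is wrong or dangerous in my situation):

  # truncate the (lost) MDS journal for rank 0
  # (newer releases may also want --yes-i-really-really-mean-it)
  cephfs-journal-tool --rank=cephfs:0 journal reset

  # wipe the session table so the MDS doesn't trip over stale client sessions
  cephfs-table-tool all reset session

  # mark the filesystem recoverable so rank 0 can come back up
  ceph fs reset cephfs --yes-i-really-mean-it

Please tell me if running these (and in what order relative to scan_links) is 
what I'm missing.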


All The Best,
Enzo

