Hi Zheng,

I did some more tests with cephfs-table-tool. I realized that disaster recovery may require resetting the inode table completely, in addition to resetting sessions, using something like

cephfs-table-tool all reset inode

Would that be close to what you suggested? Is it safe to reset the complete inode table, or will that wipe my file system?

Btw. cephfs-table-tool all show inode gives me ~400k inodes, some of them in section 'free' and some in section 'projected_free'.
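For anyone following along: the show output is JSON, so the free/projected_free sections can be counted with a few lines of Python. This is only a sketch; the exact JSON layout (rank keys, "inotable", range fields) is an assumption based on the sections mentioned above, not a documented schema.

```python
import json

# Hypothetical sample of what `cephfs-table-tool all show inode` prints,
# keyed by MDS rank. The layout here is an assumption for illustration.
sample = json.dumps({
    "0": {
        "data": {
            "version": 42,
            "inotable": {
                "free": [{"start": 1099511627776, "len": 0}],
                "projected_free": [{"start": 1099511627776, "len": 0}],
            },
        },
        "result": 0,
    }
})

def count_ranges(show_output):
    """Count the free / projected_free ranges per MDS rank."""
    counts = {}
    for rank, entry in json.loads(show_output).items():
        table = entry["data"]["inotable"]
        counts[rank] = {
            "free": len(table["free"]),
            "projected_free": len(table["projected_free"]),
        }
    return counts

print(count_ranges(sample))
```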

Thanks,
Tobi

On 12/11/2017 04:28 PM, Yan, Zheng wrote:
On Mon, Dec 11, 2017 at 11:17 PM, Tobias Prousa <tobias.pro...@caetec.de> wrote:
These are essentially the first commands I did execute, in this exact order.
Additionally I did a:

ceph fs reset cephfs --yes-i-really-mean-it

How many active MDS daemons were there before the upgrade?

Any hint on how to find the max inode number? And do I understand correctly that I should
remove every free-marked inode number except the biggest one,
which has to stay?
If you are not sure, you can just try removing 10000 inode numbers
from the inode table.

How to remove those inodes using cephfs-table-tool?

using cephfs-table-tool all take_inos <max ino>
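One way to pick the <max ino> argument is to look at the free ranges in the show output: everything below the start of the lowest free range should be the allocated region. A small sketch of that calculation, assuming free ranges are dumped as {"start": ..., "len": ...} objects (an assumption about the interval dump format, not a documented rule):

```python
def max_used_ino(free_ranges):
    """Estimate the highest in-use ino as one below the start of the
    lowest free range. Assumes free_ranges is a list of
    {"start": int, "len": int} dicts, as in the (assumed) show output."""
    if not free_ranges:
        return None
    return min(r["start"] for r in free_ranges) - 1

# Hypothetical free section with a single range starting at 0x10000000000.
ranges = [{"start": 0x10000000000, "len": 0}]
print(hex(max_used_ino(ranges)))
```

Whatever value comes out, it is probably worth padding it upward (as Zheng suggests, removing an extra 10000 inode numbers costs little) before passing it to take_inos.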

--
-----------------------------------------------------------
Dipl.-Inf. (FH) Tobias Prousa
Leiter Entwicklung Datenlogger

CAETEC GmbH
Industriestr. 1
D-82140 Olching
www.caetec.de

Gesellschaft mit beschränkter Haftung
Sitz der Gesellschaft: Olching
Handelsregister: Amtsgericht München, HRB 183929
Geschäftsführung: Stephan Bacher, Andreas Wocke

Tel.: +49 (0)8142 / 50 13 60
Fax.: +49 (0)8142 / 50 13 69

eMail: tobias.pro...@caetec.de
Web:   http://www.caetec.de
------------------------------------------------------------

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
