Re: [ceph-users] Production 12.2.2 CephFS Cluster still broken, new Details

2017-12-12 Thread Tobias Prousa
Thank you very much! I feel optimistic that I now have what I need to get that thing working again. I'll report back... Best regards, Tobi On 12/12/2017 02:08 PM, Yan, Zheng wrote: On Tue, Dec 12, 2017 at 8:29 PM, Tobias Prousa wrote: Hi Zheng, the more you tell me, the more what I see begins to make sense to me…

Re: [ceph-users] Production 12.2.2 CephFS Cluster still broken, new Details

2017-12-12 Thread Yan, Zheng
On Tue, Dec 12, 2017 at 8:29 PM, Tobias Prousa wrote: > Hi Zheng, > > the more you tell me, the more what I see begins to make sense to me. Thank > you very much. > > Could you please be a little more verbose about how to use rados rmomapkey? > What to use for the object name and what to use for the key? Here is what my > dir_frag looks like…
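
For reference, rados rmomapkey removes exactly one omap key from an object, which is what makes it useful here: it deletes the single damaged dentry without touching the rest of the directory. A minimal sketch of the removal step being discussed, assuming the metadata pool is named cephfs_metadata and using hypothetical object and key names (the real values come from the dir_frag damage entry; dentry keys end in _head):

    # Remove one damaged dentry key from a dirfrag object
    # (object name and key below are placeholders, not real values)
    rados -p cephfs_metadata rmomapkey 10000000000.01000000 somefile_head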

Re: [ceph-users] Production 12.2.2 CephFS Cluster still broken, new Details

2017-12-12 Thread Tobias Prousa
Hi Zheng, the more you tell me, the more what I see begins to make sense to me. Thank you very much. Could you please be a little more verbose about how to use rados rmomapkey? What to use for the object name and what to use for the key? Here is what my dir_frag looks like:     {     "damage_type": "dir_frag"…
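
Before removing anything, it can help to inspect what the dirfrag object actually holds. A minimal sketch, assuming the metadata pool is named cephfs_metadata and a hypothetical dirfrag object named after the inode number (hex) and fragment id from the damage entry:

    # Dirfrag objects live in the metadata pool, named <ino-hex>.<frag-hex>
    rados -p cephfs_metadata listomapkeys 10000000000.01000000
    # Dump the keys together with their (binary) dentry values
    rados -p cephfs_metadata listomapvals 10000000000.01000000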

Re: [ceph-users] Production 12.2.2 CephFS Cluster still broken, new Details

2017-12-12 Thread Yan, Zheng
On Tue, Dec 12, 2017 at 4:22 PM, Tobias Prousa wrote: > Hi there, > > regarding my ML post from yesterday (Upgrade from 12.2.1 to 12.2.2 broke my > CephFS) I was able to get a little further with the suggested > "cephfs-table-tool take_inos <max_ino>". This made the whole issue with > loads of "falsely free-marked inodes"…

[ceph-users] Production 12.2.2 CephFS Cluster still broken, new Details

2017-12-12 Thread Tobias Prousa
Hi there, regarding my ML post from yesterday (Upgrade from 12.2.1 to 12.2.2 broke my CephFS) I was able to get a little further with the suggested "cephfs-table-tool take_inos <max_ino>". This made the whole issue with loads of "falsely free-marked inodes" go away. I then restarted the MDS, kept all clients…
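
For context, cephfs-table-tool operates directly on the MDS tables and is meant to be run while the MDS is stopped. A minimal sketch of the take_inos step described above, with <max_ino> left as a placeholder for a value safely above the highest inode number in use:

    # Inspect the inode table first (run with the MDS shut down)
    cephfs-table-tool all show inode
    # Mark all inode numbers up to <max_ino> as in use, so the MDS
    # stops handing out "falsely free" inode numbers
    cephfs-table-tool all take_inos <max_ino>

Overshooting <max_ino> wastes a range of inode numbers, which is harmless compared to re-allocating numbers that existing files still hold.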