Thanks, John.

Very likely. Note that mds_mem::ino + mds_cache::strays_created ~=
mds::inodes; also, this MDS was the standby one and became active
some days ago due to a failover.

mds": {
        "inodes": 1291393,
}
"mds_cache": {
        "num_strays": 3559,
        "strays_created": 706120,
        "strays_purged": 702561
}
"mds_mem": {
        "ino": 584974,
}
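
A quick sanity check of that arithmetic with the numbers above (plain
Python; the variable names map to the counters in the dumps):

    # Counters copied from the perf dumps above
    ino            = 584974    # mds_mem::ino
    strays_created = 706120    # mds_cache::strays_created
    inodes         = 1291393   # mds::inodes

    print(ino + strays_created)  # 1291094
    print(inodes - 1291094)      # 299, so the two agree to within ~300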

I do have a cache dump from the MDS via the admin socket; is there
anything I can check in it to be 100% sure?
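
In case it helps, here is a rough sketch of counting null dentries in
such a dump. This assumes the dump prints one "[dentry ...]" line per
dentry and marks null linkage with the token "NULL"; the exact format
may differ between versions, so treat the matching below as an
assumption to verify against your own output:

    # count_null_dentries.py -- tally null dentries in an MDS cache dump.
    # Assumes "[dentry ...]" lines and a " NULL " token for null linkage;
    # verify both against the dump format of your Ceph version.
    import sys

    total = null = 0
    with open(sys.argv[1]) as f:
        for line in f:
            if "[dentry " in line:
                total += 1
                if " NULL " in line:
                    null += 1
    print(f"{null} null dentries out of {total}")

If the null count comes out close to dn - ino (1291393 - 584974 =
706419), that would line up with John's null-dentry theory quoted
below.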


Xiaoxi

2017-03-07 22:20 GMT+08:00 John Spray <jsp...@redhat.com>:
> On Tue, Mar 7, 2017 at 9:17 AM, Xiaoxi Chen <superdebu...@gmail.com> wrote:
>> Hi,
>>
>>       From the admin socket of the MDS, I got the following data on
>> our production CephFS environment: roughly 585K inodes and almost
>> the same number of caps, but more than 2x as many dentries as inodes.
>>
>>       I am pretty sure we don't use hard links intensively (if at
>> all), and the inode count matches "rados ls --pool $my_data_pool".
>>
>>       Thanks for any explanations; much appreciated.
>>
>>
>> "mds_mem": {
>>         "ino": 584974,
>>         "ino+": 1290944,
>>         "ino-": 705970,
>>         "dir": 25750,
>>         "dir+": 25750,
>>         "dir-": 0,
>>         "dn": 1291393,
>>         "dn+": 1997517,
>>         "dn-": 706124,
>>         "cap": 584560,
>>         "cap+": 2657008,
>>         "cap-": 2072448,
>>         "rss": 24599976,
>>         "heap": 166284,
>>         "malloc": 18446744073708721289,
>>         "buf": 0
>>     },
>>
>
> One possibility is that you have many "null" dentries, which are
> created when we do a lookup and a file is not found -- we create a
> special dentry to remember that the filename does not exist, so that
> we can return ENOENT quickly next time.  On pre-Kraken versions, null
> dentries can also be left behind after file deletions when the
> deletion is replayed on a standby-replay MDS
> (http://tracker.ceph.com/issues/16919).
>
> John
>
>
>
>>
>> Xiaoxi