On 09/02/2016 05:42 AM, Keiviw wrote:
Even if I set the attribute-timeout and entry-timeout to 3600s (1h), the client on nodeB didn't cache any metadata, as the memory usage didn't change at all. So I am confused about why the client does not cache dentries and inodes.

If you only want to test fuse's caching, I would try mounting the volume on a separate machine (not on the brick node itself), disable all gluster performance xlators, run `find . | xargs stat` on the mount twice in succession, and compare what free(1) reports after the 1st and 2nd run. You could repeat this experiment with various attr/entry timeout values. Make sure your volume has a lot of small files.
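Something along these lines (just a sketch; the volume name testvol and the node names are placeholders, and the exact set of performance xlators may differ across gluster versions):

    # On a server node: turn the gluster performance xlators off
    gluster volume set testvol performance.quick-read off
    gluster volume set testvol performance.io-cache off
    gluster volume set testvol performance.read-ahead off
    gluster volume set testvol performance.write-behind off
    gluster volume set testvol performance.stat-prefetch off

    # On the client (nodeB): mount with long fuse timeouts
    mount -t glusterfs -o attribute-timeout=3600,entry-timeout=3600 \
        nodeA:/testvol /mnt/glusterfs

    # Drop the kernel caches for a clean baseline
    echo 3 > /proc/sys/vm/drop_caches
    free -m                                  # baseline

    # 1st crawl populates the fuse dentry/inode caches
    cd /mnt/glusterfs && find . | xargs stat > /dev/null
    free -m                                  # cached/used should grow

    # 2nd crawl should be served from the cache while the timeouts last
    find . | xargs stat > /dev/null
    free -m

If caching is working, the second crawl should also finish noticeably faster than the first.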
-Ravi


On 2016-09-01 16:37:00, "Ravishankar N" <[email protected]> wrote:

    On 09/01/2016 01:04 PM, Keiviw wrote:
    Hi,
        I have found that the GlusterFS client (mounted via FUSE)
    doesn't cache metadata such as dentries and inodes. I have
    installed GlusterFS 3.6.0 on nodeA and nodeB; brick1 and
    brick2 are on nodeA, and on nodeB I mounted the volume at
    /mnt/glusterfs via FUSE. In my test I executed 'ls
    /mnt/glusterfs' on nodeB and found that memory usage did not
    change at all. Here are my questions:
        1. The fuse kernel module provides attributes that control
    the dentry and inode timeouts; in other words, the fuse kernel
    supports metadata caching. But in my test, dentries and inodes
    were not cached. Why?
        2. Are there any mount options for a locally mounted
    GlusterFS volume that enable the metadata cache in the fuse
    kernel?


    You can tweak the attribute-timeout and entry-timeout seconds
    while mounting the volume; the default is 1 second for both.
    `man mount.glusterfs` lists the various mount options.
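    For instance (a sketch; the server name nodeA, the volume name
    testvol, and the mount point are placeholders for your setup):

        mount -t glusterfs \
            -o attribute-timeout=600,entry-timeout=600 \
            nodeA:/testvol /mnt/glusterfs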
    -Ravi








_______________________________________________
Gluster-devel mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-devel
