On 08/05/2014 09:16 PM, Ryan Clough wrote:
I spoke too soon. Running "ls -lR" on one of our largest directory structures overnight has caused glusterfs to use a lot of memory, and the glusterfs process appears to still be gradually consuming more. I have tried to release the memory forcefully by issuing this command:
sync; echo 3 > /proc/sys/vm/drop_caches
But glusterfs holds on to its memory. The high memory usage shows up on the client side as well as on the server side. Right now both of my brick servers are using about 7GB of RAM for the glusterfs process, and the client that is running the "ls -lR" is using about 8GB of RAM. Below are some basic specifications of my hardware. Both server and client are running version 3.5.2. I have attached a statedump of the client glusterfs.
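For anyone who wants to watch the growth themselves, a minimal loop like this works (just a sketch; the only assumption is that the processes are named glusterfs):

while true; do
    date
    # resident and virtual size, in KB, of every glusterfs process
    ps -C glusterfs -o pid,rss,vsz,args
    sleep 60
done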
Could you please send out the statedumps? That should help us figure out what the problem is.
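For reference, the dumps can be generated the way Poornima describes later in this thread (the default dump directory is used unless server.statedump-path says otherwise):

gluster --print-statedumpdir  # create this directory if it doesn't exist
kill -USR1 <pid-of-glusterfs-process>  # generates the state dump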

Pranith

Brick server hardware:
Dual 6-core Intel Xeon CPU E5-2620 0 @ 2.00GHz (HT is on)
32GB SDRAM
2 - 500GB SATA drives in RAID1 for OS
12 - 3TB SATA drives in RAID6 with LVM and XFS for data


Client hardware:
Dual 8-core AMD Opteron Processor 6128
32GB SDRAM

Ryan Clough
Information Systems
Decision Sciences International Corporation <http://www.decisionsciencescorp.com/>


On Mon, Aug 4, 2014 at 12:07 PM, Ryan Clough <[email protected]> wrote:

    Hi,
    I too was experiencing this issue on my bricks. I am using version
    3.5.2 and, after setting io-cache and quick-read to "off" as
    Poornima suggested, I am no longer seeing glusterfs gobbling
    memory. I first noticed it when I enabled quotas: during the
    quota-crawl the glusterfs process would be OOM-killed by the
    kernel. Before the change, my bricks would consume all available
    memory until swap was exhausted and the kernel OOM-killed the
    glusterfs process. A rebalance is running right now and glusterfs
    is behaving. Here is some output of my current config. Let me know
    if I can provide anything else to help.

    [root@tgluster01 ~]# gluster volume status all detail
    Status of volume: tgluster_volume
    ------------------------------------------------------------------------------
    Brick                : Brick tgluster01:/gluster_data
    Port                 : 49153
    Online               : Y
    Pid                  : 2407
    File System          : xfs
    Device               : /dev/mapper/vg_data-lv_data
    Mount Options        : rw,noatime,nodiratime,logbufs=8,logbsize=256k,inode64,nobarrier
    Inode Size           : 512
    Disk Space Free      : 3.5TB
    Total Disk Space     : 27.3TB
    Inode Count          : 2929685696
    Free Inodes          : 2863589912
    ------------------------------------------------------------------------------
    Brick                : Brick tgluster02:/gluster_data
    Port                 : 49152
    Online               : Y
    Pid                  : 2402
    File System          : xfs
    Device               : /dev/mapper/vg_data-lv_data
    Mount Options        : rw,noatime,nodiratime,logbufs=8,logbsize=256k,inode64,nobarrier
    Inode Size           : 512
    Disk Space Free      : 5.4TB
    Total Disk Space     : 27.3TB
    Inode Count          : 2929685696
    Free Inodes          : 2864874648

    [root@tgluster01 ~]# gluster volume status
    Status of volume: tgluster_volume
    Gluster process                        Port    Online Pid
    ------------------------------------------------------------------------------
    Brick tgluster01:/gluster_data                49153 Y    2407
    Brick tgluster02:/gluster_data                49152 Y    2402
    Quota Daemon on localhost                N/A    Y    2415
    Quota Daemon on tgluster02                N/A    Y    2565

    Task Status of Volume tgluster_volume
    ------------------------------------------------------------------------------
    Task                 : Rebalance
    ID                   : 31fd1edb-dd6d-4c25-b4b5-1ce7bc0670f3
    Status               : in progress

    [root@tgluster01 ~]# gluster volume info
    Volume Name: tgluster_volume
    Type: Distribute
    Volume ID: 796774f8-f9ec-476c-9d08-0f5f937d5ad9
    Status: Started
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: tgluster01:/gluster_data
    Brick2: tgluster02:/gluster_data
    Options Reconfigured:
    features.quota-deem-statfs: on
    performance.client-io-threads: on
    performance.md-cache-timeout: 1
    performance.cache-max-file-size: 10MB
    network.ping-timeout: 60
    performance.write-behind-window-size: 4MB
    performance.read-ahead: on
    performance.cache-refresh-timeout: 1
    performance.cache-size: 10GB
    performance.quick-read: off
    nfs.disable: on
    features.quota: on
    performance.io-thread-count: 24
    cluster.eager-lock: on
    server.statedump-path: /var/log/glusterfs/
    performance.flush-behind: on
    performance.write-behind: on
    performance.stat-prefetch: on
    performance.io-cache: off

    [root@tgluster01 ~]# gluster volume status all mem
    Memory status for volume : tgluster_volume
    ----------------------------------------------
    Brick : tgluster01:/gluster_data
    Mallinfo
    --------
    Arena    : 25788416
    Ordblks  : 7222
    Smblks   : 1
    Hblks    : 12
    Hblkhd   : 16060416
    Usmblks  : 0
    Fsmblks  : 80
    Uordblks : 25037744
    Fordblks : 750672
    Keepcost : 132816

    Mempool Stats
    -------------
    Name                                         HotCount ColdCount PaddedSizeof AllocCount MaxAlloc   Misses Max-StdAlloc
    ----                                         -------- --------- ------------ ---------- -------- -------- ------------
    tgluster_volume-server:fd_t                        11      1013          108     194246       22        0            0
    tgluster_volume-server:dentry_t                 16384         0           84    1280505    16384   481095        32968
    tgluster_volume-server:inode_t                  16383         1          156   13974240    16384  7625153        39688
    tgluster_volume-changelog:changelog_local_t         0        64          108          0        0        0            0
    tgluster_volume-locks:pl_local_t                    0        32          148    3922857        4        0            0
    tgluster_volume-marker:marker_local_t               0       128          332    6163938        8        0            0
    tgluster_volume-quota:struct saved_frame            0        16          124      65000        6        0            0
    tgluster_volume-quota:struct rpc_req                0        16          588      65000        6        0            0
    tgluster_volume-quota:quota_local_t                 0        64          404    4476051        8        0            0
    tgluster_volume-server:rpcsvc_request_t             0       512         2828    6694494        8        0            0
    glusterfs:struct saved_frame                        0         8          124          2        2        0            0
    glusterfs:struct rpc_req                            0         8          588          2        2        0            0
    glusterfs:rpcsvc_request_t                          1         7         2828          2        1        0            0
    glusterfs:data_t                                  164     16219           52   60680465     2012        0            0
    glusterfs:data_pair_t                             159     16224           68   34718980     1348        0            0
    glusterfs:dict_t                                   15      4081          140   24689263      714        0            0
    glusterfs:call_stub_t                               0      1024         3756    8263013        9        0            0
    glusterfs:call_stack_t                              1      1023         1836    6675669        8        0            0
    glusterfs:call_frame_t                              0      4096          172   55532603      251        0            0
    ----------------------------------------------
    Brick : tgluster02:/gluster_data
    Mallinfo
    --------
    Arena    : 18714624
    Ordblks  : 4211
    Smblks   : 1
    Hblks    : 12
    Hblkhd   : 16060416
    Usmblks  : 0
    Fsmblks  : 80
    Uordblks : 18250608
    Fordblks : 464016
    Keepcost : 131360

    Mempool Stats
    -------------
    Name                                         HotCount ColdCount PaddedSizeof AllocCount MaxAlloc   Misses Max-StdAlloc
    ----                                         -------- --------- ------------ ---------- -------- -------- ------------
    tgluster_volume-server:fd_t                        11      1013          108     155373       22        0            0
    tgluster_volume-server:dentry_t                 16383         1           84    1297732    16384   396012        21124
    tgluster_volume-server:inode_t                  16384         0          156   13896002    16384  7434842        24494
    tgluster_volume-changelog:changelog_local_t         0        64          108          0        0        0            0
    tgluster_volume-locks:pl_local_t                    2        30          148    5578625       17        0            0
    tgluster_volume-marker:marker_local_t               3       125          332    6834019       68        0            0
    tgluster_volume-quota:struct saved_frame            0        16          124      64922       10        0            0
    tgluster_volume-quota:struct rpc_req                0        16          588      65000       10        0            0
    tgluster_volume-quota:quota_local_t                 3        61          404    4216852       64        0            0
    tgluster_volume-server:rpcsvc_request_t             3       509         2828    6406870       64        0            0
    glusterfs:struct saved_frame                        0         8          124          2        2        0            0
    glusterfs:struct rpc_req                            0         8          588          2        2        0            0
    glusterfs:rpcsvc_request_t                          1         7         2828          2        1        0            0
    glusterfs:data_t                                  185     16198           52   80402618     1427        0            0
    glusterfs:data_pair_t                             177     16206           68   40014499      737        0            0
    glusterfs:dict_t                                   18      4078          140   35345779      729        0            0
    glusterfs:call_stub_t                               3      1021         3756   21374090       68        0            0
    glusterfs:call_stack_t                              4      1020         1836    6824400       68        0            0
    glusterfs:call_frame_t                             20      4076          172   97255627      388        0            0
    ----------------------------------------------

    Ryan Clough
    Information Systems
    Decision Sciences International Corporation <http://www.decisionsciencescorp.com/>


    On Sun, Aug 3, 2014 at 11:36 PM, Poornima Gurusiddaiah
    <[email protected]> wrote:

        Hi,

        From the statedump it is evident that the iobufs are leaking.
        Also, the hot count of the
        pool-name=w-vol-io-cache:rbthash_entry_t pool is 10053, which
        implies the io-cache xlator could be the cause of the leak.
        From the logs, it looks like the quick-read performance xlator
        is calling iobuf_free with NULL pointers, which implies
        quick-read could be leaking iobufs as well.

        As a temporary solution, could you disable io-cache and/or
        quick-read and see if the leak still persists?

        $ gluster volume set <volname> performance.io-cache off
        $ gluster volume set <volname> performance.quick-read off

        This may reduce performance to a certain extent.
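        To verify that the options took effect, one quick check (with
        <volname> standing in for your volume name):

        $ gluster volume info <volname> | grep -E 'quick-read|io-cache'

        Both options should then show "off".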

        For further debugging, could you provide the core dump or
        steps to reproduce, if available?

        Regards,
        Poornima

        ----- Original Message -----
        From: "Tamas Papp" <[email protected]
        <mailto:[email protected]>>
        To: "Poornima Gurusiddaiah" <[email protected]
        <mailto:[email protected]>>
        Cc: [email protected] <mailto:[email protected]>
        Sent: Sunday, August 3, 2014 10:33:17 PM
        Subject: Re: [Gluster-users] high memory usage of mount


        On 07/31/2014 09:17 AM, Tamas Papp wrote:
        >
        > On 07/31/2014 09:02 AM, Poornima Gurusiddaiah wrote:
        >> Hi,
        >
        > hi,
        >
        >> Can you provide the statedump of the process? It can be
        >> obtained as follows:
        >> $ gluster --print-statedumpdir  # create this directory if
        >> it doesn't exist.
        >> $ kill -USR1 <pid-of-glusterfs-process>  # generates state dump.
        >
        > http://rtfm.co.hu/glusterdump.2464.dump.1406790562.zip
        >
        >> Also, exporting Gluster via the Samba-VFS-plugin method is
        >> preferred over a FUSE mount export. For more details refer to:
        >> http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
        >>
        >
        > When I tried it about half a year ago it didn't work
        > properly. Clients lost mounts, there were access errors, etc.
        >
        > But I will give it a try, though it's not included in
        > Ubuntu's samba, AFAIK.
        >
        >
        > Thank you,
        > tamas
        >
        > p.s. I forgot to mention: I can see this issue on only one
        > node. The rest of the nodes are fine.

        hi Poornima,

        Do you have any idea what's going on here?

        Thanks,
        tamas
        _______________________________________________
        Gluster-users mailing list
        [email protected]
        http://supercolony.gluster.org/mailman/listinfo/gluster-users




_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
