Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-12 Thread huting3






Hi experts:

I uploaded the statedump file to Google Drive; could you please help me find the cause of the gluster fuse process consuming huge memory? Thank you!

https://drive.google.com/file/d/1ZlttTzt4E56Qtk9j7b4I9GkZC2W3mJgp/view?usp=sharing
huting3
huti...@corp.netease.com
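(Aside, not from the thread: a statedump like the one linked above is typically produced by signalling the fuse client process; a minimal sketch, assuming the default statedump directory and that the glusterfs mount process PID is known:)

# Find the glusterfs client (fuse mount) process and ask it for a statedump.
# By default the dump lands under /var/run/gluster/ as glusterdump.<pid>.dump.<timestamp>.
pidof glusterfs
kill -USR1 <pid-of-the-fuse-mount-process>
ls /var/run/gluster/glusterdump.*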


On 08/9/2018 13:46, huting3 wrote:

I uploaded the dump file as the attachment.
huting3
huti...@corp.netease.com
On 08/9/2018 13:30, huting3 wrote:

em, the data set is complicated. There are many big files as well as small files. There is about 50 TB of data on the gluster server, so I do not know exactly how many files are in the dataset. Can the inode cache consume this much memory? How can I limit the inode cache?

ps:
$ grep itable glusterdump.109182.dump.1533730324 | grep lru | wc -l
191728

When I dumped the process info, the fuse process had consumed about 30 GB of memory.
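(A hedged aside, not part of the original reply: the client-side inode LRU is normally bounded by the volume option network.inode-lru-limit, which already appears in this volume's reconfigured options; a minimal sketch with an illustrative value, assuming the volume name gv0 from this thread:)

# Cap the number of unused inodes the client keeps cached (illustrative value).
gluster volume set gv0 network.inode-lru-limit 65536
# Confirm the setting was applied.
gluster volume get gv0 network.inode-lru-limit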
huting3
huti...@corp.netease.com
On 08/9/2018 13:13, Raghavendra Gowdappa wrote:

On Thu, Aug 9, 2018 at 10:36 AM, huting3 wrote:

> grep count will output nothing, so I grep size; the results are:
>
> $ grep itable glusterdump.109182.dump.1533730324 | grep lru | grep size
> xlator.mount.fuse.itable.lru_size=191726

Kernel is holding too many inodes in its cache. What's the data set like? Do you have too many directories? How many files do you have?

> $ grep itable glusterdump.109182.dump.1533730324 | grep active | grep size
> xlator.mount.fuse.itable.active_size=17
huting3
huti...@corp.netease.com


On 08/9/2018 12:36,Raghavendra Gowdappa wrote: 


Can you get the output of following cmds?

# grep itable  | grep lru | grep count
# grep itable  | grep active | grep count








Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread huting3






I have tried this, but the size of the file is huge and the mail bounced back. I uploaded the file to Google Drive; the link is:

https://drive.google.com/file/d/1ZlttTzt4E56Qtk9j7b4I9GkZC2W3mJgp/view?usp=sharing
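(A small practical note, not from the thread: statedumps are plain text and compress very well, so gzipping before mailing or uploading usually avoids size limits; the filename below is taken from this thread:)

# Compress the statedump before sharing it.
gzip -9 glusterdump.109182.dump.1533730324
ls -lh glusterdump.109182.dump.1533730324.gz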
huting3
huti...@corp.netease.com
On 08/9/2018 13:48, Nithya Balachandran wrote:

Is it possible for you to send us the statedump file? It will be easier than going back and forth over emails.

Thanks,
Nithya








Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread Nithya Balachandran
Is it possible for you to send us the statedump file? It will be easier
than going back and forth over emails.

Thanks,
Nithya

On 9 August 2018 at 09:25, huting3  wrote:

> Yes, I got the dump file and found there are many huge num_allocs just
> like following:
>
> I found the memusage of 4 variable types is extremely huge.
>
>  [protocol/client.gv0-client-0 - usage-type gf_common_mt_char memusage]
> size=47202352
> num_allocs=2030212
> max_size=47203074
> max_num_allocs=2030235
> total_allocs=26892201
>
> [protocol/client.gv0-client-0 - usage-type gf_common_mt_memdup memusage]
> size=24362448
> num_allocs=2030204
> max_size=24367560
> max_num_allocs=2030226
> total_allocs=17830860
>
> [mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
> size=2497947552
> num_allocs=4578229
> max_size=2459135680
> max_num_allocs=7123206
> total_allocs=41635232
>
> [mount/fuse.fuse - usage-type gf_fuse_mt_iov_base memusage]
> size=4038730976
> num_allocs=1
> max_size=4294962264
> max_num_allocs=37
> total_allocs=150049981
> 
>
>
>
> huting3
> huti...@corp.netease.com
>
> 
>
> On 08/9/2018 11:36,Raghavendra Gowdappa
>  wrote:
>
>
>
> On Thu, Aug 9, 2018 at 8:55 AM, huting3  wrote:
>
>> Hi expert:
>>
>> I meet a problem when I use glusterfs. The problem is that the fuse
>> client consumes huge memory when writing a lot of files (more than a million)
>> to the gluster volume, eventually getting killed by the OS OOM killer. The
>> memory the fuse process consumes can reach up to 100 GB! I wonder if there are
>> memory leaks in the gluster fuse process, or some other causes.
>>
>
> Can you get statedump of fuse process consuming huge memory?
>
>
>> My gluster version is 3.13.2, the gluster volume info is listed as
>> following:
>>
>> Volume Name: gv0
>> Type: Distributed-Replicate
>> Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 19 x 3 = 57
>> Transport-type: tcp
>> Bricks:
>> Brick1: dl20.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick2: dl21.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick3: dl22.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick4: dl20.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick5: dl21.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick6: dl22.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick7: dl20.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick8: dl21.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick9: dl22.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick10: dl23.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick11: dl24.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick12: dl25.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick13: dl26.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick14: dl27.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick15: dl28.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick16: dl29.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick17: dl30.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick18: dl31.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick19: dl32.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick20: dl33.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick21: dl34.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick22: dl23.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick23: dl24.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick24: dl25.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick25: dl26.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick26: dl27.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick27: dl28.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick28: dl29.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick29: dl30.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick30: dl31.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick31: dl32.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick32: dl33.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick33: dl34.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick34: dl23.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick35: dl24.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick36: dl25.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick37: dl26.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick38: dl27.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick39: dl28.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick40: dl29.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick41: dl30.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick42: dl31.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick43: dl32.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick44: 

Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread huting3






em, the data set is complicated. There are many big files as well as small files. There is about 50 TB of data on the gluster server, so I do not know exactly how many files are in the dataset. Can the inode cache consume this much memory? How can I limit the inode cache?

ps:
$ grep itable glusterdump.109182.dump.1533730324 | grep lru | wc -l
191728

When I dumped the process info, the fuse process had consumed about 30 GB of memory.
huting3
huti...@corp.netease.com
On 08/9/2018 13:13, Raghavendra Gowdappa wrote:

On Thu, Aug 9, 2018 at 10:36 AM, huting3 wrote:

> grep count will output nothing, so I grep size; the results are:
>
> $ grep itable glusterdump.109182.dump.1533730324 | grep lru | grep size
> xlator.mount.fuse.itable.lru_size=191726

Kernel is holding too many inodes in its cache. What's the data set like? Do you have too many directories? How many files do you have?

> $ grep itable glusterdump.109182.dump.1533730324 | grep active | grep size
> xlator.mount.fuse.itable.active_size=17

Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread Raghavendra Gowdappa
On Thu, Aug 9, 2018 at 10:43 AM, Raghavendra Gowdappa 
wrote:

>
>
> On Thu, Aug 9, 2018 at 10:36 AM, huting3  wrote:
>
>> grep count will output nothing, so I grep size; the results are:
>>
>> $ grep itable glusterdump.109182.dump.1533730324 | grep lru | grep size
>> xlator.mount.fuse.itable.lru_size=191726
>>
>
> Kernel is holding too many inodes in its cache. What's the data set like?
> Do you have too many directories? How many files do you have?
>

Just to be sure, can you give the output of following cmd too:

# grep itable  | grep lru | wc -l


>
>> $ grep itable glusterdump.109182.dump.1533730324 | grep active | grep
>> size
>> xlator.mount.fuse.itable.active_size=17
>>
>>
>> huting3
>> huti...@corp.netease.com
>>
>> 
>>
>> On 08/9/2018 12:36,Raghavendra Gowdappa
>>  wrote:
>>
>> Can you get the output of following cmds?
>>
>> # grep itable  | grep lru | grep count
>>
>> # grep itable  | grep active | grep count
>>
>> On Thu, Aug 9, 2018 at 9:25 AM, huting3  wrote:
>>
>>> Yes, I got the dump file and found there are many huge num_allocs just
>>> like following:
>>>
>>> I found the memusage of 4 variable types is extremely huge.
>>>
>>>  [protocol/client.gv0-client-0 - usage-type gf_common_mt_char memusage]
>>> size=47202352
>>> num_allocs=2030212
>>> max_size=47203074
>>> max_num_allocs=2030235
>>> total_allocs=26892201
>>>
>>> [protocol/client.gv0-client-0 - usage-type gf_common_mt_memdup memusage]
>>> size=24362448
>>> num_allocs=2030204
>>> max_size=24367560
>>> max_num_allocs=2030226
>>> total_allocs=17830860
>>>
>>> [mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
>>> size=2497947552
>>> num_allocs=4578229
>>> max_size=2459135680
>>> max_num_allocs=7123206
>>> total_allocs=41635232
>>>
>>> [mount/fuse.fuse - usage-type gf_fuse_mt_iov_base memusage]
>>> size=4038730976
>>> num_allocs=1
>>> max_size=4294962264
>>> max_num_allocs=37
>>> total_allocs=150049981
>>> 
>>>
>>>
>>>
>>> huting3
>>> huti...@corp.netease.com
>>>
>>> 
>>>
>>> On 08/9/2018 11:36,Raghavendra Gowdappa
>>>  wrote:
>>>
>>>
>>>
>>> On Thu, Aug 9, 2018 at 8:55 AM, huting3 
>>> wrote:
>>>
 Hi expert:

 I meet a problem when I use glusterfs. The problem is that the fuse
 client consumes huge memory when writing a lot of files (more than a million)
 to the gluster volume, eventually getting killed by the OS OOM killer. The
 memory the fuse process consumes can reach up to 100 GB! I wonder if there are
 memory leaks in the gluster fuse process, or some other causes.

>>>
>>> Can you get statedump of fuse process consuming huge memory?
>>>
>>>
 My gluster version is 3.13.2, the gluster volume info is listed as
 following:

 Volume Name: gv0
 Type: Distributed-Replicate
 Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 19 x 3 = 57
 Transport-type: tcp
 Bricks:
 Brick1: dl20.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick2: dl21.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick3: dl22.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick4: dl20.dg.163.org:/glusterfs_brick/brick2/gv0
 Brick5: dl21.dg.163.org:/glusterfs_brick/brick2/gv0
 Brick6: dl22.dg.163.org:/glusterfs_brick/brick2/gv0
 Brick7: dl20.dg.163.org:/glusterfs_brick/brick3/gv0
 Brick8: dl21.dg.163.org:/glusterfs_brick/brick3/gv0
 Brick9: dl22.dg.163.org:/glusterfs_brick/brick3/gv0
 Brick10: dl23.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick11: dl24.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick12: dl25.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick13: dl26.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick14: dl27.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick15: dl28.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick16: dl29.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick17: dl30.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick18: dl31.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick19: dl32.dg.163.org:/glusterfs_brick/brick1/gv0
 Brick20: 

Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread Raghavendra Gowdappa
On Thu, Aug 9, 2018 at 10:36 AM, huting3  wrote:

> grep count will output nothing, so I grep size; the results are:
>
> $ grep itable glusterdump.109182.dump.1533730324 | grep lru | grep size
> xlator.mount.fuse.itable.lru_size=191726
>

Kernel is holding too many inodes in its cache. What's the data set like?
Do you have too many directories? How many files do you have?
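
(A hedged way to check this hypothesis, not suggested in the thread itself: drop the kernel's dentry/inode caches on the client and see whether lru_size and the fuse process's resident memory shrink in a fresh statedump. The dump filename below is a placeholder following the naming seen in this thread.)

# On the client, as root: flush kernel dentry and inode caches.
sync
echo 2 > /proc/sys/vm/drop_caches
# Take a new statedump afterwards and re-check the fuse inode table size.
grep itable glusterdump.<pid>.dump.<timestamp> | grep lru | grep size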


> $ grep itable glusterdump.109182.dump.1533730324 | grep active | grep size
> xlator.mount.fuse.itable.active_size=17
>
>
> huting3
> huti...@corp.netease.com
>
> 
>
> On 08/9/2018 12:36,Raghavendra Gowdappa
>  wrote:
>
> Can you get the output of following cmds?
>
> # grep itable  | grep lru | grep count
>
> # grep itable  | grep active | grep count
>
> On Thu, Aug 9, 2018 at 9:25 AM, huting3  wrote:
>
>> Yes, I got the dump file and found there are many huge num_allocs just
>> like following:
>>
>> I found the memusage of 4 variable types is extremely huge.
>>
>>  [protocol/client.gv0-client-0 - usage-type gf_common_mt_char memusage]
>> size=47202352
>> num_allocs=2030212
>> max_size=47203074
>> max_num_allocs=2030235
>> total_allocs=26892201
>>
>> [protocol/client.gv0-client-0 - usage-type gf_common_mt_memdup memusage]
>> size=24362448
>> num_allocs=2030204
>> max_size=24367560
>> max_num_allocs=2030226
>> total_allocs=17830860
>>
>> [mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
>> size=2497947552
>> num_allocs=4578229
>> max_size=2459135680
>> max_num_allocs=7123206
>> total_allocs=41635232
>>
>> [mount/fuse.fuse - usage-type gf_fuse_mt_iov_base memusage]
>> size=4038730976
>> num_allocs=1
>> max_size=4294962264
>> max_num_allocs=37
>> total_allocs=150049981
>> 
>>
>>
>>
>> huting3
>> huti...@corp.netease.com
>>
>> 
>>
>> On 08/9/2018 11:36,Raghavendra Gowdappa
>>  wrote:
>>
>>
>>
>> On Thu, Aug 9, 2018 at 8:55 AM, huting3  wrote:
>>
>>> Hi expert:
>>>
>>> I meet a problem when I use glusterfs. The problem is that the fuse
>>> client consumes huge memory when writing a lot of files (more than a million)
>>> to the gluster volume, eventually getting killed by the OS OOM killer. The
>>> memory the fuse process consumes can reach up to 100 GB! I wonder if there are
>>> memory leaks in the gluster fuse process, or some other causes.
>>>
>>
>> Can you get statedump of fuse process consuming huge memory?
>>
>>
>>> My gluster version is 3.13.2, the gluster volume info is listed as
>>> following:
>>>
>>> Volume Name: gv0
>>> Type: Distributed-Replicate
>>> Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 19 x 3 = 57
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: dl20.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick2: dl21.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick3: dl22.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick4: dl20.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick5: dl21.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick6: dl22.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick7: dl20.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick8: dl21.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick9: dl22.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick10: dl23.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick11: dl24.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick12: dl25.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick13: dl26.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick14: dl27.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick15: dl28.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick16: dl29.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick17: dl30.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick18: dl31.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick19: dl32.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick20: dl33.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick21: dl34.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick22: dl23.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick23: dl24.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick24: dl25.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick25: 

Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread huting3






grep count will output nothing, so I grep size; the results are:

$ grep itable glusterdump.109182.dump.1533730324 | grep lru | grep size
xlator.mount.fuse.itable.lru_size=191726

$ grep itable glusterdump.109182.dump.1533730324 | grep active | grep size
xlator.mount.fuse.itable.active_size=17
huting3
huti...@corp.netease.com


On 08/9/2018 12:36,Raghavendra Gowdappa wrote: 


Can you get the output of following cmds?

# grep itable  | grep lru | grep count
# grep itable  | grep active | grep count








Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread Raghavendra Gowdappa
Can you get the output of following cmds?

# grep itable  | grep lru | grep count

# grep itable  | grep active | grep count
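
(For reference, a hedged one-pass variant that matches the *_size counters, which are the keys that actually appear in the dump shared elsewhere in this thread; the filename is taken from the thread:)

# Show lru/active sizes for every inode table in the statedump in one pass.
grep -E 'itable\.(lru_size|active_size)' glusterdump.109182.dump.1533730324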

On Thu, Aug 9, 2018 at 9:25 AM, huting3  wrote:

> Yes, I got the dump file and found there are many huge num_allocs just
> like following:
>
> I found the memusage of 4 variable types is extremely huge.
>
>  [protocol/client.gv0-client-0 - usage-type gf_common_mt_char memusage]
> size=47202352
> num_allocs=2030212
> max_size=47203074
> max_num_allocs=2030235
> total_allocs=26892201
>
> [protocol/client.gv0-client-0 - usage-type gf_common_mt_memdup memusage]
> size=24362448
> num_allocs=2030204
> max_size=24367560
> max_num_allocs=2030226
> total_allocs=17830860
>
> [mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
> size=2497947552
> num_allocs=4578229
> max_size=2459135680
> max_num_allocs=7123206
> total_allocs=41635232
>
> [mount/fuse.fuse - usage-type gf_fuse_mt_iov_base memusage]
> size=4038730976
> num_allocs=1
> max_size=4294962264
> max_num_allocs=37
> total_allocs=150049981
> 
>
>
>
> huting3
> huti...@corp.netease.com
>
> 
>
> On 08/9/2018 11:36,Raghavendra Gowdappa
>  wrote:
>
>
>
> On Thu, Aug 9, 2018 at 8:55 AM, huting3  wrote:
>
>> Hi expert:
>>
>> I meet a problem when I use glusterfs. The problem is that the fuse
>> client consumes huge memory when writing a lot of files (more than a million)
>> to the gluster volume, eventually getting killed by the OS OOM killer. The
>> memory the fuse process consumes can reach up to 100 GB! I wonder if there are
>> memory leaks in the gluster fuse process, or some other causes.
>>
>
> Can you get statedump of fuse process consuming huge memory?
>
>
>> My gluster version is 3.13.2, the gluster volume info is listed as
>> following:
>>
>> Volume Name: gv0
>> Type: Distributed-Replicate
>> Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 19 x 3 = 57
>> Transport-type: tcp
>> Bricks:
>> Brick1: dl20.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick2: dl21.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick3: dl22.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick4: dl20.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick5: dl21.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick6: dl22.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick7: dl20.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick8: dl21.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick9: dl22.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick10: dl23.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick11: dl24.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick12: dl25.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick13: dl26.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick14: dl27.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick15: dl28.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick16: dl29.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick17: dl30.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick18: dl31.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick19: dl32.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick20: dl33.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick21: dl34.dg.163.org:/glusterfs_brick/brick1/gv0
>> Brick22: dl23.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick23: dl24.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick24: dl25.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick25: dl26.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick26: dl27.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick27: dl28.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick28: dl29.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick29: dl30.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick30: dl31.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick31: dl32.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick32: dl33.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick33: dl34.dg.163.org:/glusterfs_brick/brick2/gv0
>> Brick34: dl23.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick35: dl24.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick36: dl25.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick37: dl26.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick38: dl27.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick39: dl28.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick40: dl29.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick41: dl30.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick42: dl31.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick43: dl32.dg.163.org:/glusterfs_brick/brick3/gv0
>> Brick44: 

Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread huting3






Yes, I got the dump file and found there are many huge num_allocs, just like the following. I found the memusage of 4 variable types is extremely huge.

[protocol/client.gv0-client-0 - usage-type gf_common_mt_char memusage]
size=47202352
num_allocs=2030212
max_size=47203074
max_num_allocs=2030235
total_allocs=26892201

[protocol/client.gv0-client-0 - usage-type gf_common_mt_memdup memusage]
size=24362448
num_allocs=2030204
max_size=24367560
max_num_allocs=2030226
total_allocs=17830860

[mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
size=2497947552
num_allocs=4578229
max_size=2459135680
max_num_allocs=7123206
total_allocs=41635232

[mount/fuse.fuse - usage-type gf_fuse_mt_iov_base memusage]
size=4038730976
num_allocs=1
max_size=4294962264
max_num_allocs=37
total_allocs=150049981
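
(A back-of-envelope check, not from the thread: the gf_common_mt_inode_ctx pool above is about 2.3 GiB spread over roughly 4.6 million live allocations, i.e. around 550 bytes per cached inode context, so millions of cached inodes alone plausibly account for a large part of the client's memory. A small sketch of the arithmetic:)

# Average bytes per live gf_common_mt_inode_ctx allocation, from the figures above.
awk 'BEGIN { printf "%.0f bytes per inode ctx\n", 2497947552 / 4578229 }'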
huting3
huti...@corp.netease.com


On 08/9/2018 11:36,Raghavendra Gowdappa wrote: 


On Thu, Aug 9, 2018 at 8:55 AM, huting3 wrote:

> Hi expert:
>
> I meet a problem when I use glusterfs. The problem is that the fuse client
> consumes huge memory when writing a lot of files (more than a million) to the
> gluster volume, eventually getting killed by the OS OOM killer. The memory the
> fuse process consumes can reach up to 100 GB! I wonder if there are memory
> leaks in the gluster fuse process, or some other causes.

Can you get statedump of fuse process consuming huge memory?

Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread Raghavendra Gowdappa
On Thu, Aug 9, 2018 at 8:55 AM, huting3  wrote:

> Hi expert:
>
> I meet a problem when I use glusterfs. The problem is that the fuse client
> consumes huge memory when writing a lot of files (more than a million) to the
> gluster volume, eventually getting killed by the OS OOM killer. The memory the
> fuse process consumes can reach up to 100 GB! I wonder if there are memory
> leaks in the gluster fuse process, or some other causes.
>

Can you get statedump of fuse process consuming huge memory?


> My gluster version is 3.13.2, the gluster volume info is listed as
> following:
>
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 19 x 3 = 57
> Transport-type: tcp
> Bricks:
> Brick1: dl20.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick2: dl21.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick3: dl22.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick4: dl20.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick5: dl21.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick6: dl22.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick7: dl20.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick8: dl21.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick9: dl22.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick10: dl23.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick11: dl24.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick12: dl25.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick13: dl26.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick14: dl27.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick15: dl28.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick16: dl29.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick17: dl30.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick18: dl31.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick19: dl32.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick20: dl33.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick21: dl34.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick22: dl23.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick23: dl24.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick24: dl25.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick25: dl26.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick26: dl27.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick27: dl28.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick28: dl29.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick29: dl30.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick30: dl31.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick31: dl32.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick32: dl33.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick33: dl34.dg.163.org:/glusterfs_brick/brick2/gv0
> Brick34: dl23.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick35: dl24.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick36: dl25.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick37: dl26.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick38: dl27.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick39: dl28.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick40: dl29.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick41: dl30.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick42: dl31.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick43: dl32.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick44: dl33.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick45: dl34.dg.163.org:/glusterfs_brick/brick3/gv0
> Brick46: dl0.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick47: dl1.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick48: dl2.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick49: dl3.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick50: dl5.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick51: dl6.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick52: dl9.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick53: dl10.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick54: dl11.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick55: dl12.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick56: dl13.dg.163.org:/glusterfs_brick/brick1/gv0
> Brick57: dl14.dg.163.org:/glusterfs_brick/brick1/gv0
> Options Reconfigured:
> performance.cache-size: 10GB
> performance.parallel-readdir: on
> performance.readdir-ahead: on
> network.inode-lru-limit: 20
> performance.md-cache-timeout: 600
> performance.cache-invalidation: on
> performance.stat-prefetch: on
> features.cache-invalidation-timeout: 600
> features.cache-invalidation: on
> features.inode-quota: off
> features.quota: off
> cluster.quorum-reads: on
> cluster.quorum-count: 2
> cluster.quorum-type: fixed
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> cluster.server-quorum-ratio: 51%
>
>
> huting3
> huti...@corp.netease.com
>
> 
>
>

[Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread huting3






Hi expert:

I meet a problem when I use glusterfs. The problem is that the fuse client consumes huge memory when writing a lot of files (more than a million) to the gluster volume, eventually getting killed by the OS OOM killer. The memory the fuse process consumes can reach up to 100 GB! I wonder if there are memory leaks in the gluster fuse process, or some other causes.

My gluster version is 3.13.2, the gluster volume info is listed as following:

Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
Status: Started
Snapshot Count: 0
Number of Bricks: 19 x 3 = 57
Transport-type: tcp
Bricks:
Brick1: dl20.dg.163.org:/glusterfs_brick/brick1/gv0
Brick2: dl21.dg.163.org:/glusterfs_brick/brick1/gv0
Brick3: dl22.dg.163.org:/glusterfs_brick/brick1/gv0
Brick4: dl20.dg.163.org:/glusterfs_brick/brick2/gv0
Brick5: dl21.dg.163.org:/glusterfs_brick/brick2/gv0
Brick6: dl22.dg.163.org:/glusterfs_brick/brick2/gv0
Brick7: dl20.dg.163.org:/glusterfs_brick/brick3/gv0
Brick8: dl21.dg.163.org:/glusterfs_brick/brick3/gv0
Brick9: dl22.dg.163.org:/glusterfs_brick/brick3/gv0
Brick10: dl23.dg.163.org:/glusterfs_brick/brick1/gv0
Brick11: dl24.dg.163.org:/glusterfs_brick/brick1/gv0
Brick12: dl25.dg.163.org:/glusterfs_brick/brick1/gv0
Brick13: dl26.dg.163.org:/glusterfs_brick/brick1/gv0
Brick14: dl27.dg.163.org:/glusterfs_brick/brick1/gv0
Brick15: dl28.dg.163.org:/glusterfs_brick/brick1/gv0
Brick16: dl29.dg.163.org:/glusterfs_brick/brick1/gv0
Brick17: dl30.dg.163.org:/glusterfs_brick/brick1/gv0
Brick18: dl31.dg.163.org:/glusterfs_brick/brick1/gv0
Brick19: dl32.dg.163.org:/glusterfs_brick/brick1/gv0
Brick20: dl33.dg.163.org:/glusterfs_brick/brick1/gv0
Brick21: dl34.dg.163.org:/glusterfs_brick/brick1/gv0
Brick22: dl23.dg.163.org:/glusterfs_brick/brick2/gv0
Brick23: dl24.dg.163.org:/glusterfs_brick/brick2/gv0
Brick24: dl25.dg.163.org:/glusterfs_brick/brick2/gv0
Brick25: dl26.dg.163.org:/glusterfs_brick/brick2/gv0
Brick26: dl27.dg.163.org:/glusterfs_brick/brick2/gv0
Brick27: dl28.dg.163.org:/glusterfs_brick/brick2/gv0
Brick28: dl29.dg.163.org:/glusterfs_brick/brick2/gv0
Brick29: dl30.dg.163.org:/glusterfs_brick/brick2/gv0
Brick30: dl31.dg.163.org:/glusterfs_brick/brick2/gv0
Brick31: dl32.dg.163.org:/glusterfs_brick/brick2/gv0
Brick32: dl33.dg.163.org:/glusterfs_brick/brick2/gv0
Brick33: dl34.dg.163.org:/glusterfs_brick/brick2/gv0
Brick34: dl23.dg.163.org:/glusterfs_brick/brick3/gv0
Brick35: dl24.dg.163.org:/glusterfs_brick/brick3/gv0
Brick36: dl25.dg.163.org:/glusterfs_brick/brick3/gv0
Brick37: dl26.dg.163.org:/glusterfs_brick/brick3/gv0
Brick38: dl27.dg.163.org:/glusterfs_brick/brick3/gv0
Brick39: dl28.dg.163.org:/glusterfs_brick/brick3/gv0
Brick40: dl29.dg.163.org:/glusterfs_brick/brick3/gv0
Brick41: dl30.dg.163.org:/glusterfs_brick/brick3/gv0
Brick42: dl31.dg.163.org:/glusterfs_brick/brick3/gv0
Brick43: dl32.dg.163.org:/glusterfs_brick/brick3/gv0
Brick44: dl33.dg.163.org:/glusterfs_brick/brick3/gv0
Brick45: dl34.dg.163.org:/glusterfs_brick/brick3/gv0
Brick46: dl0.dg.163.org:/glusterfs_brick/brick1/gv0
Brick47: dl1.dg.163.org:/glusterfs_brick/brick1/gv0
Brick48: dl2.dg.163.org:/glusterfs_brick/brick1/gv0
Brick49: dl3.dg.163.org:/glusterfs_brick/brick1/gv0
Brick50: dl5.dg.163.org:/glusterfs_brick/brick1/gv0
Brick51: dl6.dg.163.org:/glusterfs_brick/brick1/gv0
Brick52: dl9.dg.163.org:/glusterfs_brick/brick1/gv0
Brick53: dl10.dg.163.org:/glusterfs_brick/brick1/gv0
Brick54: dl11.dg.163.org:/glusterfs_brick/brick1/gv0
Brick55: dl12.dg.163.org:/glusterfs_brick/brick1/gv0
Brick56: dl13.dg.163.org:/glusterfs_brick/brick1/gv0
Brick57: dl14.dg.163.org:/glusterfs_brick/brick1/gv0
Options Reconfigured:
performance.cache-size: 10GB
performance.parallel-readdir: on
performance.readdir-ahead: on
network.inode-lru-limit: 20
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
features.inode-quota: off
features.quota: off
cluster.quorum-reads: on
cluster.quorum-count: 2
cluster.quorum-type: fixed
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.server-quorum-ratio: 51%



huting3
huti...@corp.netease.com
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel