Wait. It seems to be my bad.

Before unmounting, I drop caches (drop_caches = 2), and the glusterfs process's CPU usage goes to 100% for a while. I hadn't been waiting for it to fall back to 0% before unmounting. It seems glusterfs is purging inodes, and that's why it uses 100% CPU. I re-tested, waiting for the CPU usage to return to normal, and got no leaks.
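For anyone reproducing this, here is a rough sketch of the retest procedure (my own helper, not a gluster tool; the process PID handling, 5% threshold, and poll interval are illustrative assumptions; Linux /proc layout assumed):

```python
import os
import time

def cpu_percent(pid, interval=1.0):
    """Approximate %CPU of `pid` over `interval` seconds, via /proc/<pid>/stat."""
    def ticks():
        with open("/proc/%d/stat" % pid) as f:
            data = f.read()
        # Fields after the parenthesised comm name start at `state`;
        # utime and stime are stat fields 14 and 15 (indices 11 and 12 here).
        fields = data[data.rindex(")") + 2:].split()
        return int(fields[11]) + int(fields[12])
    t0 = ticks()
    time.sleep(interval)
    dt = ticks() - t0
    return 100.0 * dt / os.sysconf("SC_CLK_TCK") / interval

# The retest, roughly (needs root; glusterfs_pid is a placeholder):
#   open("/proc/sys/vm/drop_caches", "w").write("2")
#   while cpu_percent(glusterfs_pid) > 5.0:
#       time.sleep(5)
#   ...and only then unmount.
```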

I will verify this once more and report back.

BTW, if that works, how can I limit the inode cache for the FUSE client? I don't want it to grow beyond, say, 1G, even if the server has 48G of RAM.

On 01.02.2016 at 09:54, Soumya Koduri wrote:
On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote:
Unfortunately, this patch doesn't help.

RAM usage when "find" finishes is ~9G.

Here is the statedump before drop_caches: https://gist.github.com/fc1647de0982ab447e20

[mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
size=706766688
num_allocs=2454051


And after drop_caches: https://gist.github.com/5eab63bc13f78787ed19

[mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
size=550996416
num_allocs=1913182
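To compare two memusage sections programmatically rather than by eye, a small sketch (my own helper, not part of gluster; the key=value format is assumed from the dumps above):

```python
# Hypothetical helper: parse one memusage section out of a statedump and
# diff the before/after allocation counts shown above.

def memusage(dump_text, section):
    """Return the key=value stats under `section` as a dict of ints."""
    lines = iter(dump_text.splitlines())
    for line in lines:
        if line.strip() == section:
            stats = {}
            for line in lines:
                if line.startswith("["):  # next section begins
                    break
                if "=" in line:
                    key, _, val = line.partition("=")
                    stats[key.strip()] = int(val)
            return stats
    raise KeyError(section)

SECTION = "[mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]"
before = SECTION + "\nsize=706766688\nnum_allocs=2454051\n"
after = SECTION + "\nsize=550996416\nnum_allocs=1913182\n"

b, a = memusage(before, SECTION), memusage(after, SECTION)
print(b["num_allocs"] - a["num_allocs"])  # → 540869 inode contexts freed
```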

There isn't a significant drop in inode contexts. One of the reasons could be dentries holding a refcount on the inodes, which would result in the inodes not being purged even after fuse_forget.


pool-name=fuse:dentry_t
hot-count=32761

If '32761' is the current active dentry count, it still doesn't seem to match up with the inode count.

Thanks,
Soumya

And here is the Valgrind output: https://gist.github.com/2490aeac448320d98596

On Saturday, 30 January 2016, 22:56:37 EET Xavier Hernandez wrote:
There's another inode leak caused by an incorrect counting of lookups on directory reads.

Here's a patch that solves the problem for 3.7:

http://review.gluster.org/13324

Hopefully with this patch the memory leaks should disappear.

Xavi

On 29.01.2016 19:09, Oleksandr Natalenko wrote:
Here is an intermediate summary of the current memory-leak investigation in the FUSE client.

I use the GlusterFS v3.7.6 release with the following patches:
===

Kaleb S KEITHLEY (1):
      fuse: use-after-free fix in fuse-bridge, revisited

Pranith Kumar K (1):
      mount/fuse: Fix use-after-free crash

Soumya Koduri (3):
      gfapi: Fix inode nlookup counts
      inode: Retire the inodes from the lru list in inode_table_destroy
      upcall: free the xdr* allocations
===


With those patches, we got the API leaks fixed (I hope; brief tests suggest so) and got rid of the "kernel notifier loop terminated" message. Nevertheless, the FUSE client still leaks.

I have several test volumes with several million small files (100K…2M on average). I do 2 types of FUSE client testing:
1) find /mnt/volume -type d
2) rsync -av -H /mnt/source_volume/* /mnt/target_volume/
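To watch how the client's memory grows while these runs are in progress, a small sketch (my own helper, nothing gluster-specific; Linux /proc layout assumed, `glusterfs_pid` is a placeholder):

```python
# Hypothetical monitoring helper: parse VmRSS (resident set size, in KiB)
# from the contents of /proc/<pid>/status.

def vmrss_kib(status_text):
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    raise ValueError("no VmRSS line found")

# While find/rsync is running, sample the glusterfs client, e.g.:
#   with open("/proc/%d/status" % glusterfs_pid) as f:
#       print(vmrss_kib(f.read()), "KiB")
```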

And the most up-to-date results are shown below:

=== find /mnt/volume -type d ===

Memory consumption: ~4G

Statedump: https://gist.github.com/10cde83c63f1b4f1dd7a

Valgrind: https://gist.github.com/097afb01ebb2c5e9e78d

I guess this is fuse-bridge/fuse-resolve related.

=== rsync -av -H

/mnt/source_volume/* /mnt/target_volume/ ===

Memory consumption:
~3.3...4G

Statedump (target volume):
https://gist.github.com/31e43110eaa4da663435

Valgrind (target volume):
https://gist.github.com/f8e0151a6878cacc9b1a

I guess,

DHT-related.

Give me more patches to test :).

_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

