Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-04 Thread Raghavendra G
On Mon, Feb 1, 2016 at 2:24 PM, Soumya Koduri wrote: > On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote: >> Wait. It seems to be my bad. >> Before unmounting I do drop_caches (2), and glusterfs process CPU usage >> goes to 100% for a while. I haven't waited for it

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-02 Thread Xavier Hernandez
Hi Oleksandr, On 01/02/16 19:28, Oleksandr Natalenko wrote: Please take a look at the updated test results. Test: find /mnt/volume -type d RAM usage after "find" finishes: ~10.8G (see "ps" output [1]). Statedump after "find" finishes: [2]. Then I did drop_caches, and RAM usage dropped to ~4.7G
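For reference, the kind of commands behind the numbers quoted in this thread (RSS via ps, a statedump via SIGUSR1, a cache drop). This is a hedged sketch: the pgrep pattern and mount point are assumptions, not values from the thread, and the statedump path is the usual default.

```shell
# Illustrative measurement commands; adjust the pgrep pattern to your setup.
pid=$$                      # demo on the current shell; normally:
                            #   pid=$(pgrep -f 'glusterfs.*/mnt/volume')

# Resident memory of the process, in kilobytes:
ps -o rss= -p "$pid"

# Trigger a statedump (glusterfs writes it under /var/run/gluster by default):
#   kill -USR1 "$pid"

# Drop kernel dentry/inode caches (needs root):
#   echo 2 > /proc/sys/vm/drop_caches
```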

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-02 Thread Oleksandr Natalenko
02.02.2016 10:07, Xavier Hernandez wrote: Could it be memory used by Valgrind itself to track glusterfs' memory usage? Could you repeat the test without Valgrind and see if the memory usage after dropping caches returns to low values? Yup. Here are the results: === pf@server:~ » ps aux

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Oleksandr Natalenko
Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and glusterfs process CPU usage goes to 100% for a while. I hadn't waited for it to drop to 0% and instead performed the unmount. It seems glusterfs is purging inodes, and that's why it uses 100% of CPU. I've re-tested it,
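The pitfall described above (unmounting while the client is still purging inodes) can be avoided by waiting for the client's CPU usage to settle first. A minimal sketch: the pgrep pattern, the 5% threshold, and the mount point are assumptions for illustration, not values from the thread.

```shell
# Wait until a process's CPU usage falls below a small threshold,
# e.g. so glusterfs can finish purging inodes before we unmount.
wait_for_idle() {
    pid=$1
    while :; do
        cpu=$(ps -o %cpu= -p "$pid" | tr -d ' ')
        [ -z "$cpu" ] && break          # process gone: nothing to wait for
        cpu=${cpu%%[.,]*}               # keep the integer part only
        [ "$cpu" -lt 5 ] && break       # idle enough to unmount safely
        sleep 1
    done
}

# Intended use (paths illustrative; drop_caches needs root):
#   echo 2 > /proc/sys/vm/drop_caches
#   wait_for_idle "$(pgrep -f 'glusterfs.*/mnt/volume')"
#   umount /mnt/volume
wait_for_idle $$    # demo on the current (idle) shell
```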

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote: Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and glusterfs process CPU usage goes to 100% for a while. I hadn't waited for it to drop to 0% and instead performed the unmount. It seems glusterfs is purging inodes and that's

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-31 Thread Soumya Koduri
On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote: Unfortunately, this patch doesn't help. RAM usage when "find" finishes is ~9G. Here is the statedump before drop_caches: https://gist.github.com/fc1647de0982ab447e20 [mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage] size=706766688
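The memusage line quoted above is the key evidence here: ~700 MB held by a single allocation type (gf_common_mt_inode_ctx). A quick way to pull such sizes out of a statedump is an awk one-liner; this is a sketch that only handles the header/"size=" format shown in the excerpt, with a tiny sample file standing in for a real dump.

```shell
# Summarize "memusage" sections of a glusterfs statedump.
# Sample input mirrors the excerpt quoted in this message; a real
# dump has many more sections and fields.
cat > /tmp/sample_dump <<'EOF'
[mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
size=706766688
num_allocs=2454051
EOF

awk '
  /- usage-type .* memusage\]/ { type = $(NF-1) }   # remember the type name
  /^size=/ && type != ""       { sub(/^size=/, "")
                                 print type, $0     # "<type> <bytes>"
                                 type = "" }
' /tmp/sample_dump
```

On the sample this prints `gf_common_mt_inode_ctx 706766688`; sorting such output by the size column makes the dominant allocator obvious.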

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-31 Thread Oleksandr Natalenko
Unfortunately, this patch doesn't help. RAM usage when "find" finishes is ~9G. Here is the statedump before drop_caches: https://gist.github.com/fc1647de0982ab447e20 And after drop_caches: https://gist.github.com/5eab63bc13f78787ed19 And here is the Valgrind output:

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-30 Thread Xavier Hernandez
There's another inode leak caused by incorrect counting of lookups on directory reads. Here's a patch that solves the problem for 3.7: http://review.gluster.org/13324 Hopefully with this patch the memory leaks should disappear. Xavi On 29.01.2016 19:09, Oleksandr Natalenko wrote: >
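The failure mode Xavi describes fits the FUSE lookup-count model: each LOOKUP (or readdirplus) reply raises a per-inode count, and the kernel later sends FORGET for exactly the lookups it knows about; if the client takes an extra reference during a directory read that the kernel never learns of, the count can never reach zero and the inode is never freed. A toy arithmetic illustration (not GlusterFS source code):

```shell
# Toy model of the lookup-count leak: the count never returns to zero
# because one increment is invisible to the kernel's FORGET accounting.
n=0
n=$((n + 1))    # LOOKUP reply: kernel now holds 1 reference
n=$((n + 1))    # buggy extra ref taken during a directory read
n=$((n - 1))    # kernel sends FORGET for the 1 lookup it knows about
if [ "$n" -eq 0 ]; then
    echo "inode freed"
else
    echo "inode leaked (count=$n)"
fi
```

This prints "inode leaked (count=1)": with thousands of directory entries, each stuck count pins an inode context, which matches the gf_common_mt_inode_ctx growth seen in the statedumps.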

[Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-29 Thread Oleksandr Natalenko
Here is an intermediate summary of the ongoing investigation into memory leaks in the FUSE client. I use the GlusterFS v3.7.6 release with the following patches: === Kaleb S KEITHLEY (1): fuse: use-after-free fix in fuse-bridge, revisited Pranith Kumar K (1): mount/fuse: Fix use-after-free crash