Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Oleksandr Natalenko
I perform the tests using 1) rsync (massive copy of millions of files); 2) find (simple tree traversal). To check whether the memory leak happens, I use the find tool. I've performed two traversals (w/ and w/o fopen-keep-cache=off) with a remount between them, but I didn't encounter "kernel notifier loop
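The traversal test described can be approximated with a small script. This is an illustrative sketch only (using Python's os.walk as a stand-in for find; the reporter's actual procedure used the find tool against the FUSE mount):

```python
import os

def traverse(root):
    """Walk a tree and lstat every entry, roughly what `find` does.

    Repeating this over a FUSE mount (with and without
    fopen-keep-cache=off), with a remount in between, while watching
    the glusterfs client process RSS is one way to expose a
    client-side memory leak.
    """
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            os.lstat(os.path.join(dirpath, name))
            count += 1
    return count
```

Comparing the client's resident memory after each traversal shows whether memory is returned between runs.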

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Xavier Hernandez
If this message appears way before the volume is unmounted, can you try to start the volume manually using this command and repeat the tests? glusterfs --fopen-keep-cache=off --volfile-server= --volfile-id=/ This will prevent invalidation requests from being sent to the kernel, so there

Re: [Gluster-devel] [Gluster-users] GlusterFS FUSE client hangs on rsyncing lots of file

2016-01-21 Thread Raghavendra G
On Thu, Jan 21, 2016 at 10:49 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On 01/18/2016 02:28 PM, Oleksandr Natalenko wrote: > >> XFS. Server side works OK, I'm able to mount volume again. Brick is 30% >> full. >> > > Oleksandr, > Will it be possible to get the statedump

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Glomski, Patrick
Hello, Pranith. The typical behavior is that the %cpu on a glusterfsd process jumps to the number of processor cores available (800% or 1200%, depending on the pair of nodes involved) and the load average on the machine goes very high (~20). The volume's heal statistics output shows that it is

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Pranith Kumar Karampuri
On 01/21/2016 08:25 PM, Glomski, Patrick wrote: Hello, Pranith. The typical behavior is that the %cpu on a glusterfsd process jumps to the number of processor cores available (800% or 1200%, depending on the pair of nodes involved) and the load average on the machine goes very high (~20). The

Re: [Gluster-devel] Netbsd regressions are failing because of connection problems?

2016-01-21 Thread Michael Scherer
On Thursday, January 21, 2016 at 06:05 +0530, Pranith Kumar Karampuri wrote: > /origin/* > ERROR: Error cloning remote repo 'origin' > hudson.plugins.git.GitException: Command "git -c core.askpass=true fetch > --tags --progress git://review.gluster.org/glusterfs.git >

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Glomski, Patrick
I should mention that the problem is not currently occurring and there are no heals (output appended). By restarting the gluster services, we can stop the crawl, which lowers the load for a while. Subsequent crawls seem to finish properly. For what it's worth, files/folders that show up in the

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Pranith Kumar Karampuri
On 01/21/2016 09:26 PM, Glomski, Patrick wrote: I should mention that the problem is not currently occurring and there are no heals (output appended). By restarting the gluster services, we can stop the crawl, which lowers the load for a while. Subsequent crawls seem to finish properly. For

Re: [Gluster-devel] Netbsd regressions are failing because of connection problems?

2016-01-21 Thread Emmanuel Dreyfus
On Thu, Jan 21, 2016 at 04:49:28PM +0100, Michael Scherer wrote: > > review.gluster.org[0: 184.107.76.10]: errno=Connection refused > > So I found nothing in gerrit nor netbsd. And not the DNS, since it > managed to resolve stuff fine. > > I suspect the problem was on gerrit, not on netbsd. Did

[Gluster-devel] Closing bugs filed against the mainline/devel version earlier

2016-01-21 Thread Niels de Vos
Hi Naga, a while back you closed a bunch of bugs that roughly matched this criteria: - select mainline bug to potentially close - is there a cloned bug to a stable release (backport) - is the backport marked as CLOSED/CURRENTRELEASE -> close the mainline bug as CLOSED/NEXTRELEASE (from
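The closure criteria listed in the message can be expressed as a small predicate. This is an illustrative sketch, not Bugzilla's actual API: the dictionary field names (`clones`, `status`, `resolution`) are assumptions made for the example.

```python
def should_close_mainline(bug):
    """Return True if a mainline bug matches the criteria described:
    it has at least one clone on a stable release branch (a backport)
    that is already marked CLOSED/CURRENTRELEASE."""
    return any(
        clone["status"] == "CLOSED"
        and clone["resolution"] == "CURRENTRELEASE"
        for clone in bug.get("clones", [])
    )

# A bug matching this predicate would then be closed as
# CLOSED/NEXTRELEASE on mainline.
```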

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Oleksandr Natalenko
I see an extra GF_FREE (node); added with two patches: === $ git diff HEAD~2 | gist https://gist.github.com/9524fa2054cc48278ea8 === Is that intentional? I guess I face a double-free issue. On Thursday, January 21, 2016 at 17:29:53 EET Kaleb KEITHLEY wrote: > On 01/20/2016 04:08 AM, Oleksandr Natalenko

Re: [Gluster-devel] Netbsd regressions are failing because of connection problems?

2016-01-21 Thread Michael Scherer
On Thursday, January 21, 2016 at 16:07 +, Emmanuel Dreyfus wrote: > On Thu, Jan 21, 2016 at 04:49:28PM +0100, Michael Scherer wrote: > > > review.gluster.org[0: 184.107.76.10]: errno=Connection refused > > > > So I found nothing in gerrit nor netbsd. And not the DNS, since it > > managed to

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Oleksandr Natalenko
With the proposed patches I get the following assertion while copying files to GlusterFS volume: === glusterfs: mem-pool.c:305: __gf_free: Assertion `0xCAFEBABE == header->magic' failed. Program received signal SIGABRT, Aborted. [Switching to Thread 0x7fffe9ffb700 (LWP 12635)]
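The failed assertion comes from GlusterFS's allocator, which stamps each allocation header with a magic value (0xCAFEBABE) and verifies it in __gf_free; a mismatch typically means a double free or header corruption. A minimal Python sketch of the same guard pattern (the header layout here is simplified for illustration, not the real mem-pool structure):

```python
MAGIC = 0xCAFEBABE

class Header:
    """Simplified stand-in for an allocation header."""
    def __init__(self):
        self.magic = MAGIC

def gf_free(header):
    # Mirrors the __gf_free check: a wrong magic value means the block
    # was already freed or its header was overwritten.
    assert header.magic == MAGIC, "header magic mismatch (double free?)"
    header.magic = 0  # poison so a second free trips the assertion

allocation = Header()
gf_free(allocation)      # first free passes the magic check
try:
    gf_free(allocation)  # second free hits the assertion
except AssertionError:
    pass
```

Applying a patch that adds an extra GF_FREE on the same node would fail this check in exactly the way the quoted assertion shows.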

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Kaleb KEITHLEY
On 01/20/2016 04:08 AM, Oleksandr Natalenko wrote: > Yes, there are couple of messages like this in my logs too (I guess one > message per each remount): > > === > [2016-01-18 23:42:08.742447] I [fuse-bridge.c:3875:notify_kernel_loop] 0- > glusterfs-fuse: kernel notifier loop terminated > === >

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Pranith Kumar Karampuri
On 01/22/2016 12:51 AM, Glomski, Patrick wrote: Pranith, could this kind of behavior be self-inflicted by us deleting files directly from the bricks? We have done that in the past to clean up issues where gluster wouldn't allow us to delete from the mount. Not sure. I haven't seen many

[Gluster-devel] Reply: Re: Gluster AFR volume write performance has been seriously affected by GLUSTERFS_WRITE_IS_APPEND in afr_writev

2016-01-21 Thread li . ping288
Hi Pranith, your reply is appreciated. Pranith Kumar Karampuri wrote on 2016/01/20 18:51:19: > From: Pranith Kumar Karampuri > To: li.ping...@zte.com.cn, gluster-devel@gluster.org, > Date: 2016/01/20 18:51 > Subject: Re: [Gluster-devel] Gluster AFR

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Glomski, Patrick
Unfortunately, all samba mounts to the gluster volume through the gfapi vfs plugin have been disabled for the last 6 hours or so and the frequency of %cpu spikes has increased. We had switched to sharing a fuse mount through samba, but I just disabled that as well. There are no samba shares of this

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Pranith Kumar Karampuri
On 01/22/2016 07:19 AM, Pranith Kumar Karampuri wrote: On 01/22/2016 07:13 AM, Glomski, Patrick wrote: We use the samba glusterfs virtual filesystem (the current version provided on download.gluster.org), but no windows clients connecting directly. Hmm.. Is

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Kaleb KEITHLEY
On 01/21/2016 06:59 PM, Oleksandr Natalenko wrote: > I see an extra GF_FREE (node); added with two patches: > > === > $ git diff HEAD~2 | gist > https://gist.github.com/9524fa2054cc48278ea8 > === > > Is that intentional? I guess I face a double-free issue. > I presume you're referring to the

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Pranith Kumar Karampuri
On 01/22/2016 07:13 AM, Glomski, Patrick wrote: We use the samba glusterfs virtual filesystem (the current version provided on download.gluster.org), but no windows clients connecting directly. Hmm.. Is there a way to disable using this and check if the CPU%

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Glomski, Patrick
Samba version is 4.1.17 that you guys maintain at download.gluster.org. The vfs plugin comes packaged with it. http://download.gluster.org/pub/gluster/glusterfs/samba/EPEL.repo/epel-6/x86_64/ # smbd --version Version 4.1.17 # rpm -qa | grep samba-vfs-glusterfs

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Pranith Kumar Karampuri
Do you have any windows clients? I see a lot of getxattr calls for "glusterfs.get_real_filename" which lead to full readdirs of the directories on the brick. Pranith On 01/22/2016 12:51 AM, Glomski, Patrick wrote: Pranith, could this kind of behavior be self-inflicted by us deleting files

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Glomski, Patrick
We use the samba glusterfs virtual filesystem (the current version provided on download.gluster.org), but no windows clients connecting directly. On Thu, Jan 21, 2016 at 8:37 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > Do you have any windows clients? I see a lot of getxattr

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Pranith Kumar Karampuri
On 01/22/2016 07:25 AM, Glomski, Patrick wrote: Unfortunately, all samba mounts to the gluster volume through the gfapi vfs plugin have been disabled for the last 6 hours or so and the frequency of %cpu spikes has increased. We had switched to sharing a fuse mount through samba, but I just

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Raghavendra Talur
On Jan 22, 2016 7:27 AM, "Pranith Kumar Karampuri" wrote: > > > > On 01/22/2016 07:19 AM, Pranith Kumar Karampuri wrote: >> >> >> >> On 01/22/2016 07:13 AM, Glomski, Patrick wrote: >>> >>> We use the samba glusterfs virtual filesystem (the current version provided on

Re: [Gluster-devel] Netbsd regressions are failing because of connection problems?

2016-01-21 Thread Emmanuel Dreyfus
Michael Scherer wrote: > Depend, if they exhausted FD or something ? I am not a java specialist. It is not the same errno, AFAIK. > Could also just be too long to answer due to the load, but it was not > loaded :/ High loads give timeouts. I may be wrong, but I believe

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Pranith Kumar Karampuri
On 01/22/2016 07:25 AM, Glomski, Patrick wrote: Unfortunately, all samba mounts to the gluster volume through the gfapi vfs plugin have been disabled for the last 6 hours or so and the frequency of %cpu spikes has increased. We had switched to sharing a fuse mount through samba, but I just

Re: [Gluster-devel] Jenkins accounts for all devs.

2016-01-21 Thread Ravishankar N
On 01/14/2016 12:16 PM, Kaushal M wrote: On Thu, Jan 14, 2016 at 10:33 AM, Raghavendra Talur wrote: On Thu, Jan 14, 2016 at 10:32 AM, Ravishankar N wrote: On 01/08/2016 12:03 PM, Raghavendra Talur wrote: P.S: Stop using the "universal" jenkins

Re: [Gluster-devel] Jenkins accounts for all devs.

2016-01-21 Thread Mohammed Rafi K C
On 01/22/2016 11:31 AM, Ravishankar N wrote: > On 01/14/2016 12:16 PM, Kaushal M wrote: >> On Thu, Jan 14, 2016 at 10:33 AM, Raghavendra Talur >> wrote: >>> >>> On Thu, Jan 14, 2016 at 10:32 AM, Ravishankar N >>> >>> wrote: On 01/08/2016 12:03

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Poornima Gurusiddaiah
Hi, As mentioned, the glusterfs.get_real_filename getxattr is called when we need to check if a file (case insensitive) exists in a directory. You could run the following command to get to know the perf details. I guess you need to have debuginfo installed for this to work. Record perf: $perf
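The case-insensitive existence check described above is why these getxattr calls trigger full readdirs of the brick directory: with no case-folded index, the only way to find the "real" name is a linear scan. A rough Python equivalent of that scan (the function name mirrors the xattr key; this is an illustration, not the brick-side implementation):

```python
import os

def get_real_filename(dirpath, wanted):
    """Case-insensitive lookup by linear directory scan.

    This is O(n) in directory size, so frequent calls (e.g. from
    Samba clients needing Windows name semantics) on large
    directories can drive CPU usage up sharply.
    """
    wanted = wanted.lower()
    for entry in os.listdir(dirpath):
        if entry.lower() == wanted:
            return entry  # the real, on-disk spelling
    return None
```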

Re: [Gluster-devel] [Gluster-users] heal hanging

2016-01-21 Thread Glomski, Patrick
Pranith, could this kind of behavior be self-inflicted by us deleting files directly from the bricks? We have done that in the past to clean up issues where gluster wouldn't allow us to delete from the mount. If so, is it feasible to clean them up by running a search on the .glusterfs