Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-04 Thread Artem Russakovskii
The fuse crash happened two more times, but this time monit helped recover within 1 minute, so it's a great workaround for now. What's odd is that the crashes are only happening on one of 4 servers, and I don't know why. Sincerely, Artem -- Founder, Android Police
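(For anyone setting up a similar watchdog: a minimal remount check along these lines can be driven by monit or cron; the mount point, server and volume names below are placeholders, not details from this thread.)

  #!/bin/sh
  # Remount a gluster fuse mount if the client process has crashed; all names are assumptions.
  MOUNTPOINT=/mnt/gluster            # assumed mount point
  SERVER=gluster1                    # assumed volfile server
  VOLUME=myvol                       # assumed volume name
  if ! mountpoint -q "$MOUNTPOINT"; then
      umount -l "$MOUNTPOINT" 2>/dev/null     # clear the stale mount entry left by the crash
      mount -t glusterfs "$SERVER:/$VOLUME" "$MOUNTPOINT"
  fi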

Re: [Gluster-users] 0-epoll: Failed to dispatch handler

2019-02-04 Thread Raghavendra Gowdappa
On Mon, Feb 4, 2019 at 8:18 PM Dieter Molketin < dieter.molke...@deutsche-telefon.de> wrote: > After upgrade from glusterfs 3.12 to version 5.3 I see following error > message in all logfiles multiple times: > > [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to > dispatch handler

[Gluster-users] 0-epoll: Failed to dispatch handler

2019-02-04 Thread Dieter Molketin
After upgrading from glusterfs 3.12 to version 5.3, I see the following error message in all logfiles multiple times: [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler Also a fresh installation with glusterfs 5.3 produces this error message over and over again. What

Re: [Gluster-users] gluster remove-brick

2019-02-04 Thread mohammad kashif
Hi Nithya I tried attaching the logs but they were too big, so I have put them on a drive accessible by everyone: https://drive.google.com/drive/folders/1744WcOfrqe_e3lRPxLpQ-CBuXHp_o44T?usp=sharing I am attaching the rebalance logs, which cover the period when I ran fix-layout after adding a new disk
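(For reference, a fix-layout run like the one mentioned above is started and monitored with the standard gluster CLI; VOLNAME below is a placeholder.)

  # spread the directory layout over newly added bricks without migrating data
  gluster volume rebalance VOLNAME fix-layout start
  # check progress; the rebalance log referenced above is written on each node
  gluster volume rebalance VOLNAME status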

Re: [Gluster-users] gluster remove-brick

2019-02-04 Thread Nithya Balachandran
Hi, On Mon, 4 Feb 2019 at 16:39, mohammad kashif wrote: > Hi Nithya > > Thanks for replying so quickly. It is very much appreciated. > > There are lots of "[No space left on device]" errors which I cannot > understand, as there is plenty of space on all of the nodes. > This means that Gluster
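(As a rough sketch of the checks involved when "No space left on device" shows up during a remove-brick, with placeholder volume and brick names:)

  # free blocks on the brick's backing filesystem
  df -h /bricks/brick1
  # inode exhaustion also produces "No space left on device" even when blocks are free
  df -i /bricks/brick1
  # progress of the data migration started by remove-brick
  gluster volume remove-brick VOLNAME server1:/bricks/brick1 status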

Re: [Gluster-users] Help analise statedumps

2019-02-04 Thread Pedro Costa
Hi Sanju, If it helps, here’s also a statedump (taken just now) since the reboot: https://pmcdigital.sharepoint.com/:u:/g/EbsT2RZsuc5BsRrf7F-fw-4BocyeogW-WvEike_sg8CpZg?e=a7nTqS Many thanks, P. From: Pedro Costa Sent: 04 February 2019 10:12 To: 'Sanju Rakonde' Cc: gluster-users Subject:
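(For context, statedumps like the one linked are usually produced with the gluster CLI; VOLNAME is a placeholder and the dump location is the default one.)

  # dump brick-process state; files land under /var/run/gluster by default
  gluster volume statedump VOLNAME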

Re: [Gluster-users] Corrupted File readable via FUSE?

2019-02-04 Thread David Spisla
Hello Amar, sounds good. So far this patch has only been merged into master; I think it should be part of the next v5.x patch release! Regards David On Mon, 4 Feb 2019 at 09:58, Amar Tumballi Suryanarayan < atumb...@redhat.com> wrote: > Hi David, > > I guess

Re: [Gluster-users] gluster remove-brick

2019-02-04 Thread mohammad kashif
Hi Nithya Thanks for replying so quickly. It is very much appreciated. There are lots of "[No space left on device]" errors which I cannot understand, as there is plenty of space on all of the nodes. A little bit of background will be useful in this case. I had a cluster of seven nodes of varying

[Gluster-users] Memory management, OOM kills and glusterfs

2019-02-04 Thread Raghavendra Gowdappa
All, Csaba, Manoj and I are presenting our experiences with using FUSE as an interface for Glusterfs at Vault'19 [1]. One of the areas where Glusterfs has faced difficulties is memory management. One of the reasons for high memory consumption has been the amount of memory consumed by glusterfs
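(One quick way to see where a fuse client's memory is going, as a sketch with a placeholder mount path: check the client's RSS and take a client statedump, whose memory-accounting sections list per-translator allocations.)

  # resident memory of the glusterfs fuse client for a given mount
  ps -o pid,rss,cmd -C glusterfs | grep /mnt/gluster
  # send SIGUSR1 to the client to write a statedump under /var/run/gluster,
  # which includes per-translator memory accounting
  kill -USR1 <pid-of-glusterfs-client>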

[Gluster-users] BOF Session - FOSDEM Today

2019-02-04 Thread Armin Weißer
To all the Gluster folks @ FOSDEM: there will be a great BOF Session today. Where: Room H.3242 When: 11:30 am What: Gluster Performance - Status Quo and Best Practices Hope to see you there! Cheers, Armin

Re: [Gluster-users] Corrupted File readable via FUSE?

2019-02-04 Thread Amar Tumballi Suryanarayan
Hi David, I guess https://review.gluster.org/#/c/glusterfs/+/21996/ helps to fix the issue. I will leave it to Raghavendra Bhat to reconfirm. Regards, Amar On Fri, Feb 1, 2019 at 8:45 PM David Spisla wrote: > Hello Gluster Community, > I have got a 4 Node Cluster with a Replica 4 Volume, so