Hi there,
Well, it seems the heal has finally finished. I couldn't see/find any
related log message; is there such a message in a specific log file?
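For reference, a quick way to check from the CLI whether anything is still
pending (the volume name below is a placeholder):

    # list entries still pending heal on each brick; an empty list means the heal is done
    gluster volume heal <VOLNAME> info

The self-heal daemon also logs to /var/log/glusterfs/glustershd.log on each
node, which might contain the completion message you are looking for.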
But I see the same behaviour as when the last heal finished: all CPU
cores are consumed by brick processes, not only by the formerly failed
bricksdd1, bu
As you mentioned after creating the /var/run/gluster directory I got a
statedump file in there.
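For anyone following along, statedumps are usually triggered with something
like the following (volume name is a placeholder):

    # dump the state of all brick processes into the statedump directory (/var/run/gluster by default)
    gluster volume statedump <VOLNAME>

A client-side dump can also be obtained by sending SIGUSR1 to the glusterfs
mount process, if that turns out to be needed.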
As a workaround I have now removed the quota for this specific directory, and as
it is a production server I cannot currently "play" with it by adding the
quota back and having the same problem as it
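For reference, removing and later re-adding a directory quota is normally done
along these lines (volume name, path and size below are placeholders):

    # drop the limit from the affected directory
    gluster volume quota <VOLNAME> remove /path/to/directory
    # re-apply it later, once the hang is understood
    gluster volume quota <VOLNAME> limit-usage /path/to/directory 10GB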
Thanks for letting us know. Sanoj, can you take a look at this?
Thanks.
Nithya
On 14 August 2018 at 13:58, mabi wrote:
> As you mentioned after creating the /var/run/gluster directory I got a
> statedump file in there.
>
> As a workaround I have now removed the quota for this specific directory
Hi,
Currently the master branch is locked for fixing failures in the regression
test suite [1].
As a result we are not releasing the next minor update for the 3.12 branch,
which falls on the 10th of every month.
The next 3.12 update would be around the 10th of September, 2018.
Apologies for the delay.
Hi,
That's actually pretty bad; we've all been waiting for the memory leak
patch for a while now, and an extra month is a bit of a nightmare for us.
Is there no way to get 3.12.12 with that patch sooner, at least? I'm
getting a bit tired of rebooting virtual machines by hand every day to
avoid the OO
+1
Considering that master is no longer locked, it would be nice if a
release could be made sooner. Amar sent a missing backport [1] which
also fixes a mem leak issue on the client side. This needs to go in too.
Regards,
Ravi
[1] https://review.gluster.org/#/c/glusterfs/+/20723/
On 08/14/201
Thanks for the info!
I cannot see anything in the mount log besides one line every time it
rotates:
[2018-08-13 06:25:02.246187] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk]
0-glusterfs: No change in volfile,continuing
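In case it helps others debugging the same thing, the mount log can be made
more verbose through the client log-level option (volume name is a
placeholder):

    # raise the client (mount) log verbosity; remember to set it back afterwards
    gluster volume set <VOLNAME> diagnostics.client-log-level DEBUG
    # restore the default
    gluster volume set <VOLNAME> diagnostics.client-log-level INFO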
But in the glfsheal-gv1.log of the volumes I did find some kind of
server
I agree as well. This is a bug that is impacting users.
On 14 August 2018 at 16:30, Ravishankar N wrote:
> +1
>
> Considering that master is no longer locked, it would be nice if a release
> can be made sooner. Amar sent a missing back port [1] which also fixes a
> mem leak issue on the client
Hi,
The RCA for the memory leak had a mistake; the leak is yet to be identified,
and the patch mentioned doesn't fix it.
It's in the protocol client. If someone can take a look at it, that would be nice.
On Tue, Aug 14, 2018 at 5:49 PM Nithya Balachandran wrote:
>
> I agree as well. This is a bug that is im
On Fri, 2018-08-10 at 09:39 -0400, Kaleb S. KEITHLEY wrote:
> On 08/10/2018 09:23 AM, Karli Sjöberg wrote:
> > On Fri, 2018-08-10 at 21:23 +0800, Pui Edylie wrote:
> > > Hi Karli,
> > >
> > > Storhaug works with glusterfs 4.1.2 and latest nfs-ganesha.
> > >
> > > I just installed them last weeken
Bad news: the blocked process happened again, this time with another directory
of another user which is NOT over his quota but which also has quota enabled.
The symptoms on the Linux side are the same:
[Tue Aug 14 15:30:33 2018] INFO: task php5-fpm:14773 blocked for more than 120
seconds.
[Tue A
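If it helps, more detail on such a hang can usually be collected like this
(the PID is taken from the trace above; both commands need root):

    # kernel stack of the blocked task
    cat /proc/14773/stack
    # dump all blocked (D-state) tasks into the kernel log
    echo w > /proc/sysrq-trigger
    dmesg | tail -n 50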
Hi Karli,
I'm not 100% sure this is related, but when I set up my ZFS NFS HA per
https://github.com/ewwhite/zfs-ha/wiki I was not able to get the failover
to work with NFS v4 but only with NFS v3.
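If you want to rule out the NFS version quickly, forcing v3 on the client is
just a mount option (server and export below are placeholders):

    # force NFSv3 from the client side
    mount -t nfs -o vers=3 <server>:/<export> /mnt/test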
From the client point of view, it really looked like with NFS v4 there is
an open file handle and t
On Tue, Aug 14, 2018 at 7:23 PM, mabi wrote:
> Bad news: the process blocked happened again this time with another
> directory of another user which is NOT over his quota but which also has
> quota enabled.
>
> The symptoms on the Linux side are the same:
>
> [Tue Aug 14 15:30:33 2018] INFO: task
On 15 Aug 2018 07:43, Pui Edylie wrote:
> Hi Karli,
> I think Alex is right in regards with the NFS version and state.
Yeah, I'm setting up the tests now, I'll report back once it's done!
> I am only using NFSv3 and the failover is working per expectation.
> In m
Hi Karli,
I think Alex is right in regards with the NFS version and state.
I am only using NFSv3 and the failover is working per expectation.
In my use case, I have 3 nodes with ESXi 6.7 as the OS and set up 1x gluster
VM on each ESXi host using its local datastore.
Once I have formed the