Hi!
Bumping an old thread, because there’s now activity around this bug. The github
issue is https://github.com/gluster/glusterfs/issues/2492
We just hit this bug after an update from GlusterFS 7.x to 9.4. We did not see
this in our test environment, so we went ahead with the update, but the bug is
still there.
Srijan
no problem at all -- thanks for your help. If you need any additional
information please let me know.
Regards,
Marco
Hi Marco,
Thank you for opening the issue. I'll check the log contents and get back
to you.
Srijan
thanks a million -- I have opened the issue as requested here:
https://github.com/gluster/glusterfs/issues/2492
I have attached the glusterd.log and glustershd.log files, but please let
me know if there is any other test I should do or logs I should provide.
Thanks,
Marco
Hi Marco,
If possible, let's open an issue in github and track this from there. I am
checking the previous mails in the chain to see if I can infer something
about the situation. It would be helpful if we could analyze this with the
help of log files. Especially glusterd.log and glustershd.log.
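On a default install both of those should be under /var/log/glusterfs/ on the
affected node; assuming the standard locations, something like this is enough
to bundle them for the github issue:

  # default log locations -- adjust if your install logs elsewhere
  tar czf gluster-logs.tar.gz \
      /var/log/glusterfs/glusterd.log \
      /var/log/glusterfs/glustershd.log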
Ravi,
thanks a million.
@Mohit, @Srijan please let me know if you need any additional information.
Thanks,
Marco
Hi Marco,
I haven't had any luck yet. Adding Mohit and Srijan who work in glusterd
in case they have some inputs.
-Ravi
Hi Ravi
just wondering if you have any further thoughts on this -- unfortunately it
is something still very much affecting us at the moment.
I am trying to understand how to troubleshoot it further but haven't been
able to make much progress...
Thanks,
Marco
Just to complete...
from the FUSE mount log on server 2 I see the same errors as in
glustershd.log on node 1:
[2021-05-20 17:58:34.157971 +0000] I [MSGID: 114020] [client.c:2319:notify]
0-VM_Storage_1-client-11: parent translators are ready, attempting connect
on transport []
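For reference, both excerpts come from the standard log directory on each
node; roughly how I'm comparing the two (the FUSE log file name depends on
the mount path, so treat it as a placeholder):

  # node 1: self-heal daemon log
  grep "VM_Storage_1-client-11" /var/log/glusterfs/glustershd.log
  # node 2: FUSE mount log (file is named after the mount path)
  grep "VM_Storage_1-client-11" /var/log/glusterfs/<mount-point>.log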
Hi Ravi,
thanks again for your help.
Here is the output of "cat graphs/active/VM_Storage_1-client-11/private"
from the same node where glustershd is complaining:
[xlator.protocol.client.VM_Storage_1-client-11.priv]
fd.0.remote_fd = 1
-- = --
granted-posix-lock[0] = owner =
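For anyone else wanting to pull the same information: it should be readable
through the hidden .meta directory that the meta xlator exposes on a FUSE
mount of the volume, roughly (mount point is a placeholder):

  # virtual directory exposed by the meta xlator on any FUSE mount
  cat <mount-point>/.meta/graphs/active/VM_Storage_1-client-11/private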
Hi Strahil
thanks for your reply. Just to better explain my setup, while I am using
the same nodes for oVirt and Gluster I do manage the two independently (so
Gluster is not managed by oVirt).
See below for the output you have requested:
*gluster pool list*
UUID
Hi Ravi,
thanks a million for your reply.
I have replicated the issue in my test cluster by bringing one of the nodes
down, and then up again.
The glustershd process in the restarted node is now complaining about
connectivity to two bricks in one of my volumes:
---
[2021-05-19 14:05:14.462133
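For completeness, the brick and self-heal daemon state on the restarted node
can be cross-checked with the standard CLI, e.g.:

  gluster volume status VM_Storage_1
  gluster volume heal VM_Storage_1 info summary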
I think that we also have to take a look at the quorum settings. Usually oVirt
adds hosts as part of the TSP even if they have no bricks in the volume.
Can you provide the output of:
'gluster pool list' and 'gluster volume info all'
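The quorum-related options for each volume can also be checked with something
like 'gluster volume get <VOLNAME> all | grep -i quorum'.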
Best Regards,
Strahil Nikolov
Hi,
I am having significant issues with glustershd with releases 8.4 and 9.1.
My oVirt clusters are using gluster storage backends, and were running fine
with Gluster 7.x (shipped with earlier versions of oVirt Node 4.4.x).
Recently the oVirt project moved to Gluster 8.4 for the nodes, and hence our
clusters are now running it as well.