Re: [Gluster-users] Conv distr-repl 2 to repl 3 arb 1 now 2 of 6 arb bricks won't get healed

2021-05-31 Thread a . schwibbe
I can't find anything suspicious in the brick logs other than authentication refused to clients trying to mount a dir that does not exist on the arb_n, because the self-heal isn't working. I tried to add another node and replace-brick a faulty arbiter, however this new arbiter sees the same

Re: [Gluster-users] Conv distr-repl 2 to repl 3 arb 1 now 2 of 6 arb bricks won't get healed

2021-05-31 Thread Strahil Nikolov
Hi, I think that the best way is to go through the logs on the affected arbiter brick (maybe even temporarily increase the log level). What is the output of: find /var/brick/arb_0/brick -not -user 36 -print ; find /var/brick/arb_0/brick -not -group 36 -print Maybe there are some files/dirs that are
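The ownership check Strahil suggests can be wrapped in a small helper; this is a sketch, not from the thread, and uid/gid 36 is simply the expected brick owner in his example (adjust to whatever owns your healthy bricks):

```shell
#!/bin/sh
# check_owner BRICKPATH UID GID: print every entry under the brick
# that is NOT owned by the expected uid or gid.
check_owner() {
    brick=$1; uid=$2; gid=$3
    find "$brick" -not -user "$uid" -print
    find "$brick" -not -group "$gid" -print
}

# Usage as in the thread (expected owner uid/gid 36):
# check_owner /var/brick/arb_0/brick 36 36
```

Any path printed is a candidate for the permission mismatch that could make self-heal skip it.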

Re: [Gluster-users] Conv distr-repl 2 to repl 3 arb 1 now 2 of 6 arb bricks won't get healed

2021-05-31 Thread Strahil Nikolov
For the arb_0 I see only 8 clients, while there should be 12 clients: Brick : 192.168.0.40:/var/bricks/0/brick Clients connected : 12 Brick : 192.168.0.41:/var/bricks/0/brick Clients connected : 12 Brick : 192.168.0.80:/var/bricks/arb_0/brick Clients connected : 8 Can you try to reconnect them. The
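The per-brick client counts quoted above come from the volume status output; a sketch of the check (the volume name is a placeholder, not from the thread):

```shell
# Show clients connected to each brick; the arbiter bricks should
# report the same count as their data-brick partners.
gluster volume status VOLNAME clients

# Forcing a reconnect is done client-side by remounting, e.g.:
#   umount /mnt/gluster
#   mount -t glusterfs 192.168.0.40:/VOLNAME /mnt/gluster
```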

Re: [Gluster-users] Conv distr-repl 2 to repl 3 arb 1 now 2 of 6 arb bricks won't get healed

2021-05-31 Thread a . schwibbe
Ok, will do. working arbiter: ls -ln /var/bricks/arb_0/ >>> drwxr-xr-x 13 33 33 146 May 29 22:38 brick ls -lna /var/bricks/arb_0/brick >>> drw--- 262 0 0 8192 May 29 22:38 .glusterfs + all data-brick dirs ... affected arbiter: ls -ln /var/bricks/arb_0/ >>> drwxr-xr-x 3 0 0 24 May 30

Re: [Gluster-users] Conv distr-repl 2 to repl 3 arb 1 now 2 of 6 arb bricks won't get healed

2021-05-31 Thread Strahil Nikolov
I would avoid shrinking the volume. An oVirt user reported issues after volume shrinking. Did you try to format the arbiter brick and 'replace-brick' ? Best Regards, Strahil Nikolov I can't find anything suspicious in the brick logs other than authentication refused to clients trying to mount
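The replace-brick approach Strahil mentions usually looks like the following; volume name and the new brick path are placeholders, not details from the thread:

```shell
# Swap the faulty arbiter brick for a freshly formatted one.
VOL=VOLNAME
OLD=192.168.0.80:/var/bricks/arb_0/brick
NEW=192.168.0.80:/var/bricks/arb_0_new/brick

gluster volume replace-brick "$VOL" "$OLD" "$NEW" commit force
gluster volume heal "$VOL" full    # trigger a full self-heal afterwards
gluster volume heal "$VOL" info    # watch the heal queue drain
```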

Re: [Gluster-users] Geo-replication adding new master node

2021-05-31 Thread David Cunningham
Hi Aravinda, Thank you very much - we will give that a try. On Mon, 31 May 2021 at 20:29, Aravinda VK wrote: > Hi David, > > On 31-May-2021, at 10:37 AM, David Cunningham > wrote: > > Hello, > > We have a GlusterFS configuration with mirrored nodes on the master side > geo-replicating to

Re: [Gluster-users] Conv distr-repl 2 to repl 3 arb 1 now 2 of 6 arb bricks won't get healed

2021-05-31 Thread Andreas Schwibbe
Hm, I tried format and reset-brick on node2 - no success. I tried a new brick on new node3 and replace-brick - no success, as the new arbiter is created wrongly and self-heal does not work. I also restarted all nodes one by one without any improvement. If shrinking the volume is not
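For reference, the reset-brick cycle Andreas describes is normally a two-step operation (volume name and brick path are placeholders):

```shell
# Take the faulty arbiter brick offline...
gluster volume reset-brick VOLNAME node2:/var/bricks/arb_0/brick start

# ...wipe or reformat the brick directory on node2, then re-commit
# the same brick path and let self-heal repopulate it:
gluster volume reset-brick VOLNAME node2:/var/bricks/arb_0/brick \
        node2:/var/bricks/arb_0/brick commit force
gluster volume heal VOLNAME full
```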

Re: [Gluster-users] Issues with glustershd with release 8.4 and 9.1

2021-05-31 Thread Marco Fais
Srijan no problem at all -- thanks for your help. If you need any additional information please let me know. Regards, Marco On Thu, 27 May 2021 at 18:39, Srijan Sivakumar wrote: > Hi Marco, > > Thank you for opening the issue. I'll check the log contents and get back > to you. > > On Thu,

Re: [Gluster-users] Conv distr-repl 2 to repl 3 arb 1 now 2 of 6 arb bricks won't get healed

2021-05-31 Thread a . schwibbe
Thanks Strahil, unfortunately I cannot connect, as the mount is denied as shown in the mount.log provided. IPs > n.n.n.100 are clients and simply cannot mount the volume. When killing the arb pids on node2, new clients can mount the volume. When bringing them up again I experience the same problem. I
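Locating and restarting the arbiter brick processes mentioned above can be done via volume status; a sketch with a placeholder volume name:

```shell
# The PID column of volume status identifies each brick process.
gluster volume status VOLNAME | grep arb_

# kill <pid-of-affected-arbiter-brick>   # stop just that brick
gluster volume start VOLNAME force       # restart any offline bricks
```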

Re: [Gluster-users] Geo-replication adding new master node

2021-05-31 Thread Aravinda VK
Hi David, > On 31-May-2021, at 10:37 AM, David Cunningham > wrote: > > Hello, > > We have a GlusterFS configuration with mirrored nodes on the master side > geo-replicating to mirrored nodes on the secondary side. > > When geo-replication is initially created it seems to automatically add