Re: [Gluster-users] Conv distr-repl 2 to repl 3 arb 1 now 2 of 6 arb bricks won't get healed

2021-05-30 Thread a.schwibbe
Meanwhile I tried reset-brick on one of the failing arbiters on node2, but with the same results. The behaviour is reproducible: the arbiter stays empty.
node0: 192.168.0.40
node1: 192.168.0.41
node3: 192.168.0.80
volume info:
Volume Name: gv0
Type: Distributed-Replicate
Volume ID:
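For readers following along, a reset-brick cycle on an arbiter brick generally takes the form below. The hostname and brick path are illustrative placeholders, not values from this thread:

```shell
# Take the problem arbiter brick offline (placeholder host/path)
gluster volume reset-brick gv0 node2:/data/arb/brick0 start

# Wipe or recreate the brick directory as needed, then re-add the
# same brick; "commit force" restarts it and triggers a full sync
gluster volume reset-brick gv0 node2:/data/arb/brick0 \
    node2:/data/arb/brick0 commit force

# Watch whether entries actually land on the arbiter
gluster volume heal gv0 info
```

If the arbiter still stays empty after commit force, checking the self-heal daemon logs on the arbiter node is usually the next step.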

[Gluster-users] Geo-replication adding new master node

2021-05-30 Thread David Cunningham
Hello, We have a GlusterFS configuration with mirrored nodes on the master side geo-replicating to mirrored nodes on the secondary side. When geo-replication is initially created it seems to automatically add all the mirrored nodes on the master side as geo-replication master nodes, which is
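As context for the question, a geo-replication session is normally created once per volume pair, and GlusterFS enrolls all bricks of the master volume automatically. A minimal sketch, with placeholder volume and host names:

```shell
# Distribute ssh keys for the session (run on one master node)
gluster system:: execute gsec_create

# Create the session; "mastervol" and "secondaryhost::secondaryvol"
# are placeholders for the actual volume and secondary endpoint
gluster volume geo-replication mastervol \
    secondaryhost::secondaryvol create push-pem

gluster volume geo-replication mastervol \
    secondaryhost::secondaryvol start
```

When a new brick or node is later added to the master volume, re-running create with push-pem and force is the documented way to pull the new node into the existing session.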

[Gluster-users] Conv distr-repl 2 to repl 3 arb 1 now 2 of 6 arb bricks won't get healed

2021-05-30 Thread a.schwibbe
I am seeking help here after looking for solutions on the web for my distributed-replicated volume. The volume has been in operation since v3.10, and I have upgraded through to 7.9, replaced nodes, and replaced bricks without a problem. I love it. Finally I wanted to extend my 6x2 distributed replicated volume with
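For anyone attempting the same conversion, turning a 6x2 distributed-replicated volume into replica 3 arbiter 1 is done by adding one arbiter brick per replica pair in a single add-brick call. A sketch with placeholder hosts and paths (six arbiter bricks for six subvolumes):

```shell
# Convert each replica-2 pair to replica 3 with a thin arbiter brick;
# hosts/paths below are illustrative, not from the thread
gluster volume add-brick gv0 replica 3 arbiter 1 \
    node0:/data/arb/brick0 node1:/data/arb/brick1 \
    node0:/data/arb/brick2 node1:/data/arb/brick3 \
    node0:/data/arb/brick4 node1:/data/arb/brick5

# Heal should then populate the new arbiters with metadata
gluster volume heal gv0 full
gluster volume heal gv0 info
```

Brick order matters here: arbiters are assigned to subvolumes in the order given, so they should be spread so no replica pair has its arbiter on a node already holding a data brick of that pair.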