Meanwhile I tried reset-brick on one of the failing arbiters on node2, but with
the same result. The behaviour is reproducible: the arbiter brick stays empty.
node0: 192.168.0.40
node1: 192.168.0.41
node3: 192.168.0.80
volume info:
Volume Name: gv0
Type: Distributed-Replicate
Volume ID:
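For reference, the reset-brick attempt followed the documented two-step pattern; a minimal sketch, where the brick path is illustrative rather than my actual layout:

```shell
# Take the failing arbiter brick offline (brick path is hypothetical)
gluster volume reset-brick gv0 node2:/data/glusterfs/gv0/arbiter1 start

# Re-commit the same brick so it gets rebuilt from its replica peers
gluster volume reset-brick gv0 node2:/data/glusterfs/gv0/arbiter1 \
    node2:/data/glusterfs/gv0/arbiter1 commit force

# Watch whether self-heal actually repopulates the brick
gluster volume heal gv0 info
```

Even after this sequence, heal info keeps listing entries and the arbiter brick remains empty.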
Hello,
We have a GlusterFS configuration with mirrored nodes on the master side
geo-replicating to mirrored nodes on the secondary side.
When geo-replication is initially created it seems to automatically add all
the mirrored nodes on the master side as geo-replication master nodes,
which is
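For context, the session was set up with the standard geo-replication commands; a sketch with placeholder master/secondary names (none of these are our real hostnames or volume names):

```shell
# Create the geo-replication session (push-pem distributes SSH keys)
gluster volume geo-replication mastervol secondary-host::secondaryvol \
    create push-pem

# Start replication; one worker is spawned per master brick
gluster volume geo-replication mastervol secondary-host::secondaryvol start

# Status shows a row per master brick, which is why every mirrored
# node on the master side appears in the session
gluster volume geo-replication mastervol secondary-host::secondaryvol status
```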
I am seeking help here after searching the web for solutions for my
distributed-replicated volume.
The volume has been in operation since v3.10, and I have upgraded it through
to 7.9, replaced nodes, and replaced bricks without a problem. I love it.
Finally I wanted to extend my 6x2 distributed replicated volume with
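Extending a distributed-replicate volume means adding bricks in multiples of the replica count; a minimal sketch for growing a 6x2 volume by one replica pair, with hypothetical brick paths:

```shell
# Add one more replica-2 pair, creating a 7th distribute subvolume
# (host names and brick paths are illustrative)
gluster volume add-brick gv0 \
    node0:/data/glusterfs/gv0/brick7 node1:/data/glusterfs/gv0/brick7

# Rebalance so existing data spreads onto the new subvolume
gluster volume rebalance gv0 start
gluster volume rebalance gv0 status
```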