https://docs.gluster.org/en/v3/Troubleshooting/resolving-splitbrain/
Hopefully the link above will help you fix it.
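For the file your heal info reported, the fix on that page boils down to picking one good copy and healing from it. A rough sketch (the policy chosen here is just an example; the doc explains which policy fits your case):

```shell
# Resolve using the copy with the newest modification time (one of several policies):
gluster volume heal gv0 split-brain latest-mtime /glusterp1/images/centos-server-001.qcow2

# Or explicitly pick which brick's copy wins:
gluster volume heal gv0 split-brain source-brick glusterp2:/bricks/brick1/gv0 \
    /glusterp1/images/centos-server-001.qcow2

# Then confirm nothing is left in split-brain:
gluster volume heal gv0 info split-brain
```

These need to run against your live cluster, so double-check the file path against your own heal info output first.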
Diego
On Wed, May 9, 2018, 21:53 Thing wrote:
> [trying to read,
>
>
> I can't understand what is wrong?
>
> [root@glusterp1 gv0]# gluster volume heal gv0
[trying to read,
I can't understand what is wrong?
[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
- Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
- Is in split-brain
Status: Connected
Number of entries: 1
Also, I have this "split brain"?
[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
- Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
/glusterp1/images/centos-server-001.qcow2 - Is in split-brain
[root@glusterp1 gv0]# !737
gluster v status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
Show us output from: gluster v status
It should be easy to fix: stop the gluster daemon on that node, mount the
brick, then start the gluster daemon again.
Check: gluster v status
Does it show the brick up?
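In shell form, the steps above look roughly like this (service name and mount point assume a stock CentOS 7 / systemd setup; adjust to yours):

```shell
# On glusterp1, the node whose brick didn't mount:
systemctl stop glusterd        # stop the gluster management daemon
mount /bricks/brick1           # mount the brick filesystem (assumes an fstab entry)
systemctl start glusterd       # start the daemon again
gluster v status               # the brick should now show Online = Y
```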
HTH,
Diego
On Wed, May 9, 2018, 20:01 Thing wrote:
> Hi,
>
> I have 3
Hi,
I have 3 Centos7.4 machines set up as a 3-way raid 1.
Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount on
boot and as a result it's empty.
Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
/bricks/brick1/gv0 as expected.
Is there a way to get