So the origin of all your problems is that the glusterp3 node is in the
Rejected state. You should be able to see an error entry in the glusterd
logs on glusterp1 and glusterp2 about why this peer was rejected during the
handshake. If you can point me to that log entry, it should give us a clue
about what has gone wrong.
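To dig that entry out, a grep over the glusterd log on glusterp1 or glusterp2 should work; /var/log/glusterfs/glusterd.log is the usual location on RPM-based installs, so adjust the path if yours differs:

```shell
# Search the glusterd log for the peer-rejection reason.
# The path is the usual one on RPM-based installs; adjust if needed.
LOG=/var/log/glusterfs/glusterd.log
if [ -r "$LOG" ]; then
  # Show the most recent lines mentioning a rejection or handshake failure.
  grep -iE "reject|handshake" "$LOG" | tail -n 20
else
  echo "no readable log at $LOG"
fi
```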
Maybe remove the peer glusterp3 via "gluster peer detach" and then re-add it
with "gluster peer probe"?
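That detach/re-probe cycle would look roughly like the following, run from glusterp1. It is shown as a dry run (each command prefixed with echo) so nothing destructive happens by accident; drop the echo to run it for real, and note that detach can refuse if volumes still reference bricks on that peer:

```shell
# Dry-run sketch of removing and re-adding the rejected peer, from glusterp1.
# Remove the leading "echo" on each line to execute for real.
PEER=glusterp3
echo gluster peer detach "$PEER"   # may need "force" if volumes reference it
echo gluster peer probe "$PEER"
echo gluster peer status           # verify it returns to "Peer in Cluster"
```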
On 14 October 2016 at 12:16, Thing wrote:
> I seem to have a broken volume on glusterp3 which I don't seem to be able to
> fix. How do I fix it, please?
>
>
> [root@glusterp1 /]# ls -l /data1
> total 4
> -rw-r--r--. 2 root root 0
I seem to have a broken volume on glusterp3 which I don't seem to be able to
fix. How do I fix it, please?
[root@glusterp1 /]# ls -l /data1
total 4
-rw-r--r--. 2 root root 0 Dec 14 2015 file1
-rw-r--r--. 2 root root 0 Dec 14 2015 file2
-rw-r--r--. 2 root root 0 Dec 14 2015 file3
-rw-r--r--. 2 roo
So glusterp3 is in the Rejected state:
[root@glusterp1 /]# gluster peer status
Number of Peers: 2
Hostname: glusterp2.graywitch.co.nz
Uuid: 93eebe2c-9564-4bb0-975f-2db49f12058b
State: Peer in Cluster (Connected)
Other names:
glusterp2
Hostname: glusterp3.graywitch.co.nz
Uuid: 5d59b704-e42f-46c6-8c14
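For a peer stuck in Rejected, the commonly documented recovery (from the Gluster troubleshooting guide) is to reset the rejected node's config store and let it resync from the cluster. Roughly, run on glusterp3 itself, shown here as a dry run; remove the echo prefixes to execute for real:

```shell
# Dry-run sketch of the documented "Peer Rejected" recovery, run ON glusterp3.
# It wipes /var/lib/glusterd except glusterd.info (the node's own UUID),
# then restarts glusterd so the peer re-syncs volume info from the cluster.
# Remove the leading "echo" on each line to execute for real.
echo systemctl stop glusterd
echo "find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete"
echo systemctl start glusterd
echo gluster peer probe glusterp1   # re-probe a healthy node
echo systemctl restart glusterd
echo gluster peer status            # should now show "Peer in Cluster"
```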
Hmm, it seems I have something rather inconsistent:
[root@glusterp1 /]# gluster volume create gv1 replica 3
glusterp1:/brick1/gv1 glusterp2:/brick1/gv1 glusterp3:/brick1/gv1
volume create: gv1: failed: Host glusterp3 is not in 'Peer in Cluster' state
[root@glusterp1 /]# gluster peer probe glusterp3
pee
I deleted a gluster volume gv0 as I wanted to make it thin provisioned.
I have rebuilt "gv0" but I am getting a failure:
==
[root@glusterp1 /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 20G 3.9G 17G 20% /
devtmpfs
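For the thin-provisioned rebuild of gv0, the brick would typically sit on an LVM thin volume. A sketch of the usual lvcreate sequence follows; the volume group name "vg_gluster", the LV names, and the sizes are placeholders I have assumed, not values from this thread. Shown as a dry run; remove the echo prefixes to run it for real:

```shell
# Dry-run sketch of creating an LVM thin pool and a thin LV for a brick.
# "vg_gluster", the pool/LV names, and sizes are hypothetical placeholders.
# Remove the leading "echo" on each line to execute for real.
echo lvcreate --thin -L 50G vg_gluster/thinpool
echo lvcreate --thin -V 100G --name brick1 vg_gluster/thinpool
echo mkfs.xfs -i size=512 /dev/vg_gluster/brick1   # XFS is the usual choice
echo mount /dev/vg_gluster/brick1 /brick1
```

With -V larger than -L, blocks are allocated from the pool only as data is written, which is the thin-provisioning behaviour being asked for here.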