I have a 3-node replica volume (including an arbiter) on GlusterFS 3.8.11, and 
last night one of my nodes (node1) ran out of memory for some unknown reason, 
so the Linux OOM killer killed the glusterd and glusterfs processes.
I restarted glusterd, but now that node is in the "Peer Rejected" state as seen 
from the other nodes, and it in turn rejects the two other nodes, as you can 
see below in the output of "gluster peer status":
Number of Peers: 2
Hostname: arbiternode.domain.tld
Uuid: 60a03a81-ba92-4b84-90fe-7b6e35a10975
State: Peer Rejected (Connected)
Hostname: node2.domain.tld
Uuid: 4834dceb-4356-4efb-ad8d-8baba44b967c
State: Peer Rejected (Connected)
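On node1 I also looked through the glusterd log for a checksum mismatch, which I understand is the usual trigger for a rejected peer. A quick check (the log path is the default and may differ by distribution):

```shell
# glusterd rejects a peer when the volume checksums differ; the log records it.
# Default log path (an assumption; may vary by distro):
LOG=/var/log/glusterfs/glusterd.log
# A line like the following is what to look for:
#   0-management: Cksums of volume <name> differ. local cksum = ..., remote cksum = ...
grep -i "cksum" "$LOG" | tail -n 20
```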
I also rebooted node1 just in case, but that did not help.
I read here http://www.spinics.net/lists/gluster-users/msg25803.html that the 
problem could have something to do with the volume info file; in my case I 
checked the file:
and they are the same on node1 and arbiternode, but on node2 the order of the 
following volume parameters is different:
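If it helps, this is how I understand the mismatch: glusterd checksums the info file byte for byte, so even a pure reordering of identical key/value lines counts as a difference. A small demonstration (the /var/lib/glusterd paths in the comments assume the default layout, and the volume name is a placeholder):

```shell
# Line order changes the checksum even when the key/value content is identical:
printf 'performance.readdir-ahead=on\ntransport.address-family=inet\n' > /tmp/info_node1
printf 'transport.address-family=inet\nperformance.readdir-ahead=on\n' > /tmp/info_node2
md5sum /tmp/info_node1 /tmp/info_node2   # two different checksums
# On the real nodes, the files to compare would be:
#   md5sum /var/lib/glusterd/vols/<volname>/info
#   cat    /var/lib/glusterd/vols/<volname>/cksum
```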
Could that be the reason why the peer is in the rejected state? Can I simply 
edit this file on node2 to reorder the parameters to match the other two nodes?
What else should I do to investigate the reason for this rejected peer state?
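From the docs' "Resolving Peer Rejected" troubleshooting section, the usual fix seems to be wiping node1's /var/lib/glusterd (keeping only glusterd.info, which holds the node's UUID) and letting it resync from a healthy peer; would that be the right approach here? A sketch of what I understand the procedure to be (the destructive service commands are shown as comments, and the selective-delete step is demonstrated on a throwaway directory):

```shell
# Recovery sketch, to be run on the rejected node (node1) ONLY:
#
#   systemctl stop glusterd
#   find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
#   systemctl start glusterd
#   gluster peer probe node2.domain.tld
#   systemctl restart glusterd       # second restart lets volume info resync
#   gluster peer status
#
# The selective-delete step, demonstrated on a throwaway directory:
mkdir -p /tmp/demo-glusterd/vols /tmp/demo-glusterd/peers
touch /tmp/demo-glusterd/glusterd.info
find /tmp/demo-glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
ls /tmp/demo-glusterd   # only glusterd.info remains
```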
Thank you in advance for the help.
Gluster-users mailing list
