Aravinda,
I figured it out. The problem was that I was using the public IPs to create
the gluster cluster, which started giving the transport endpoint issue. I
found a workaround by using the private EC2 DNS names for peering and the
public ones for geo-replication, which worked like a charm. Sorry, if this doesn't m
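For anyone who finds this thread later, here is a minimal sketch of that
workaround; the DNS names below are hypothetical placeholders, while gvol
and xvol are the volume names used in this thread:

# Peer the two master nodes over their private (intra-VPC) EC2 DNS names
gluster peer probe ip-10-0-0-12.ec2.internal

# Create and start the geo-replication session against the slave's public DNS name
gluster volume geo-replication gvol ec2-54-1-2-3.us-west-2.compute.amazonaws.com::xvol create push-pem
gluster volume geo-replication gvol ec2-54-1-2-3.us-west-2.compute.amazonaws.com::xvol start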
Alright, here you go. Slaves xfs1 and xfs2:
[root@xfs1 ~]# cat /var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2-d7dbe99ee6d5\:gluster%3A%2F%2F127.0.0.1%3Axvol.gluster.log | less
[2015-11-17 15:30:32.082984] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epo
Looks like an I/O error on the slave while doing keep_alive. We can get more
useful info about it from the slave log files.
On the Slave nodes, look for errors in
/var/log/glusterfs/geo-replication-slaves/*.log and
/var/log/glusterfs/geo-replication-slaves/*.gluster.log
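To pull just the error lines out quickly, something like this should work
(a sketch; the ' E ' pattern matches Gluster's Error log level in these files):

grep ' E ' /var/log/glusterfs/geo-replication-slaves/*.log \
    /var/log/glusterfs/geo-replication-slaves/*.gluster.log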
regards
Aravinda
I also noted that the second master, gfs2, alternates between Passive and Faulty.
Not sure if this matters, but I changed the /etc/hosts file so that 127.0.0.1
maps to gfs1 (and so on for each node), because otherwise my node would not
reach peer-in-cluster state.
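For clarity, a sketch of the /etc/hosts change described above, as it might
look on gfs1; the peer's private IP is a hypothetical placeholder:

127.0.0.1    localhost gfs1
10.0.0.12    gfs2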
Gluster version: 3.7.6-1
OS: RHEL 7
[root@gfs1 ~]# cat /var/log/
One status row should show Active and the other should show Passive. Please
provide logs from the gfs1 and gfs2 nodes
(/var/log/glusterfs/geo-replication/gvol/*.log).
Also, please let us know:
1. Gluster version and OS
2. Output of `ps aux | grep gsyncd` from the Master nodes and Slave nodes
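For reference, the status rows mentioned above come from the geo-replication
status command; a sketch, assuming the session targets slave host xfs1 and
slave volume xvol as elsewhere in this thread:

gluster volume geo-replication gvol xfs1::xvol status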
regards
Aravinda
Hi all,
I'm working on a Geo-replication setup that I'm having issues with.
Situation:
- In the east region of AWS, I created a replicated volume between 2
nodes; let's call this volume gvol.
- In the west region of AWS, I created another replicated volume between 2
nodes; let's call this volume xvol.
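A sketch of how the two replicated volumes described above would be created;
the brick paths are hypothetical, and the hostnames are the ones used
elsewhere in this thread:

# East region masters
gluster volume create gvol replica 2 gfs1:/data/brick1/gvol gfs2:/data/brick1/gvol
gluster volume start gvol

# West region slaves
gluster volume create xvol replica 2 xfs1:/data/brick1/xvol xfs2:/data/brick1/xvol
gluster volume start xvol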