One status row should show Active and the other should show Passive. Please
provide logs from the gfs1 and gfs2 nodes
(/var/log/glusterfs/geo-replication/gvol/*.log).
Also, please let us know:
1. Gluster version and OS
2. Output of `ps aux | grep gsyncd` from the Master nodes and Slave nodes
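Something along these lines should capture the requested information (host names as in your setup; the os-release path is a guess depending on your distro):

```shell
# Run on each master node (gfs1, gfs2) and on the slave (xfs1)
gluster --version
cat /etc/os-release          # or /etc/redhat-release on older RHEL/CentOS
ps aux | grep gsyncd

# Geo-replication logs for the gvol session (master side)
tail -n 100 /var/log/glusterfs/geo-replication/gvol/*.log
```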
regards
Aravinda
On 11/17/2015 02:09 AM, Deepak Ravi wrote:
Hi all
I'm working on a Geo-replication setup that I'm having issues with.
Situation:
- In the east region of AWS, I created a replicated volume between 2
nodes; let's call this volume gvol
- In the west region of AWS, I created another replicated volume between 2
nodes; let's call this volume xvol
- Geo-replication was created and started successfully:
[root@gfs1 mnt]# gluster volume geo-replication gvol xfs1::xvol status

MASTER NODE    MASTER VOL    MASTER BRICK        SLAVE USER    SLAVE         SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------
gfs1           gvol          /data/brick/gvol    root          xfs1::xvol    N/A           Passive    N/A             N/A
gfs2           gvol          /data/brick/gvol    root          xfs1::xvol    N/A           Passive    N/A             N/A
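For reference, the session was set up roughly as follows (reconstructed from memory, so the exact options may differ; assumes passwordless SSH from gfs1 to xfs1):

```shell
# On a master node (gfs1)
gluster system:: execute gsec_create
gluster volume geo-replication gvol xfs1::xvol create push-pem
gluster volume geo-replication gvol xfs1::xvol start
gluster volume geo-replication gvol xfs1::xvol status
```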
The data on the nodes (gfs1 and gfs2) was not being replicated to xfs1 at all.
I tried restarting the services and it still didn't help. Looking at the log
files didn't help me much because I didn't know what I should be looking
for.
Can someone point me in the right direction?
Thanks
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users