Hi,

I am trying to set up GlusterFS geo-replication following the steps from the
GlusterFS documentation
(https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/).
In my setup I have two nodes on the master side and a single node on the
slave side. Once I issue the command "gluster volume geo-replication
mastervolume slavenode::slavevolume start" to start the geo-replication
(the full sequence of setup commands is sketched after the status output
below), the status is shown as below:

Master-node-2 gv0 /data/brick1/gv0 root ssh://Slave-node-1::gv0 N/A Faulty
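
For reference, the session was created and started with the steps from the
guide, roughly as sketched here (mastervolume, slavenode and slavevolume are
placeholders for my actual volume and host names):

    # Sketch of the setup sequence from the documentation; names are
    # placeholders for my actual ones.
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervolume slavenode::slavevolume create push-pem
    gluster volume geo-replication mastervolume slavenode::slavevolume start
    gluster volume geo-replication mastervolume slavenode::slavevolume status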

If I watch the command (watch gluster volume geo-replication status), the
state keeps alternating between Faulty and Active. The Active status is
shown as below:

Master-node-2 gv0 /data/brick1/gv0 root ssh://Slave-node-1::gv0 Slave-node-1 Active Hybrid Crawl N/A

The logs show the following error:

[2018-12-26 01:39:57.773381] I [gsyncdstatus(/data/brick1/gv0):245:set_worker_crawl_status] GeorepStatus: Crawl Status: Hybrid Crawl
[2018-12-26 01:39:58.774770] I [master(/data/brick1/gv0):1368:crawl] _GMaster: processing xsync changelog /var/lib/misc/glusterfsd/gv0/ssh%3A%2F%2Froot%40192.168.115.215%3Agluster%3A%2F%2F127.0.0.1%3Agv0/430af6dc2d4f6e41e4786764428f83dd/xsync/XSYNC-CHANGELOG.1545788397
[2018-12-26 01:39:59.249290] E [resource(/data/brick1/gv0):234:errlog] Popen: command "rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --xattrs --acls . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-cexLuj/722b9cf6c96014bed67371d01a23d439.sock --compress root@Slave-node-1:/proc/6983/cwd" returned with 3
[2018-12-26 01:39:59.249869] I [syncdutils(/data/brick1/gv0):238:finalize] : exiting.
[2018-12-26 01:39:59.251211] I [repce(/data/brick1/gv0):92:service_loop] RepceServer: terminating on reaching EOF.
[2018-12-26 01:39:59.251513] I [syncdutils(/data/brick1/gv0):238:finalize] : exiting.
[2018-12-26 01:39:59.685758] I [monitor(monitor):357:monitor] Monitor: worker(/data/brick1/gv0) died in startup phase
[2018-12-26 01:39:59.688673] I [gsyncdstatus(monitor):241:set_worker_status] GeorepStatus: Worker Status: Faulty
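
The rsync exit code 3 in that log means "errors selecting input/output
files, dirs" according to the rsync man page. To rule out the SSH transport
itself, the same leg can be tested by hand, reusing the key path from the
log (a sketch; Slave-node-1 stands for my actual slave host):

    # Manual test of the geo-replication SSH transport, reusing the key
    # path that appears in the error log above.
    ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no \
        -i /var/lib/glusterd/geo-replication/secret.pem \
        root@Slave-node-1 'rsync --version'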

When I checked the files in the volume, the files on the master had been
replicated to the slave (I compared the two sides roughly as sketched
below).
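
A rough way to verify, assuming both volumes are mounted locally at
/mnt/gv0 (an example path, not necessarily where the volumes are mounted
in a given setup):

    # Hypothetical spot check: compare recursive listings of the master
    # mount and the slave mount; /mnt/gv0 is an example mount point.
    diff <(ls -R /mnt/gv0) <(ssh root@Slave-node-1 'ls -R /mnt/gv0')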

After this, if I create a new file, it gets synced to the slave volume. But
if I delete a file or change its contents, that change is not reflected on
the slave.
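
To see whether the session ever moves past the initial Hybrid Crawl (the
xsync-based crawl in the log above) into Changelog Crawl, and whether the
changelog is enabled on the master volume, these checks should help (a
sketch using my session names):

    # Detailed per-brick session status, including the crawl mode.
    gluster volume geo-replication gv0 Slave-node-1::gv0 status detail
    # Confirm the changelog translator is enabled on the master volume.
    gluster volume get gv0 changelog.changelog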

Any pointers are appreciated.
Regards
Abhilash
