Hi,
If you run the command below on the master:

gluster vol geo-rep <mastervol> <slave-vol> config
slave-gluster-command-dir <gluster-binary-location>

On the slave, run "which gluster" to find the gluster binary location there.

That command writes the same entry into the gsyncd.conf file. Please
recheck and confirm that both entries match, and also confirm that the
master and slave are running the same gluster version.
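For reference, a sketch of what the resulting entry in the session's
gsyncd.conf might look like (the path /usr/sbin and the exact file layout
are illustrative; check your own session config file, whose path appears
in your gsyncd.log):

```ini
; illustrative fragment of
; /var/lib/glusterd/geo-replication/<mastervol>_<slavehost>_<slavevol>/gsyncd.conf
slave-gluster-command-dir = /usr/sbin
```

If the directory here does not match the output of "which gluster" on the
slave (e.g. /usr/sbin/gluster), re-run the config command with the correct
location.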

- Sunny


On Mon, Jul 23, 2018 at 5:50 PM Maarten van Baarsel
<[email protected]> wrote:
>
> On 23/07/18 13:48, Sunny Kumar wrote:
>
> Hi Sunny,
>
> thanks again for replying!
>
>
> >> Can I test something else? Is the command normally run in a jail?
>
> > Please share gsyncd.log form master.
>
> [2018-07-23 12:18:19.773240] I [monitor(monitor):158:monitor] Monitor: 
> starting gsyncd worker   brick=/var/lib/gluster  slave_node=gluster-4.glstr
> [2018-07-23 12:18:19.832611] I [gsyncd(agent /var/lib/gluster):297:main] 
> <top>: Using session config file       
> path=/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.conf
> [2018-07-23 12:18:19.832674] I [gsyncd(worker /var/lib/gluster):297:main] 
> <top>: Using session config file      
> path=/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.conf
> [2018-07-23 12:18:19.834259] I [changelogagent(agent 
> /var/lib/gluster):72:__init__] ChangelogAgent: Agent listining...
> [2018-07-23 12:18:19.848596] I [resource(worker 
> /var/lib/gluster):1345:connect_remote] SSH: Initializing SSH connection 
> between master and slave...
> [2018-07-23 12:18:20.387191] E [syncdutils(worker 
> /var/lib/gluster):301:log_raise_exception] <top>: connection to peer is broken
> [2018-07-23 12:18:20.387592] E [syncdutils(worker 
> /var/lib/gluster):747:errlog] Popen: command returned error   cmd=ssh 
> -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
> /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S 
> /tmp/gsyncd-aux-ssh-nN8_GE/2648484453eaadd9d3042ceba9bafa6a.sock 
> [email protected] /nonexistent/gsyncd slave gl0 
> [email protected]::glbackup --master-node gluster-3.glstr 
> --master-node-id 9650e965-bf4f-4544-a42b-f4d540d23a1f --master-brick 
> /var/lib/gluster --local-node gluster-4.glstr --local-node-id 
> 736f6431-2f9c-4115-9790-68f9a88d99a7 --slave-timeout 120 --slave-log-level 
> INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/sbin/    
>   error=1
> [2018-07-23 12:18:20.388887] I [repce(agent 
> /var/lib/gluster):80:service_loop] RepceServer: terminating on reaching EOF.
> [2018-07-23 12:18:21.389723] I [monitor(monitor):266:monitor] Monitor: worker 
> died in startup phase     brick=/var/lib/gluster
>
> repeated again and again.
>
> Maarten.
_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users