Hi Krishna,

Please check whether this file exists on the slave:
'/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py'
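
For example, something like the following run from the master node should
confirm it (the slave hostnames and the root user are taken from your log and
status output below; adjust if yours differ):

    ssh root@sj-gluster01 "ls -l /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py"
    ssh root@sj-gluster02 "ls -l /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py"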

- Sunny
On Wed, Oct 24, 2018 at 4:36 PM Krishna Verma <kve...@cadence.com> wrote:
>
> Hi Everyone,
>
> I have created a 4*4 distributed Gluster volume, but when I start the
> geo-replication session it fails with the errors below.
>
> [2018-10-24 10:02:03.857861] I [gsyncdstatus(monitor):245:set_worker_status] 
> GeorepStatus: Worker Status Change status=Initializing...
>
> [2018-10-24 10:02:03.858133] I [monitor(monitor):155:monitor] Monitor: 
> starting gsyncd worker   brick=/gfs1/brick1/gv1  slave_node=sj-gluster02
>
> [2018-10-24 10:02:03.954746] I [gsyncd(agent /gfs1/brick1/gv1):297:main] 
> <top>: Using session config file       
> path=/var/lib/glusterd/geo-replication/gv1_sj-gluster01_gv1/gsyncd.conf
>
> [2018-10-24 10:02:03.956724] I [changelogagent(agent 
> /gfs1/brick1/gv1):72:__init__] ChangelogAgent: Agent listining...
>
> [2018-10-24 10:02:03.958110] I [gsyncd(worker /gfs1/brick1/gv1):297:main] 
> <top>: Using session config file      
> path=/var/lib/glusterd/geo-replication/gv1_sj-gluster01_gv1/gsyncd.conf
>
> [2018-10-24 10:02:03.975778] I [resource(worker 
> /gfs1/brick1/gv1):1377:connect_remote] SSH: Initializing SSH connection 
> between master and slave...
>
> [2018-10-24 10:02:07.413379] E [syncdutils(worker 
> /gfs1/brick1/gv1):305:log_raise_exception] <top>: connection to peer is broken
>
> [2018-10-24 10:02:07.414144] E [syncdutils(worker 
> /gfs1/brick1/gv1):801:errlog] Popen: command returned error   cmd=ssh 
> -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
> /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S 
> /tmp/gsyncd-aux-ssh-OE_W1C/cf9a66dce686717c4a5ef9a7c3a7f8be.sock sj-gluster01 
> /nonexistent/gsyncd slave gv1 sj-gluster01::gv1 --master-node noida-gluster01 
> --master-node-id 08925454-9fea-4b24-8f82-9d7ad917b870 --master-brick 
> /gfs1/brick1/gv1 --local-node sj-gluster02 --local-node-id 
> f592c041-dcae-493c-b5a0-31e376a5be34 --slave-timeout 120 --slave-log-level 
> INFO --slave-gluster-log-level INFO --slave-gluster-command-dir 
> /usr/local/sbin/  error=2
>
> [2018-10-24 10:02:07.414386] E [syncdutils(worker 
> /gfs1/brick1/gv1):805:logerr] Popen: ssh> /usr/bin/python2: can't open file 
> '/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py': [Errno 2] No such file 
> or directory
>
> [2018-10-24 10:02:07.422688] I [repce(agent 
> /gfs1/brick1/gv1):80:service_loop] RepceServer: terminating on reaching EOF.
>
> [2018-10-24 10:02:07.422842] I [monitor(monitor):266:monitor] Monitor: worker 
> died before establishing connection       brick=/gfs1/brick1/gv1
>
> [2018-10-24 10:02:07.435054] I [gsyncdstatus(monitor):245:set_worker_status] 
> GeorepStatus: Worker Status Change status=Faulty
>
> MASTER NODE          MASTER VOL    MASTER BRICK        SLAVE USER    SLAVE                SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> --------------------------------------------------------------------------------------------------------------------------------------------
> noida-gluster01      gv1           /gfs1/brick1/gv1    root          sj-gluster01::gv1    N/A           Faulty    N/A             N/A
> noida-gluster02      gv1           /gfs1/brick1/gv1    root          sj-gluster01::gv1    N/A           Faulty    N/A             N/A
> gluster-poc-noida    gv1           /gfs1/brick1/gv1    root          sj-gluster01::gv1    N/A           Faulty    N/A             N/A
> noi-poc-gluster      gv1           /gfs1/brick1/gv1    root          sj-gluster01::gv1    N/A           Faulty    N/A             N/A
>
> Could someone please help?
>
> /Krishna
>
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
