Hi,

I just know someone must have seen this before: geo-replication on 3.6.1 (EPEL, CentOS 6.6).
Popen: ssh> [2014-12-08 22:57:31.433468] E [socket.c:2267:socket_connect_finish] 0-glusterfs: connection to 127.0.0.1:24007 failed (Connection refused)
Popen: ssh> [2014-12-08 22:57:31.433484] T [socket.c:723:__socket_disconnect] 0-glusterfs: disconnecting 0x22c6550, state=0 gen=0 sock=6
Popen: ssh> [2014-12-08 22:57:31.433497] D [socket.c:704:__socket_shutdown] 0-glusterfs: shutdown() returned -1. Transport endpoint is not connected
Popen: ssh> [2014-12-08 22:57:31.433506] D [socket.c:2344:socket_event_handler] 0-transport: disconnecting now
Popen: ssh> [2014-12-08 22:57:31.433541] T [cli.c:270:cli_rpc_notify] 0-glusterfs: got RPC_CLNT_DISCONNECT

Is my google-fu deserting me in my hour of need?

BR
Jan

From: Gong XiaoHui <[email protected]>
Date: Monday 08 December 2014 at 7:45 AM
To: "'[email protected]'" <[email protected]>
Subject: [Gluster-devel] some issues about geo-replication and gfapi

Hi,

I have two questions about glusterfs. The glusterfs version is 3.6.1.

1. When I use libgfapi to write a file to a volume that is configured as a geo-replication master volume, it does not work well; the geo-replication status is faulty. The following is the log:

[2014-12-08 13:12:11.708616] I [master(/data_xfs/geo2-master):1330:crawl] _GMaster: finished hybrid crawl syncing, stime: (1418010314, 0)
[2014-12-08 13:12:11.709706] I [master(/data_xfs/geo2-master):480:crawlwrap] _GMaster: primary master with volume id d220647c-5730-4cef-a89b-932470c914d2 ...
[2014-12-08 13:12:11.735719] I [master(/data_xfs/geo2-master):491:crawlwrap] _GMaster: crawl interval: 3 seconds
[2014-12-08 13:12:11.811722] I [master(/data_xfs/geo2-master):1182:crawl] _GMaster: slave's time: (1418010314, 0)
[2014-12-08 13:12:11.840826] E [repce(/data_xfs/geo2-master):207:__call__] RepceClient: call 8318:139656072095488:1418015531.84 (entry_ops) failed on peer with OSError
[2014-12-08 13:12:11.840990] E [syncdutils(/data_xfs/geo2-master):270:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 164, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 643, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1344, in service_loop
    g2.crawlwrap()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 529, in crawlwrap
    self.crawl(no_stime_update=no_stime_update)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1194, in crawl
    self.process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 946, in process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 910, in process_change
    self.slave.server.entry_ops(entries)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
    raise res
OSError: [Errno 12] Cannot allocate memory
[2014-12-08 13:12:11.842178] I [syncdutils(/data_xfs/geo2-master):214:finalize] <top>: exiting.
[2014-12-08 13:12:11.843421] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.
[2014-12-08 13:12:11.843682] I [syncdutils(agent):214:finalize] <top>: exiting.
[2014-12-08 13:12:12.103255] I [monitor(monitor):222:monitor] Monitor: worker(/data_xfs/geo2-master) died in startup phase

2. Another question: I cannot configure geo-replication in 3.6.1 with the method that worked in 3.4.1.

Any response is appreciated.

---------------------------------------------------------------
Gong XiaoHui
Technology Dept. X01
Shanghai Wind Information Co., Ltd.
9/F Jian Gong Mansion, 33 Fushan Road, Pudong New Area, Shanghai, P.R.C. 200120
Tel: (0086 21)6888 2280*8310  Fax: (0086 21)6888 2281
Email: [email protected]
http://www.wind.com.cn
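For question 1, it may help to share a minimal reproducer. A bare-bones libgfapi write in C might look like the sketch below; "masterVol" and "server1" are placeholders for the actual geo-replication master volume and one of its nodes, and this is only an illustration of the usual glfs_* call sequence, not the poster's actual program. It needs a running Gluster volume and linking with -lgfapi.

```c
/* Minimal libgfapi write sketch; compile with: gcc write.c -lgfapi
 * "masterVol" and "server1" are placeholder names, not from the thread. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("masterVol");          /* placeholder volume name */
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "server1", 24007);
    glfs_set_logging(fs, "/tmp/gfapi.log", 7);   /* verbose log for debugging */
    if (glfs_init(fs) != 0) {
        perror("glfs_init");
        return 1;
    }
    glfs_fd_t *fd = glfs_creat(fs, "/hello.txt", O_CREAT | O_WRONLY, 0644);
    if (!fd) {
        perror("glfs_creat");
        glfs_fini(fs);
        return 1;
    }
    const char msg[] = "hello from gfapi\n";
    if (glfs_write(fd, msg, strlen(msg), 0) < 0)
        perror("glfs_write");
    glfs_close(fd);
    glfs_fini(fs);
    return 0;
}
```

If a create/write through this path reliably drives the geo-replication worker into the entry_ops OSError above while a FUSE-mount write does not, that narrows the problem down considerably.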
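For question 2, the setup procedure did change after 3.4: from 3.5 onward a geo-replication session is normally created with the push-pem workflow rather than the older manual key distribution. A sketch of the usual command sequence follows; masterVol, slavehost and slaveVol are placeholders, and this is a config fragment that assumes a reachable slave cluster with a passwordless-SSH-capable root account.

```shell
# Generate the pem keys on the master cluster (run once, on one master node)
gluster system:: execute gsec_create

# Create the session, pushing the keys to the slave nodes
gluster volume geo-replication masterVol slavehost::slaveVol create push-pem

# Start the session and check its status
gluster volume geo-replication masterVol slavehost::slaveVol start
gluster volume geo-replication masterVol slavehost::slaveVol status
```

If the create step fails, the 3.6 CLI usually prints which prerequisite (SSH keys, slave volume state) is missing, which is a good starting point for comparison with the 3.4.1 procedure.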
_______________________________________________ Gluster-devel mailing list [email protected] http://supercolony.gluster.org/mailman/listinfo/gluster-devel
