We can check in the brick backend:

ls -ld $BRICK_ROOT/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2
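
If that GFID belongs to a directory, the entry under .glusterfs is a
symlink (pointing into the parent directory's GFID path); for a regular
file it is a hard link, so the file type in the ls output already tells
you which one it is. If it turns out to be a symlink, something like the
following (untested sketch, same $BRICK_ROOT assumption as above) also
shows which directory name it points to:

readlink $BRICK_ROOT/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2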

regards
Aravinda

On Thursday 15 September 2016 09:12 PM, ML mail wrote:
So on my master I ran a "find /mybrick -name 'File 2016.xlsx'" and got the
following two entries:

355235   19 drwxr-xr-x   3 www-data www-data        3 Sep 15 12:48 ./data/username/files_encryption/keys/files/Dir/File\ 2016.xlsx
355234   56 -rw-r--r--   2 www-data www-data    44308 Sep 15 12:48 ./data/username/files/Dir/Ein\ und\ Ausgaben\ File\ 2016.xlsx


As you can see, there is one file and one directory named 'File 2016.xlsx'.
This is how the web application Nextcloud behaves when it uses encryption:
the file is the encrypted file, and the directory named after the file
contains the encryption keys used to encrypt/decrypt that specific file.

Now the next thing would be to find out whether geo-rep failed on the file
or on the directory named 'File 2016.xlsx'. Any ideas how I can check that?
And what else would you need to debug this issue?
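
One idea (a sketch, assuming the find above was run from /mybrick and
that this is run on the brick as root, since the trusted.* xattrs are
only visible there): the GFID in the log, f7eb9d21-..., is the parent
directory of the failing rename, so it could be compared against the
trusted.gfid of each candidate parent directory:

getfattr -n trusted.gfid -e hex '/mybrick/data/username/files/Dir'
getfattr -n trusted.gfid -e hex '/mybrick/data/username/files_encryption/keys/files/Dir'

Whichever one prints 0xf7eb9d21d39a4dd6941c46d430e18aa2 is the directory
the failing rename was happening in, which would tell us whether it was
the encrypted data file or its keys entry that geo-rep choked on.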

Regards
ML





On Thursday, September 15, 2016 9:12 AM, Aravinda <avish...@redhat.com> wrote:
Thanks for the logs from the Master. The error below says the directory is
not empty, but the names look like they are files. Please confirm from the
Master Volume whether that path is a directory or a file.

[2016-09-13 19:30:57.475649] W [fuse-bridge.c:1787:fuse_rename_cbk]
0-glusterfs-fuse: 25: /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File
2016.xlsx.ocTransferId1333449197.part ->
/.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx => -1
(Directory not empty)
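
For reference, errno 39 (ENOTEMPTY) is what rename(2) returns when the
destination already exists as a non-empty directory. A minimal local
illustration (hypothetical /tmp paths, unrelated to the volume):

mkdir -p /tmp/demo/src /tmp/demo/dst
touch /tmp/demo/dst/keyfile
mv -T /tmp/demo/src /tmp/demo/dst   # fails: Directory not empty

So on the slave the rename target 'File 2016.xlsx' presumably exists as a
directory that still has entries in it.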


regards
Aravinda


On Wednesday 14 September 2016 12:49 PM, ML mail wrote:
As requested, you will find below the last few hundred lines of the geo-rep
log file from the master node.


FILE: 
/var/log/glusterfs/geo-replication/cloud-pro/ssh%3A%2F%2Froot%40192.168.20.107%3Agluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo.log
====

[2016-09-13 19:39:48.686060] I [syncdutils(/data/cloud-pro/brick):220:finalize] 
<top>: exiting.
[2016-09-13 19:39:48.688112] I [repce(agent):92:service_loop] RepceServer: 
terminating on reaching EOF.
[2016-09-13 19:39:48.688461] I [syncdutils(agent):220:finalize] <top>: exiting.
[2016-09-13 19:39:49.518510] I [monitor(monitor):343:monitor] Monitor: 
worker(/data/cloud-pro/brick) died in startup phase
[2016-09-13 19:39:59.675405] I [monitor(monitor):266:monitor] Monitor: 
------------------------------------------------------------
[2016-09-13 19:39:59.675740] I [monitor(monitor):267:monitor] Monitor: starting 
gsyncd worker
[2016-09-13 19:39:59.768181] I [gsyncd(/data/cloud-pro/brick):710:main_i] <top>: 
syncing: gluster://localhost:cloud-pro -> 
ssh://r...@gfs1geo.domain.tld:gluster://localhost:cloud-pro-geo
[2016-09-13 19:39:59.768640] I [changelogagent(agent):73:__init__] 
ChangelogAgent: Agent listining...
[2016-09-13 19:40:02.554076] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up xsync change detection mode
[2016-09-13 19:40:02.554500] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:02.555332] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up changelog change detection mode
[2016-09-13 19:40:02.555600] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:02.556711] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up changeloghistory change detection mode
[2016-09-13 19:40:02.556983] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:04.692655] I [master(/data/cloud-pro/brick):1249:register] 
_GMaster: xsync temp directory: 
/var/lib/misc/glusterfsd/cloud-pro/ssh%3A%2F%2Froot%40192.168.20.107%3Agluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo/6b844f56a11ecd24d5e36242f045e58c/xsync
[2016-09-13 19:40:04.692945] I 
[resource(/data/cloud-pro/brick):1491:service_loop] GLUSTER: Register time: 
1473795604
[2016-09-13 19:40:04.724827] I [master(/data/cloud-pro/brick):510:crawlwrap] 
_GMaster: primary master with volume id d99af2fa-439b-4a21-bf3a-38f3849f87ec ...
[2016-09-13 19:40:04.734505] I [master(/data/cloud-pro/brick):519:crawlwrap] 
_GMaster: crawl interval: 1 seconds
[2016-09-13 19:40:04.754487] I [master(/data/cloud-pro/brick):1163:crawl] 
_GMaster: starting history crawl... turns: 1, stime: (1473688054, 0), etime: 
1473795604
[2016-09-13 19:40:05.757395] I [master(/data/cloud-pro/brick):1192:crawl] 
_GMaster: slave's time: (1473688054, 0)
[2016-09-13 19:40:05.832189] E [repce(/data/cloud-pro/brick):207:__call__] 
RepceClient: call 28167:140561674450688:1473795605.78 (entry_ops) failed on 
peer with OSError
[2016-09-13 19:40:05.832458] E 
[syncdutils(/data/cloud-pro/brick):276:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
201, in main
main_i()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
720, in main_i
local.service_loop(*[r for r in [remote] if r])
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 
1497, in service_loop
g3.crawlwrap(oneshot=True)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
571, in crawlwrap
self.crawl()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1201, in crawl
self.changelogs_batch_process(changes)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1107, in changelogs_batch_process
self.process(batch)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
992, in process
self.process_change(change, done, retry)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
934, in process_change
failures = self.slave.server.entry_ops(entries)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
226, in __call__
return self.ins(self.meth, *a)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
208, in __call__
raise res
OSError: [Errno 39] Directory not empty
[2016-09-13 19:40:05.834218] I [syncdutils(/data/cloud-pro/brick):220:finalize] 
<top>: exiting.
[2016-09-13 19:40:05.836263] I [repce(agent):92:service_loop] RepceServer: 
terminating on reaching EOF.
[2016-09-13 19:40:05.836611] I [syncdutils(agent):220:finalize] <top>: exiting.
[2016-09-13 19:40:06.558499] I [monitor(monitor):343:monitor] Monitor: 
worker(/data/cloud-pro/brick) died in startup phase
[2016-09-13 19:40:16.715217] I [monitor(monitor):266:monitor] Monitor: 
------------------------------------------------------------
[2016-09-13 19:40:16.715517] I [monitor(monitor):267:monitor] Monitor: starting 
gsyncd worker
[2016-09-13 19:40:16.812082] I [changelogagent(agent):73:__init__] 
ChangelogAgent: Agent listining...
[2016-09-13 19:40:16.812350] I [gsyncd(/data/cloud-pro/brick):710:main_i] <top>: 
syncing: gluster://localhost:cloud-pro -> 
ssh://r...@gfs1geo.domain.tld:gluster://localhost:cloud-pro-geo
[2016-09-13 19:40:19.594152] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up xsync change detection mode
[2016-09-13 19:40:19.594552] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:19.595347] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up changelog change detection mode
[2016-09-13 19:40:19.595597] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:19.596605] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up changeloghistory change detection mode
[2016-09-13 19:40:19.596873] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:21.624726] I [master(/data/cloud-pro/brick):1249:register] 
_GMaster: xsync temp directory: 
/var/lib/misc/glusterfsd/cloud-pro/ssh%3A%2F%2Froot%40192.168.20.107%3Agluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo/6b844f56a11ecd24d5e36242f045e58c/xsync
[2016-09-13 19:40:21.625017] I 
[resource(/data/cloud-pro/brick):1491:service_loop] GLUSTER: Register time: 
1473795621
[2016-09-13 19:40:21.657767] I [master(/data/cloud-pro/brick):510:crawlwrap] 
_GMaster: primary master with volume id d99af2fa-439b-4a21-bf3a-38f3849f87ec ...
[2016-09-13 19:40:21.676127] I [master(/data/cloud-pro/brick):519:crawlwrap] 
_GMaster: crawl interval: 1 seconds
[2016-09-13 19:40:21.702104] I [master(/data/cloud-pro/brick):1163:crawl] 
_GMaster: starting history crawl... turns: 1, stime: (1473688054, 0), etime: 
1473795621
[2016-09-13 19:40:22.705148] I [master(/data/cloud-pro/brick):1192:crawl] 
_GMaster: slave's time: (1473688054, 0)
[2016-09-13 19:40:22.771497] E [repce(/data/cloud-pro/brick):207:__call__] 
RepceClient: call 3931:139796592768768:1473795622.72 (entry_ops) failed on peer 
with OSError
[2016-09-13 19:40:22.771775] E 
[syncdutils(/data/cloud-pro/brick):276:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
201, in main
main_i()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
720, in main_i
local.service_loop(*[r for r in [remote] if r])
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 
1497, in service_loop
g3.crawlwrap(oneshot=True)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
571, in crawlwrap
self.crawl()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1201, in crawl
self.changelogs_batch_process(changes)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1107, in changelogs_batch_process
self.process(batch)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
992, in process
self.process_change(change, done, retry)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
934, in process_change
failures = self.slave.server.entry_ops(entries)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
226, in __call__
return self.ins(self.meth, *a)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
208, in __call__
raise res
OSError: [Errno 39] Directory not empty
[2016-09-13 19:40:22.773555] I [syncdutils(/data/cloud-pro/brick):220:finalize] 
<top>: exiting.
[2016-09-13 19:40:22.775707] I [repce(agent):92:service_loop] RepceServer: 
terminating on reaching EOF.
[2016-09-13 19:40:22.776058] I [syncdutils(agent):220:finalize] <top>: exiting.
[2016-09-13 19:40:23.598599] I [monitor(monitor):343:monitor] Monitor: 
worker(/data/cloud-pro/brick) died in startup phase
[2016-09-13 19:40:33.757551] I [monitor(monitor):266:monitor] Monitor: 
------------------------------------------------------------
[2016-09-13 19:40:33.757883] I [monitor(monitor):267:monitor] Monitor: starting 
gsyncd worker
[2016-09-13 19:40:33.853430] I [gsyncd(/data/cloud-pro/brick):710:main_i] <top>: 
syncing: gluster://localhost:cloud-pro -> 
ssh://r...@gfs1geo.domain.tld:gluster://localhost:cloud-pro-geo
[2016-09-13 19:40:33.854017] I [changelogagent(agent):73:__init__] 
ChangelogAgent: Agent listining...
[2016-09-13 19:40:36.634135] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up xsync change detection mode
[2016-09-13 19:40:36.634555] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:36.635369] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up changelog change detection mode
[2016-09-13 19:40:36.635637] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:36.636730] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up changeloghistory change detection mode
[2016-09-13 19:40:36.637002] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:38.664729] I [master(/data/cloud-pro/brick):1249:register] 
_GMaster: xsync temp directory: 
/var/lib/misc/glusterfsd/cloud-pro/ssh%3A%2F%2Froot%40192.168.20.107%3Agluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo/6b844f56a11ecd24d5e36242f045e58c/xsync
[2016-09-13 19:40:38.665039] I 
[resource(/data/cloud-pro/brick):1491:service_loop] GLUSTER: Register time: 
1473795638
[2016-09-13 19:40:38.698820] I [master(/data/cloud-pro/brick):510:crawlwrap] 
_GMaster: primary master with volume id d99af2fa-439b-4a21-bf3a-38f3849f87ec ...
[2016-09-13 19:40:38.707082] I [master(/data/cloud-pro/brick):519:crawlwrap] 
_GMaster: crawl interval: 1 seconds
[2016-09-13 19:40:38.726772] I [master(/data/cloud-pro/brick):1163:crawl] 
_GMaster: starting history crawl... turns: 1, stime: (1473688054, 0), etime: 
1473795638
[2016-09-13 19:40:39.729528] I [master(/data/cloud-pro/brick):1192:crawl] 
_GMaster: slave's time: (1473688054, 0)
[2016-09-13 19:40:39.794873] E [repce(/data/cloud-pro/brick):207:__call__] 
RepceClient: call 11631:140250873493248:1473795639.75 (entry_ops) failed on 
peer with OSError
[2016-09-13 19:40:39.795084] E 
[syncdutils(/data/cloud-pro/brick):276:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
201, in main
main_i()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
720, in main_i
local.service_loop(*[r for r in [remote] if r])
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 
1497, in service_loop
g3.crawlwrap(oneshot=True)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
571, in crawlwrap
self.crawl()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1201, in crawl
self.changelogs_batch_process(changes)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1107, in changelogs_batch_process
self.process(batch)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
992, in process
self.process_change(change, done, retry)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
934, in process_change
failures = self.slave.server.entry_ops(entries)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
226, in __call__
return self.ins(self.meth, *a)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
208, in __call__
raise res
OSError: [Errno 39] Directory not empty
[2016-09-13 19:40:39.796519] I [syncdutils(/data/cloud-pro/brick):220:finalize] 
<top>: exiting.
[2016-09-13 19:40:39.798130] I [repce(agent):92:service_loop] RepceServer: 
terminating on reaching EOF.
[2016-09-13 19:40:39.798346] I [syncdutils(agent):220:finalize] <top>: exiting.
[2016-09-13 19:40:40.638521] I [monitor(monitor):343:monitor] Monitor: 
worker(/data/cloud-pro/brick) died in startup phase
[2016-09-13 19:40:50.795886] I [monitor(monitor):266:monitor] Monitor: 
------------------------------------------------------------
[2016-09-13 19:40:50.796202] I [monitor(monitor):267:monitor] Monitor: starting 
gsyncd worker
[2016-09-13 19:40:50.897680] I [gsyncd(/data/cloud-pro/brick):710:main_i] <top>: 
syncing: gluster://localhost:cloud-pro -> 
ssh://r...@gfs1geo.domain.tld:gluster://localhost:cloud-pro-geo
[2016-09-13 19:40:50.897952] I [changelogagent(agent):73:__init__] 
ChangelogAgent: Agent listining...
[2016-09-13 19:40:53.674088] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up xsync change detection mode
[2016-09-13 19:40:53.674504] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:53.675293] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up changelog change detection mode
[2016-09-13 19:40:53.675581] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:53.676620] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up changeloghistory change detection mode
[2016-09-13 19:40:53.676872] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:55.705611] I [master(/data/cloud-pro/brick):1249:register] 
_GMaster: xsync temp directory: 
/var/lib/misc/glusterfsd/cloud-pro/ssh%3A%2F%2Froot%40192.168.20.107%3Agluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo/6b844f56a11ecd24d5e36242f045e58c/xsync
[2016-09-13 19:40:55.705899] I 
[resource(/data/cloud-pro/brick):1491:service_loop] GLUSTER: Register time: 
1473795655
[2016-09-13 19:40:55.739295] I [master(/data/cloud-pro/brick):510:crawlwrap] 
_GMaster: primary master with volume id d99af2fa-439b-4a21-bf3a-38f3849f87ec ...
[2016-09-13 19:40:55.746989] I [master(/data/cloud-pro/brick):519:crawlwrap] 
_GMaster: crawl interval: 1 seconds
[2016-09-13 19:40:55.766885] I [master(/data/cloud-pro/brick):1163:crawl] 
_GMaster: starting history crawl... turns: 1, stime: (1473688054, 0), etime: 
1473795655
[2016-09-13 19:40:56.769995] I [master(/data/cloud-pro/brick):1192:crawl] 
_GMaster: slave's time: (1473688054, 0)
[2016-09-13 19:40:56.843411] E [repce(/data/cloud-pro/brick):207:__call__] 
RepceClient: call 19400:139711711643392:1473795656.79 (entry_ops) failed on 
peer with OSError
[2016-09-13 19:40:56.843692] E 
[syncdutils(/data/cloud-pro/brick):276:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
201, in main
main_i()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
720, in main_i
local.service_loop(*[r for r in [remote] if r])
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 
1497, in service_loop
g3.crawlwrap(oneshot=True)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
571, in crawlwrap
self.crawl()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1201, in crawl
self.changelogs_batch_process(changes)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1107, in changelogs_batch_process
self.process(batch)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
992, in process
self.process_change(change, done, retry)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
934, in process_change
failures = self.slave.server.entry_ops(entries)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
226, in __call__
return self.ins(self.meth, *a)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
208, in __call__
raise res
OSError: [Errno 39] Directory not empty
[2016-09-13 19:40:56.845477] I [syncdutils(/data/cloud-pro/brick):220:finalize] 
<top>: exiting.
[2016-09-13 19:40:56.847600] I [repce(agent):92:service_loop] RepceServer: 
terminating on reaching EOF.
[2016-09-13 19:40:56.847938] I [syncdutils(agent):220:finalize] <top>: exiting.
[2016-09-13 19:40:57.678493] I [monitor(monitor):343:monitor] Monitor: 
worker(/data/cloud-pro/brick) died in startup phase
[2016-09-13 19:41:07.840516] I [monitor(monitor):266:monitor] Monitor: 
------------------------------------------------------------
[2016-09-13 19:41:07.840862] I [monitor(monitor):267:monitor] Monitor: starting 
gsyncd worker
[2016-09-13 19:41:07.934971] I [gsyncd(/data/cloud-pro/brick):710:main_i] <top>: 
syncing: gluster://localhost:cloud-pro -> 
ssh://r...@gfs1geo.domain.tld:gluster://localhost:cloud-pro-geo
[2016-09-13 19:41:07.935297] I [changelogagent(agent):73:__init__] 
ChangelogAgent: Agent listining...
[2016-09-13 19:41:10.714098] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up xsync change detection mode
[2016-09-13 19:41:10.714518] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:41:10.715337] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up changelog change detection mode
[2016-09-13 19:41:10.715604] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:41:10.716655] I [master(/data/cloud-pro/brick):83:gmaster_builder] 
<top>: setting up changeloghistory change detection mode
[2016-09-13 19:41:10.716928] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:41:12.747504] I [master(/data/cloud-pro/brick):1249:register] 
_GMaster: xsync temp directory: 
/var/lib/misc/glusterfsd/cloud-pro/ssh%3A%2F%2Froot%40192.168.20.107%3Agluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo/6b844f56a11ecd24d5e36242f045e58c/xsync
[2016-09-13 19:41:12.747813] I 
[resource(/data/cloud-pro/brick):1491:service_loop] GLUSTER: Register time: 
1473795672
[2016-09-13 19:41:12.781532] I [master(/data/cloud-pro/brick):510:crawlwrap] 
_GMaster: primary master with volume id d99af2fa-439b-4a21-bf3a-38f3849f87ec ...
[2016-09-13 19:41:12.796451] I [master(/data/cloud-pro/brick):519:crawlwrap] 
_GMaster: crawl interval: 1 seconds
[2016-09-13 19:41:12.816945] I [master(/data/cloud-pro/brick):1163:crawl] 
_GMaster: starting history crawl... turns: 1, stime: (1473688054, 0), etime: 
1473795672
[2016-09-13 19:41:13.819705] I [master(/data/cloud-pro/brick):1192:crawl] 
_GMaster: slave's time: (1473688054, 0)
[2016-09-13 19:41:13.890348] E [repce(/data/cloud-pro/brick):207:__call__] 
RepceClient: call 27102:140491138873088:1473795673.84 (entry_ops) failed on 
peer with OSError
[2016-09-13 19:41:13.890587] E 
[syncdutils(/data/cloud-pro/brick):276:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
201, in main
main_i()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
720, in main_i
local.service_loop(*[r for r in [remote] if r])
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 
1497, in service_loop
g3.crawlwrap(oneshot=True)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
571, in crawlwrap
self.crawl()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1201, in crawl
self.changelogs_batch_process(changes)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1107, in changelogs_batch_process
self.process(batch)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
992, in process
self.process_change(change, done, retry)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
934, in process_change
failures = self.slave.server.entry_ops(entries)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
226, in __call__
return self.ins(self.meth, *a)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
208, in __call__
raise res
OSError: [Errno 39] Directory not empty
[2016-09-13 19:41:13.892209] I [syncdutils(/data/cloud-pro/brick):220:finalize] 
<top>: exiting.
[2016-09-13 19:41:13.894195] I [repce(agent):92:service_loop] RepceServer: 
terminating on reaching EOF.
[2016-09-13 19:41:13.894532] I [syncdutils(agent):220:finalize] <top>: exiting.
[2016-09-13 19:41:14.718497] I [monitor(monitor):343:monitor] Monitor: 
worker(/data/cloud-pro/brick) died in startup phase




On Wednesday, September 14, 2016 8:46 AM, Aravinda <avish...@redhat.com> wrote:
Thanks, but I couldn't find the output of one of the log files. To get the
log file path:

gluster volume geo-replication <MASTER> <SLAVEHOST>::<SLAVEVOL> config log_file
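
For this setup that would be something like (volume names taken from the
logs earlier in this thread):

gluster volume geo-replication cloud-pro gfs1geo.domain.tld::cloud-pro-geo config log_file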

regards
Aravinda


On Wednesday 14 September 2016 12:02 PM, ML mail wrote:
Dear Aravinda,

As requested, I have attached to this mail a file containing the last
100-300 lines of the three files located in the requested directory on the
master node.

Let me know if you did not receive the file; I am not sure whether it is
possible to attach files on this mailing list.

Regards,
ML




On Wednesday, September 14, 2016 6:14 AM, Aravinda <avish...@redhat.com> wrote:
Please share the logs from the Master node which is Faulty
(/var/log/glusterfs/geo-replication/<mastervol>_<slavehost>_<slavevol>/*.log).
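
For example (a sketch; substitute the actual master volume, slave host and
slave volume names in the path):

tail -n 300 /var/log/glusterfs/geo-replication/<mastervol>_<slavehost>_<slavevol>/*.log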

regards
Aravinda


On Wednesday 14 September 2016 01:10 AM, ML mail wrote:
Hi,

I just discovered that one of my replicated GlusterFS volumes is not being
geo-replicated to my slave node (STATUS: Faulty). The log file on the
geo-rep slave node indicates an error with a directory which seems not to
be empty. Below you will find the full log entry for this problem, which
gets repeated every 5 seconds.

I am using GlusterFS 3.7.12 on Debian 8 with FUSE mount on the clients.


How can I fix this issue?

Thanks in advance for your help

Regards
ML


Log File: 
/var/log/glusterfs/geo-replication-slaves/d99af2fa-439b-4a21-bf3a-38f3849f87ec:gluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo.gluster.log

[2016-09-13 19:30:52.098881] I [MSGID: 100030] [glusterfsd.c:2338:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.12 
(args: /usr/sbin/glusterfs --aux-gfid-mount --acl 
--log-file=/var/log/glusterfs/geo-replication-slaves/d99af2fa-439b-4a21-bf3a-38f3849f87ec:gluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo.gluster.log
 --volfile-server=localhost --volfile-id=cloud-pro-geo --client-pid=-1 
/tmp/gsyncd-aux-mount-X9XX0v)
[2016-09-13 19:30:52.109030] I [MSGID: 101190] 
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1
[2016-09-13 19:30:52.111565] I [graph.c:269:gf_add_cmdline_options] 
0-cloud-pro-geo-md-cache: adding option 'cache-posix-acl' for volume 
'cloud-pro-geo-md-cache' with value 'true'
[2016-09-13 19:30:52.113344] I [MSGID: 101190] 
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 2
[2016-09-13 19:30:52.113710] I [MSGID: 114020] [client.c:2106:notify] 
0-cloud-pro-geo-client-0: parent translators are ready, attempting connect on 
transport
Final graph:
+------------------------------------------------------------------------------+
1: volume cloud-pro-geo-client-0
2:     type protocol/client
3:     option ping-timeout 42
4:     option remote-host gfs1geo.domain.tld
5:     option remote-subvolume /data/cloud-pro-geo/brick
6:     option transport-type socket
7:     option username 92671588-f829-4c03-a80f-6299b059452e
8:     option password e1d425d4-dfe7-477e-8dc1-3704c9c9df83
9:     option send-gids true
10: end-volume
11:
12: volume cloud-pro-geo-dht
13:     type cluster/distribute
14:     subvolumes cloud-pro-geo-client-0
15: end-volume
16:
17: volume cloud-pro-geo-write-behind
18:     type performance/write-behind
19:     subvolumes cloud-pro-geo-dht
20: end-volume
21:
22: volume cloud-pro-geo-read-ahead
23:     type performance/read-ahead
24:     subvolumes cloud-pro-geo-write-behind
25: end-volume
26:
27: volume cloud-pro-geo-readdir-ahead
28:     type performance/readdir-ahead
29:     subvolumes cloud-pro-geo-read-ahead
30: end-volume
31:
32: volume cloud-pro-geo-io-cache
33:     type performance/io-cache
34:     subvolumes cloud-pro-geo-readdir-ahead
35: end-volume
36:
37: volume cloud-pro-geo-quick-read
38:     type performance/quick-read
39:     subvolumes cloud-pro-geo-io-cache
40: end-volume
41:
42: volume cloud-pro-geo-open-behind
43:     type performance/open-behind
44:     subvolumes cloud-pro-geo-quick-read
45: end-volume
46:
47: volume cloud-pro-geo-md-cache
48:     type performance/md-cache
49:     option cache-posix-acl true
50:     subvolumes cloud-pro-geo-open-behind
51: end-volume
52:
53: volume cloud-pro-geo
54:     type debug/io-stats
55:     option log-level INFO
56:     option latency-measurement off
57:     option count-fop-hits off
58:     subvolumes cloud-pro-geo-md-cache
59: end-volume
60:
61: volume posix-acl-autoload
62:     type system/posix-acl
63:     subvolumes cloud-pro-geo
64: end-volume
65:
66: volume gfid-access-autoload
67:     type features/gfid-access
68:     subvolumes posix-acl-autoload
69: end-volume
70:
71: volume meta-autoload
72:     type meta
73:     subvolumes gfid-access-autoload
74: end-volume
75:
+------------------------------------------------------------------------------+
[2016-09-13 19:30:52.115096] I [rpc-clnt.c:1868:rpc_clnt_reconfig] 
0-cloud-pro-geo-client-0: changing port to 49154 (from 0)
[2016-09-13 19:30:52.121610] I [MSGID: 114057] 
[client-handshake.c:1437:select_server_supported_programs] 
0-cloud-pro-geo-client-0: Using Program GlusterFS 3.3, Num (1298437), Version 
(330)
[2016-09-13 19:30:52.121911] I [MSGID: 114046] 
[client-handshake.c:1213:client_setvolume_cbk] 0-cloud-pro-geo-client-0: 
Connected to cloud-pro-geo-client-0, attached to remote volume 
'/data/cloud-pro-geo/brick'.
[2016-09-13 19:30:52.121933] I [MSGID: 114047] 
[client-handshake.c:1224:client_setvolume_cbk] 0-cloud-pro-geo-client-0: Server 
and Client lk-version numbers are not same, reopening the fds
[2016-09-13 19:30:52.128072] I [fuse-bridge.c:5172:fuse_graph_setup] 0-fuse: 
switched to graph 0
[2016-09-13 19:30:52.128139] I [MSGID: 114035] 
[client-handshake.c:193:client_set_lk_version_cbk] 0-cloud-pro-geo-client-0: 
Server lk version = 1
[2016-09-13 19:30:52.128258] I [fuse-bridge.c:4083:fuse_init] 0-glusterfs-fuse: 
FUSE inited with protocol versions: glusterfs 7.22 kernel 7.23
[2016-09-13 19:30:57.474905] I [MSGID: 109066] [dht-rename.c:1568:dht_rename] 
0-cloud-pro-geo-dht: renaming /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 
2016.xlsx.ocTransferId1333449197.part 
(hash=cloud-pro-geo-client-0/cache=cloud-pro-geo-client-0) => 
/.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx 
(hash=cloud-pro-geo-client-0/cache=cloud-pro-geo-client-0)
[2016-09-13 19:30:57.475649] W [fuse-bridge.c:1787:fuse_rename_cbk] 0-glusterfs-fuse: 
25: /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 
2016.xlsx.ocTransferId1333449197.part -> 
/.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx => -1 (Directory not 
empty)
[2016-09-13 19:30:57.506563] I [fuse-bridge.c:5013:fuse_thread_proc] 0-fuse: 
unmounting /tmp/gsyncd-aux-mount-X9XX0v
[2016-09-13 19:30:57.507269] W [glusterfsd.c:1251:cleanup_and_exit] 
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x80a4) [0x7efc7498b0a4] 
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7efc75beb725] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x57) [0x7efc75beb5a7] ) 0-: received signum 
(15), shutting down
[2016-09-13 19:30:57.507293] I [fuse-bridge.c:5720:fini] 0-fuse: Unmounting 
'/tmp/gsyncd-aux-mount-X9XX0v'.
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
