I'm struggling to keep bricks online; they keep crashing almost
immediately after I force the volume online. Any suggestions for
troubleshooting?
Here is an excerpt from the log of one of the bricks going offline.
[2020-03-12 03:46:03.589778] W [socket.c:774:__socket_rwv]
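For anyone reproducing this, the usual sequence for checking brick state
and restarting crashed brick processes is roughly the following; VOLNAME
and the brick log name are placeholders for the real values:

  # List brick processes; the "Online" column should show "Y" for each brick
  gluster volume status VOLNAME

  # Start only the offline brick processes, leaving running ones untouched
  gluster volume start VOLNAME force

  # Each brick writes its own log; the crash reason is usually at the tail
  tail -n 100 /var/log/glusterfs/bricks/<brick-path>.log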
On March 11, 2020 10:17:05 PM GMT+02:00, "Etem Bayoğlu" wrote:
Hi Strahil,
Thank you for your response. When I tail the logs on both master and
slave, I get this:
On the slave, from the
/var/log/glusterfs/geo-replication-slaves//mnt-XXX.log file:
[2020-03-11 19:53:32.721509] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(-->
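For context, watching both ends of a geo-replication session usually
comes down to something like the following; MASTERVOL, SLAVEHOST,
SLAVEVOL, and the <session> directory are placeholders for the real names:

  # On the master: worker status plus the master-side session logs
  gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status
  tail -f /var/log/glusterfs/geo-replication/<session>/*.log

  # On the slave: the aux-mount log quoted above
  tail -f /var/log/glusterfs/geo-replication-slaves/<session>/mnt-XXX.log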
On March 11, 2020 4:27:58 PM GMT+02:00, Pat Haley wrote:
Hi,
I was able to successfully reset cluster.min-free-disk. That only made
the "No space left on device" problem intermittent instead of constant.
I then looked at the brick log files again and noticed the "No space ..."
error recorded for files that I knew nobody was accessing. gluster volume
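For reference, the option being discussed is inspected, changed, and
reset roughly as follows, with VOLNAME and the brick path as
placeholders; since "No space left on device" can also come from inode
exhaustion, both checks on the brick filesystem are worth running:

  # Inspect, change, or reset the reserved-space watermark
  gluster volume get VOLNAME cluster.min-free-disk
  gluster volume set VOLNAME cluster.min-free-disk 5%
  gluster volume reset VOLNAME cluster.min-free-disk

  # ENOSPC can mean either blocks or inodes are exhausted on the brick
  df -h /path/to/brick
  df -i /path/to/brick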
Hi deepu,
Can you please check that all bricks are up on both the master and slave
side? It would also be helpful if you could share the geo-rep mount logs
from the slave.
/sunny
On Wed, Mar 11, 2020 at 6:19 AM deepu srinivasan wrote:
>
> Hi Sunny
> Please update on this issue.
>
> On Tue, Mar 10, 2020 at 11:51 AM
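The checks being requested map roughly onto these commands, with
MASTERVOL, SLAVEHOST, and SLAVEVOL as placeholders:

  # Run on both master and slave clusters; every brick should be Online "Y"
  gluster volume status

  # Per-worker session state (Active/Passive/Faulty) and crawl status
  gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status detail

  # The slave-side mount logs to share live under this directory
  ls /var/log/glusterfs/geo-replication-slaves/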
On March 11, 2020 10:09:27 AM GMT+02:00, "Etem Bayoğlu" wrote:
Hello community,
I've set up a glusterfs geo-replication node for disaster recovery. I
manage about 10TB of media data on a gluster volume, and I want to sync
all of it to a remote location over the WAN. So, I created a slave volume
at a disaster recovery center in a remote location and I've started
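For readers new to the feature, a session like the one described is
typically created from the master side roughly as follows, assuming the
slave volume already exists and passwordless SSH to the slave host is set
up; MASTERVOL, SLAVEHOST, and SLAVEVOL are placeholders:

  # Distribute the pem keys and create the session
  gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL create push-pem

  # Start syncing, then confirm the workers come up
  gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL start
  gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status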