As an update on this thread - I was able to work around the issue.
I discovered that nearly all of the problematic files were coming from
one directory. I deleted that directory from the new servers, and
eventually geo-replication completed to the backup servers and stayed in
sync.
The ACL issue could be related to the destination, but it could also be a
red herring.
Have you checked whether the master and slave nodes are in time sync?
My guess is that you are not willing to run a full sync, as described here:
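On the time-sync question: a quick way to check is to compare epoch seconds on both nodes and look at the absolute skew. A minimal sketch (the hostnames in the comments are hypothetical, not from this thread):

```shell
# Report the absolute difference between two epoch-second timestamps.
# Anything beyond a second or two of skew between master and slave is
# worth fixing before digging further.
abs_skew() {
  local a=$1 b=$2 d
  d=$(( a - b ))
  [ "$d" -lt 0 ] && d=$(( -d ))
  echo "$d"
}

# Against real nodes you would gather the timestamps over ssh, e.g.:
#   t_master=$(ssh master01 date +%s)
#   t_slave=$(ssh slave01 date +%s)
#   echo "clock skew: $(abs_skew "$t_master" "$t_slave")s"
abs_skew 1601928000 1601927997
```

Running chrony or ntpd on both sides is the proper fix; this is just a sanity check.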
Thanks - I found the file path from the GFID - but I don't see any weird
xattr's:
[root@storage01 ~]# mkdir /mnt/storage2-gfid
[root@storage01 ~]# mount -t glusterfs -o aux-gfid-mount
10.0.231.91:/storage /mnt/storage2-gfid
[root@storage01 ~]# getfattr -n trusted.glusterfs.pathinfo -e text
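For reference, the usual pattern with the aux-gfid mount is to query the virtual .gfid/ directory; a sketch (the GFID below is a made-up example, not one from this thread):

```shell
# With the aux-gfid mount in place, a GFID resolves to a path like this:
#   getfattr -n trusted.glusterfs.pathinfo -e text \
#       /mnt/storage2-gfid/.gfid/f46ec1ba-1234-5678-9abc-def012345678
#
# On a brick, the same GFID also resolves to a hardlink under .glusterfs/,
# bucketed by its first two hex-digit pairs:
gfid_backend_path() {
  local gfid=$1
  echo ".glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}"
}
gfid_backend_path f46ec1ba-1234-5678-9abc-def012345678
```

That backend path is handy when you want to inspect the file's xattrs directly on the brick with getfattr -d -m . -e hex.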
I have updated to Gluster 7.8 and re-enabled the open-behind option, but
am still getting the same errors:
* dict set of key for set-ctime-mdata failed
* gfid different on the target file on pcic-backup-readdir-ahead-1
* remote operation failed [No such file or directory]
Any suggestions?
Further to this - after rebuilding the slave volume with the xattr=sa
option and destroying and restarting the geo-replication sync, I am
still getting "extended attribute not supported by the backend storage"
errors:
[root@storage01 storage_10.0.231.81_pcic-backup]# tail
Dear Matthew,
from my current experience with Gluster geo-replication, and since this
is key to your backup procedure (or so it seems to me), I would set up a
new one, just to be sure.
Regards,
Felix
On 05/10/2020 22:28, Matthew Benstead wrote:
Hmm... Looks like I forgot to set the xattr's to sa - I left them as
default.
[root@pcic-backup01 ~]# zfs get xattr pcic-backup01-zpool
NAME                 PROPERTY  VALUE  SOURCE
pcic-backup01-zpool  xattr     on     default
[root@pcic-backup02 ~]# zfs get xattr pcic-backup02-zpool
NAME
Dear Matthew,
this is our configuration:
zfs get all mypool
mypool  xattr    sa        local
mypool  acltype  posixacl  local
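For reference, a sketch of how those properties would be applied, assuming the pool name from earlier in the thread (adjust for each backup host):

```shell
# Store xattrs in the inode ("sa") instead of hidden directories - much
# faster for Gluster's heavy xattr use - and enable POSIX ACLs.
# Note: xattr=sa only affects files written after the change; existing
# data keeps directory-based xattrs unless it is rewritten.
zfs set xattr=sa pcic-backup01-zpool
zfs set acltype=posixacl pcic-backup01-zpool
zfs get xattr,acltype pcic-backup01-zpool
```

The caveat in the comment is why rebuilding the slave volume (rather than just flipping the property) tends to be the safer route.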
Something more to consider?
Regards,
Felix
On 05/10/2020 21:11, Matthew Benstead wrote:
Thanks Felix - looking through some more of the logs I may have found
the reason...
From
/var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/mnt-data-storage_a-storage.log
[2020-10-05 18:13:35.736838] E [fuse-bridge.c:4288:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
Dear Matthew,
can you provide more information from the geo-replication brick logs?
These files are also located in:
/var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/
Usually, these log files are more precise for figuring out the root cause
of the error.
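A quick way to surface the relevant entries is to pull only the error-level ("E") lines from that session's logs; a sketch:

```shell
# Show the 20 most recent error-level lines from the geo-replication
# brick logs for this session (path as quoted above).
grep -h ' E \[' \
    /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/*.log \
    2>/dev/null | tail -n 20
```

The component name in brackets (e.g. fuse-bridge.c, changelog) usually points at which layer is failing.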
Hello,
I'm looking for some help with a GeoReplication Error in my Gluster
7/CentOS 7 setup. Replication progress has basically stopped, and the
status of the replication keeps switching.
The gsyncd log has errors like "Operation not permitted", "incomplete
sync", etc. I'm not sure how