Might be too late, but a simple solution that almost always works for such
cases is rebuilding .glusterfs:
remove it and query the attrs of all files again; that will recreate
.glusterfs on all bricks.
Something like what is mentioned here:
https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
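A rough sketch of that procedure, assuming a volume named gv0, a brick at
/data/brick1, and a client mount at /mnt/gv0 (all names are placeholders;
try it on a test volume first):

    # stop the volume so the bricks are quiet while .glusterfs is removed
    gluster volume stop gv0
    # on each brick server, drop the .glusterfs metadata tree
    rm -rf /data/brick1/.glusterfs
    gluster volume start gv0
    # query an xattr on every file through a client mount; the lookups
    # recreate the .glusterfs entries on the bricks
    find /mnt/gv0 -exec getfattr -n trusted.glusterfs.pathinfo {} \; > /dev/null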
Hi,
What version of gluster are you using?
1. The afr xattrs on '/' indicate a meta-data split-brain. You can
resolve it using one of the policies listed in
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/
For example, "|gluster volume heal gv0 split-brain
Hi all,
We've been running community-supported Gluster for a few years, and now
we've bought support subscriptions for RHGS.
We currently have a 3-node system (2 replicas plus quorum) in production
hosting several volumes with a TB or so of data.
I've logged a support ticket requesting the best
I’ve now tested 3.12.11 on my CentOS 7.5 oVirt dev cluster, and all appears
good. It should be safe to move from -test to release for centos-gluster312.
Thanks!
-Darrell
> From: Jiffin Tony Thottan
> Subject: [Gluster-users] Announcing Glusterfs release 3.12.10 (Long Term
> Maintenance)
>
On 1 July 2018 at 22:37, Ashish Pandey wrote:
>
> The only problem at the moment is that the arbiter brick is offline. Your
> priority should be to complete the maintenance of the arbiter brick ASAP.
> Bring this brick UP, start a full heal or an index heal, and the volume
> will be back in a healthy state.
>
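For reference, assuming the volume is named gv0, the two heal variants
mentioned above are started with:

    # index heal: process only the entries already marked for healing
    gluster volume heal gv0
    # full heal: crawl the entire volume
    gluster volume heal gv0 full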
Actually, we just discovered that the heal info command returns different
things when executed on the different nodes of our 3-replica setup.
When we execute it on node2 we do not see "/" reported in split-brain, but
when I execute it on node0 and node1 I see:
x@gfs-vm001:~$ sudo
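Per-node heal status is queried with something like the following (the
volume name gv0 is an assumption):

    sudo gluster volume heal gv0 info
    # and, for split-brain entries only:
    sudo gluster volume heal gv0 info split-brain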
I am trying to mount a gluster volume over NFS and I get a mount.nfs
failure. Looking at nfs.log, I am seeing these entries:
Heal info does not show the mentioned gfid (----0001) as being in
split-brain.
[2018-07-03 18:16:27.694953] W [MSGID: 112199]
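One thing worth checking for mount.nfs failures against gluster: the
built-in gluster NFS server only speaks NFSv3, while mount.nfs may try v4
first. A minimal sketch of an explicit v3 mount (hostname and volume name
are placeholders):

    mount -t nfs -o vers=3,mountproto=tcp server1:/gv0 /mnt/nfs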
This announcement is to publish changes to the upstream release cadence
from quarterly (every 3 months) to every 4 months, and to have all
releases maintained (no more LTM/STM releases), based on the maintenance
and EOL schedules for the same.
Further, it is to start numbering releases with just
Dear Sanoj,
thank you very much for your support. I just downloaded and executed the
script you suggested. This is the full command I executed:
./quota_fsck_new.py --full-logs --sub-dir /tier2/CSP/ans004/ /gluster
In the attachment, you can find the logs generated by the script.
What can I do now? Thank you
Hi Mauro,
This may be an issue with the update of backend xattrs.
To RCA further and provide a resolution, could you provide me with the logs
from running the following fsck script:
https://review.gluster.org/#/c/19179/6/extras/quota/quota_fsck.py
Try running the script and revert with the logs.