On 01/26/2011 07:25 PM, David Lloyd wrote:
Well, I did this and it seems to have worked. I was just guessing, really; I didn't have any documentation or advice from anyone in the know.
I just reset the attributes on the root directory of each brick where they were not all zeroes.
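For reference, a hedged sketch of that kind of inspection and reset. The brick path and the volume/client name here are hypothetical placeholders, not taken from this thread; substitute your own, and note that the trusted.* namespace requires root on the brick server:

```shell
# Dump the AFR changelog xattrs on a brick's root directory, in hex
# (hypothetical brick path):
getfattr -d -m trusted.afr -e hex /data/brick1

# Reset a non-zero changelog back to twelve zero bytes
# (hypothetical xattr name; use the one getfattr actually printed):
setfattr -n trusted.afr.glustervol1-client-0 \
         -v 0x000000000000000000000000 /data/brick1
```

This only clears the pending-change counters on that brick; it does not copy any data, so it is only sensible when you are certain which copy is good.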
I found it easier to
Yes, it seemed really dangerous to me too. But with the lack of
documentation, and lack of response from gluster (and the data is still on
the old system too), I thought I'd give it a shot.
Thanks for the explanation. The split-brain problem seems to come up fairly
regularly, but I've not found
David,
The problem you are facing is something we are already investigating. We haven't root-caused it yet, but from what we have seen, it happens only on / and only for the metadata changelog. It shows up as annoying log messages, but it should not affect your functionality.
Avati
These errors are appearing in the file /var/log/glusterfs/mountpoint.log
[2011-01-26 11:02:10.342349] I [afr-common.c:672:afr_lookup_done]
pfs-ro1-replicate-5: split brain detected during lookup of /.
[2011-01-26 11:02:10.342366] I [afr-common.c:716:afr_lookup_done]
pfs-ro1-replicate-5:
We started getting the same problem at almost exactly the same time. I get one of these messages every time I access the root of the mounted volume (and nowhere else, I think).
This is also 3.1.1
I'm just starting to look into it; I'll let you know if I get anywhere.
David
On 26 January 2011
I read on another thread about checking the getfattr output for each brick, but it tailed off before any explanation of what to do with this information.
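For what it's worth, my understanding is that each trusted.afr.&lt;volume&gt;-client-N value that getfattr prints is 12 bytes: three big-endian 32-bit counters of pending data, metadata and entry operations that this brick believes the other replica still needs. A small decoder (a sketch; the function name is mine, not part of any gluster tool):

```python
import struct

def decode_afr_changelog(hexval):
    # trusted.afr.* values are 12 bytes: three 32-bit big-endian
    # counters for pending data, metadata and entry operations.
    raw = bytes.fromhex(hexval.replace("0x", ""))
    data, metadata, entry = struct.unpack(">III", raw)
    return {"data": data, "metadata": metadata, "entry": entry}

# A non-zero metadata counter matches the symptom in this thread:
decode_afr_changelog("0x000000000000000100000000")
# -> {'data': 0, 'metadata': 1, 'entry': 0}
```

If the counters for the same file disagree across bricks in a way where each side blames the other, that is the split-brain case; all-zero values mean nothing is pending.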
We have 8 bricks in the volume. Config is:
g1:~ # gluster volume info glustervol1
Volume Name: glustervol1
Type: Distributed-Replicate
Status: