Dear list,
Today I received a "Directory not empty" error while trying to remove a
directory from the FUSE mount of a distribute–replicate volume. Looking in
the directory I found a few files with question marks:
-?? ? ? ? ?? ._Log.out
I checked the volume heal
Thank you for replying!
> Okay so 0-cm_shared-replicate-1 means these 3 bricks:
>
> Brick4: 172.23.0.6:/data/brick_cm_shared
> Brick5: 172.23.0.7:/data/brick_cm_shared
> Brick6: 172.23.0.8:/data/brick_cm_shared
The above is correct.
> Were there any pending self-heals for this volume? Is it
On 16/09/19 7:34 pm, Erik Jacobson wrote:
Example errors:
ex1
[2019-09-06 18:26:42.665050] E [MSGID: 108008]
[afr-read-txn.c:123:afr_read_txn_refresh_done] 0-cm_shared-replicate-1: Failing
ACCESS on gfid ee3f5646-9368-4151-92a3-5b8e7db1fbf9: split-brain observed.
[Input/output error]
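A minimal sketch of mapping such a gfid back to a file name, run directly on one of the replicate-1 bricks (e.g. on 172.23.0.6; the shell variable below is just for illustration):
GFID=ee3f5646-9368-4151-92a3-5b8e7db1fbf9
# the gfid link lives under <brick>/.glusterfs/<first 2 chars>/<next 2 chars>/<gfid>
ls -l /data/brick_cm_shared/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
# find the real path(s) on that brick sharing the same inode (regular files only;
# for directories the .glusterfs entry is a symlink that points at the parent path)
find /data/brick_cm_shared -samefile /data/brick_cm_shared/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID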
Okay
Hello all. I'm new to the list but not to gluster.
We are using gluster to service NFS boot on a top500 cluster. It is a
Distributed-Replicate volume 3x9.
We are having a problem when one server in a subvolume goes down, we get
random missing files and split-brain errors in the nfs.log file.
We
Hi,
unfortunately I have a directory in split-brain state.
If I do a
gluster volume heal split-brain source-brick
gluster:/glusterfs/brick
I get a
socket not connected.
How can I manually heal that directory?
Best regards and thanks in advance
Bene
--
forumZFD
Entschieden für
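For reference, a rough sketch of the CLI-based resolution syntax on gluster >= 3.7 (volume name, brick and path below are placeholders):
# heal one split-brained entry, choosing one brick's copy as the source
gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME>:<BRICKPATH> <PATH-AS-SEEN-FROM-MOUNT>
# or heal every split-brained entry on the volume from that brick
gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME>:<BRICKPATH>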
Next I ran a test and your find works. I am wondering if I can simply
delete this GFID?
8><-
[root@glusterp2 fb]# find /bricks/brick1/gv0/ -samefile
/bricks/brick1/gv0/.glusterfs/74/75/7475bd15-05a6-45c2-b395-bc9fd3d1763f
I tried looking for a file of the same size and the gfid doesn't show up,
8><---
[root@glusterp2 fb]# pwd
/bricks/brick1/gv0/.glusterfs/ea/fb
[root@glusterp2 fb]# ls -al
total 3130892
drwx------. 2 root root 64 May 22 13:01 .
drwx------. 4 root root 24 May 8 14:27 ..
-rw-------. 1
I tried this already.
8><---
[root@glusterp2 fb]# find /bricks/brick1/gv0 -samefile
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root@glusterp2 fb]#
8><---
gluster 4
Centos 7.4
8><---
df -h
Hi,
Which version of gluster are you using?
You can find which file that is using the following command:
find <brick-path> -samefile <brick-path>/.glusterfs/<first-two-chars-of-gfid>/<next-two-chars-of-gfid>/<gfid>
Please provide the getfattr output of the file which is in split brain.
The steps to recover from split-brain can be found here,
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is?
https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
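For example, with the brick and gfid from this thread (a sketch; adjust the brick path to your own):
# which real file shares an inode with the gfid hard link?
find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
# if only the .glusterfs path comes back, check the hard link count:
# a count of 1 on a regular file means the named file no longer exists on this brick
stat -c '%h %s %n' /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693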
On May 21, 2018 3:22:01 PM PDT, Thing wrote:
>Hi,
>
>I seem to have a split brain issue, but I cannot figure out where this
>is
>and what it is,
Hi,
I seem to have a split brain issue, but I cannot figure out where this is
and what it is. Can someone help me, please? I can't find what to fix here.
==
root@salt-001:~# salt gluster* cmd.run 'df -h'
glusterp2.graywitch.co.nz:
Filesystem
Hi,
I am having a problem with a split-brain issue that does not seem to match up
with documentation on how to solve it.
gluster volume heal VMData2 info gives:
Brick found2.ssd.org:/data/brick6/data
Hey,
From the getfattr output you have provided, the directory is clearly not in
split brain.
If all the bricks are being blamed by the others, then it is called split brain.
In your case only client-13, that is Brick-14 in the volume info output, had
a pending entry heal on the directory.
That is the
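For anyone trying to read these xattrs themselves, a rough sketch (the directory path is made up): each brick stores trusted.afr.<volume>-client-<N> counters for the other replicas, and a non-zero counter means "I blame replica N".
# run on each brick that holds the directory
getfattr -d -m trusted.afr -e hex /data/brick6/data/path/to/dir
# hypothetical split-brain: both sides blame each other, e.g.
#   on brick A: trusted.afr.VMData2-client-1=0x000000000000000000000001
#   on brick B: trusted.afr.VMData2-client-0=0x000000000000000000000001
# if only one brick carries a pending count and nothing blames it back,
# that is an ordinary pending heal, not split-brain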
Hello,
I'm trying to fix an issue with a directory split-brain on gluster 3.10.3. The
effect is that a specific file in this split directory is randomly
unavailable on some clients.
I have gathered all the information in this gist:
On 03/20/2017 06:31 PM, Bernhard Dübi wrote:
Hi Ravi,
thank you very much for looking into this
The gluster volumes are used by CommVault Simpana to store backup
data. Nothing/Nobody should access the underlying infrastructure.
while looking at the xattrs of the files, I noticed that the
On 07/15/2016 08:23 PM, Rob Janney wrote:
Currently we are on 3.6: glusterfs 3.6.9 built on Mar 2 2016 18:21:17
That version does not have the real-time gfapi based version of the
`info split-brain` command and it just prints prior event history from
the selfheal daemon's memory. You can
Currently we are on 3.6: glusterfs 3.6.9 built on Mar 2 2016 18:21:17
Yes, the file gfid file still was present at
/.glusterfs/ab/da/abdab36b-b90b-4201-98fe-7a36059da81d
On Thu, Jul 14, 2016 at 7:51 PM, Ravishankar N
wrote:
>
>
> On 07/15/2016 02:24 AM, Rob Janney
On 07/15/2016 02:24 AM, Rob Janney wrote:
I have a 2 node cluster that reports a single file as split brained ..
but the file itself doesn't exist on either node.
What is the version of gluster you are running?
I did not find this scenario in the split brain docs, but based on
other
I have a 2 node cluster that reports a single file as split brained .. but
the file itself doesn't exist on either node. I did not find this scenario
in the split brain docs, but based on other scenarios I deleted the gfid
file and then ran a full heal, however; it still complains about split
13.07.2016 07:44, Pranith Kumar Karampuri wrote:
On Tue, Jul 12, 2016 at 9:27 PM, Dmitry Melekhov wrote:
12.07.2016 17:38, Pranith Kumar Karampuri wrote:
Did you wait for heals to complete before upgrading second node?
no...
So
13.07.2016 00:20, Darrell Budic wrote:
FYI, it’s my experience that “yum upgrade” will stop the running
glusterd (and possibly the running glusterfsds)
This is the main point: it looks like the glusterfsds are not stopped by the upgrade,
not sure, though...
Thank you!
On Tue, Jul 12, 2016 at 9:27 PM, Dmitry Melekhov wrote:
>
>
> 12.07.2016 17:38, Pranith Kumar Karampuri wrote:
>
> Did you wait for heals to complete before upgrading second node?
>
>
> no...
>
So basically if you have operations in progress on the mount, you should
wait for
FYI, it’s my experience that “yum upgrade” will stop the running glusterd (and
possibly the running glusterfsds) during its installation of new gluster
components. I’ve also noticed it starts them back up again during the process.
I.e., yesterday I upgraded a system to 3.7.13:
systemctl stop
12.07.2016 17:38, Pranith Kumar Karampuri wrote:
Did you wait for heals to complete before upgrading second node?
no...
On Tue, Jul 12, 2016 at 3:08 PM, Dmitry Melekhov wrote:
12.07.2016 13:31, Pranith Kumar Karampuri wrote:
On
Did you wait for heals to complete before upgrading second node?
On Tue, Jul 12, 2016 at 3:08 PM, Dmitry Melekhov wrote:
> 12.07.2016 13:31, Pranith Kumar Karampuri wrote:
>
>
>
> On Mon, Jul 11, 2016 at 2:26 PM, Dmitry Melekhov <
> d...@belkam.com> wrote:
>
12.07.2016 13:31, Pranith Kumar Karampuri wrote:
On Mon, Jul 11, 2016 at 2:26 PM, Dmitry Melekhov wrote:
11.07.2016 12:47, Gandalf Corvotempesta wrote:
2016-07-11 9:54 GMT+02:00 Dmitry Melekhov
On Mon, Jul 11, 2016 at 2:26 PM, Dmitry Melekhov wrote:
> 11.07.2016 12:47, Gandalf Corvotempesta wrote:
>
>> 2016-07-11 9:54 GMT+02:00 Dmitry Melekhov :
>>
>>> We just got split-brain during update to 3.7.13 ;-)
>>>
>> This is an interesting point.
>> Could you
11.07.2016 12:47, Gandalf Corvotempesta wrote:
2016-07-11 9:54 GMT+02:00 Dmitry Melekhov :
We just got split-brain during update to 3.7.13 ;-)
This is an interesting point.
Could you please tell me which replica count did you set ?
3
With replica "3" split brain should not
2016-07-11 9:54 GMT+02:00 Dmitry Melekhov :
> We just got split-brain during update to 3.7.13 ;-)
This is an interesting point.
Could you please tell me which replica count did you set ?
With replica "3" split brain should not occurs, right ?
I'm planning a new cluster and I
On 07/11/2016 01:54 PM, Dmitry Melekhov wrote:
11.07.2016 12:18, Ravishankar N wrote:
On 07/11/2016 01:24 PM, Dmitry Melekhov wrote:
Hello!
We just got split-brain during update to 3.7.13 ;-)
This was the first time we hit this, and we found that it is not
easy to follow and change xattrs
11.07.2016 12:18, Ravishankar N wrote:
On 07/11/2016 01:24 PM, Dmitry Melekhov wrote:
Hello!
We just got split-brain during update to 3.7.13 ;-)
This was the first time we hit this, and we found that it is not easy
to follow and change xattrs metadata.
So, could you tell me, are there any
On 07/11/2016 01:24 PM, Dmitry Melekhov wrote:
Hello!
We just got split-brain during update to 3.7.13 ;-)
This was the first time we hit this, and we found that it is not easy
to follow and change xattrs metadata.
So, could you tell me, are there any plans to simplify this?
Let's say, add
Hello!
We just got split-brain during update to 3.7.13 ;-)
This was the first time we hit this, and we found that it is not easy to
follow and change xattrs metadata.
So, could you tell me, are there any plans to simplify this?
Let's say, add a tool to choose the preferred copy (brick) to heal
You need to follow the steps to understand and resolve split-brains:
https://github.com/gluster/glusterfs/blob/master/doc/debugging/split-brain.md
https://github.com/gluster/glusterfs-specs/blob/master/done/Features/heal-info-and-split-brain-resolution.md
Feel free to let us know if you have
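If it helps, a quick sketch of the CLI described in the second document (gluster >= 3.7; names below are placeholders):
# list what is currently considered split-brained
gluster volume heal <VOLNAME> info split-brain
# resolve one file by keeping the bigger copy
gluster volume heal <VOLNAME> split-brain bigger-file <PATH-AS-SEEN-FROM-MOUNT>
# or by declaring one brick authoritative for that file
gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME>:<BRICKPATH> <PATH-AS-SEEN-FROM-MOUNT>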
On 02/10/2016 02:03 PM, Pranith Kumar Karampuri wrote:
You need to follow the steps to understand and resolve split-brains:
https://github.com/gluster/glusterfs/blob/master/doc/debugging/split-brain.md
Hello, I think that I have a split-brain issue in a replicated-striped
gluster cluster with 4 bricks: brick1+brick2 - brick3+brick4
In both brick3 and brick4 I'm getting these kinds of messages:
[2016-02-08 15:01:49.720343] E
[afr-self-heal-entry.c:246:afr_selfheal_detect_gfid_and_type_mismatch]
On 02/08/2016 09:05 PM, Rodolfo Gonzalez wrote:
Hello, I think that I have a split-brain issue in a replicated-striped
gluster cluster with 4 bricks: brick1+brick2 - brick3+brick4
In both brick3 and brick4 I'm getting these kinds of messages:
[2016-02-08 15:01:49.720343] E
any thoughts?
Miloš
> On 30. 11. 2015 at 23:45, Miloš Kozák wrote:
>
> I have been using Gluster for a few years without any significant issue (after I
> tweaked the configuration for v3.5). My configuration is as follows:
>
> network.remote-dio: enable
> cluster.eager-lock: enable
>
Hi,
Isn't the trusted.afr.dirty attribute missing from Brick 2? Shouldn't it be
increased
and decreased, but never removed?
Could that be the reason why GlusterFS is confused?
What could be the reason for gfid mismatches?
Regards
Andreas
Brick1 getfattr -d -m . -e hex config.ior
# file:
On 08/13/2015 07:42 PM, Andreas Hollaus wrote:
Isn't the trusted.afr.dirty attribute missing from Brick 2? Shouldn't
it be increased and decreased, but never removed?
If one brick of a replica 2 setup is down, and files are written to, the
dirty xattr is never set on the brick that is up.
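A small sketch of what that looks like on disk (the brick path below is made up; config.ior is the file from this thread):
# on the surviving brick, after writes while its peer was down
getfattr -d -m trusted.afr -e hex /export/brick1/config.ior
# you would typically see a pending counter held against the downed replica, e.g.
#   trusted.afr.<volume>-client-1=0x000000020000000000000000   (2 pending data ops blamed on client-1)
# rather than a non-zero trusted.afr.dirty on the brick that stayed up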
On 08/13/2015 07:30 PM, Lakshmi Anusha wrote:
Hello,
We managed to collect the command outputs below:
Brick1 getfattr -d -m . -e hex
/opt/lvmdir/c2/brick/logfiles/security/EVENT_LOG.xml
getfattr: Removing leading '/' from absolute path names
# file:
On 08/11/2015 03:28 PM, Lakshmi Anusha wrote:
From the extended attributes, an entry split-brain seems to have
appeared, since the gfid is different for the original file and its replica.
Can you please let us know why the split-brain files are not shown in
gluster volume heal vol info split-brain
Hello,
We are using glusterfs version 3.7.2.
There are a few I/O errors reported during our testing, and the I/O errors are due
to split-brain files.
We tried to find the split brain files with gluster volume heal vol info
split-brain.
But strangely the command shows that the number of split brain files
.
Cheers,
Peter
From: Ravishankar N [mailto:ravishan...@redhat.com]
Sent: Wednesday, 5 August 2015 12:03 PM
To: Peter Becker; Gluster-users@gluster.org
Subject: Re: [Gluster-users] Split brain after rebooting half of a two-node
cluster
On 08/05/2015 07:01 AM, Peter Becker wrote:
Hi Ravi
as well, although I might be making that one up.
Glad it worked out for you Peter.
Cheers,
Peter
*From:*Ravishankar N [mailto:ravishan...@redhat.com]
*Sent:* Wednesday, 5 August 2015 12:03 PM
*To:* Peter Becker; Gluster-users@gluster.org
*Subject:* Re: [Gluster-users] Split brain after rebooting
On 08/05/2015 04:49 AM, Peter Becker wrote:
qmaster@srvamqpy01:~$ gluster --version
glusterfs 3.2.5 built on Jan 31 2012 07:39:59
FWIW, this is a rather old release. Can you see if the issue is
recurring with glusterfs 3.7?
-Ravi
: [Gluster-users] Split brain after rebooting half of a two-node
cluster
On 08/05/2015 04:49 AM, Peter Becker wrote:
qmaster@srvamqpy01:~$ gluster --version
glusterfs 3.2.5 built on Jan 31 2012 07:39:59
FWIW, this is a rather old release. Can you see if the issue is recurring with
glusterfs 3.7
Thanks, Ravi - I was not aware of that.
I'll update and try again.
Peter
From: Ravishankar N [mailto:ravishan...@redhat.com]
Sent: Wednesday, 5 August 2015 12:03 PM
To: Peter Becker; Gluster-users@gluster.org
Subject: Re: [Gluster-users] Split brain after rebooting half of a two-node
Hello,
We are trying to run a pair of ActiveMQ nodes on top of glusterfs, using the
approach described in
http://activemq.apache.org/shared-file-system-master-slave.html
This seemed to work at first, but if I start rebooting machines while under
load I seem to quickly get into this problem:
that.
Given that it’s a bit involved: are there other avenues we could try
first?
Cheers,
Peter
*From:*Ravishankar N [mailto:ravishan...@redhat.com]
*Sent:* Wednesday, 5 August 2015 11:12 AM
*To:* Peter Becker; Gluster-users@gluster.org
*Subject:* Re: [Gluster-users] Split brain after
Hi,
I have a weird situation. Is this as it should be, or what is going on?
# gluster --version
glusterfs 3.6.2 built on Jan 22 2015 12:58:11
There are 8 files reported in split-brain (1024 times), but they seem to be
identical.
# gluster volume heal glu_rhevtst_dr2_data_01 info split-brain|
On 07/16/2015 01:28 AM, Игорь Бирюлин wrote:
I have studied information on page:
https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md
and cannot solve the split-brain by following these instructions.
I have tested it on gluster 3.6 and it doesn't work, only on
I have studied information on page:
https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md
and cannot solve the split-brain by following these instructions.
I have tested it on gluster 3.6 and it doesn't work, only on gluster 3.7.
I try to use on gluster 3.7.2.
I
On 07/14/2015 11:19 AM, Roman wrote:
Hi,
played with glusterfs tonight and tried to use the recommended XFS for
gluster... the first try was pretty bad and all of my VMs hung (XFS
wants allocsize=64k to create qcow2 files, which I didn't know about,
and I tried to create a VM on XFS without this config
Never mind. I do not have enough time to debug why basic gluster
commands do not work on a production server. Tonight's system
freeze, due to undocumented XFS settings that are a MUST to run glusterfs on
XFS, was enough. I'll rather keep to EXT4. Anyway, XFS for bricks did not solve my
previous
Hello,
a split brain happened a few hours ago; how would you determine which copy
is the newest?
# gluster volume heal 1KVM12_P3 info
Brick 1kvm1:/STORAGES/g1r5p3/GFS/
/7e5ca629-5e97-4220-a6b2-b93242e8f314/dom_md/ids - Is in split-brain
Number of entries: 1
Brick 1kvm2:/STORAGES/g1r5p3/GFS/
On 06/02/2015 09:10 AM, Carl L Hoffman wrote:
Hello - I was wondering if someone could please help me.
I've just setup Gluster 3.6 on two Ubuntu 14.04 hosts. Gluster is setup to
replicate two volumes (prod-volume, dev-volume) between the two hosts.
Replication is working fine. The
Hello - I was wondering if someone could please help me.
I've just setup Gluster 3.6 on two Ubuntu 14.04 hosts. Gluster is setup to
replicate two volumes (prod-volume, dev-volume) between the two hosts.
Replication is working fine. The glustershd.log shows:
[2015-06-02 03:28:04.495162] E
Hi there,
I have a split-brain situation on one of my volumes. The part that I'm not
clear on is that it would appear to be on the root of the volume.
[2015-04-13 22:07:35.729798] E
[afr-self-heal-common.c:2262:afr_self_heal_completion_cbk]
0-www-conf-replicate-0: background meta-data
This is fixed in http://review.gluster.org/9459 and should be available
in 3.7.
As a workaround, you can restart the selfheal daemon process (gluster v
start volname force). This should clear its history.
Thanks,
Ravi
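For instance (the volume name is a placeholder):
# 'force' re-spawns the volume's daemons, including the self-heal daemon,
# even though the volume is already started; this clears its stale in-memory history
gluster volume start <VOLNAME> force
gluster volume heal <VOLNAME> info split-brain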
On 02/20/2015 01:43 PM, Félix de Lelelis wrote:
Hi,
I generated a
Hi,
Sorry for the delay, it took a long time to reproduce. But currently we
have the same issue again. This happened after resetting all nodes.
Quorum is enabled. Logs and details below.
gluster volume heal vm_storage_volume info split-brain
Gathering list of split brain entries on volume
On 12/25/2014 08:05 PM, Alexey wrote:
Hi all,
We are using a glusterfs setup with quorum turned on and the
configuration as follows:
Nodes: 3
Type: Replicate
Number of Bricks: 1 x 3 = 3
cluster.quorum-type: fixed
cluster.quorum-count: 2
cluster.data-self-heal-algorithm: diff
Hi all,
We are using a glusterfs setup with quorum turned on and the configuration
as follows:
Nodes: 3
Type: Replicate
Number of Bricks: 1 x 3 = 3
cluster.quorum-type: fixed
cluster.quorum-count: 2
cluster.data-self-heal-algorithm: diff
cluster.server-quorum-ratio: 51%
glusterfs version:
Ok, no problem. The issue is very rare, even with our setup - we have seen it
only once on one site even though we have been in production for several months
now. For now, we can live with that IMO.
And, thanks again.
Anirban
It is possible, yes, because these are actually a kind of log file. I suppose,
like other logging frameworks these files can remain open for a considerable
period, and then get renamed to support log rotate semantics.
That said, I might need to check with the team that actually manages the
*From: * Pranith Kumar Karampuri pkara...@redhat.com;
*To: * Anirban Ghoshal chalcogen_eg_oxy...@yahoo.com;
gluster-users@gluster.org;
*Subject: * Re: [Gluster-users] Split-brain seen with [0 0] pending
matrix and io-cache page errors
*Sent: * Sun, Oct 19, 2014 5:42:24 AM
On 10/18/2014 04:36 PM, Anirban
I see. Thanks a tonne for the thorough explanation! :) I can see that our setup
would be vulnerable here because the logger on one server is not generally
aware of the state of the replica on the other server. So, it is possible that
the log files may have been renamed before heal had a chance
.
Pranith
Thanks,
Anirban
*From: * Pranith Kumar Karampuri pkara...@redhat.com;
*To: * Anirban Ghoshal chalcogen_eg_oxy...@yahoo.com;
gluster-users@gluster.org;
*Subject: * Re: [Gluster-users] Split-brain seen with [0 0
Hi,
Yes, they do, and considerably. I'd forgotten to mention that in my last
email. Their mtimes, however, as far as I could tell on separate servers,
seemed to coincide.
Thanks,
Anirban
;
*To: * Anirban Ghoshal chalcogen_eg_oxy...@yahoo.com;
gluster-users@gluster.org gluster-users@gluster.org;
*Subject: * Re: [Gluster-users] Split-brain seen with [0 0] pending
matrix and io-cache page errors
*Sent: * Sat, Oct 18, 2014 12:26:08 AM
hi,
Could you see if the size of the file
Hi everyone,
I have this really confusing split-brain here that's bothering me. I am running
glusterfs 3.4.2 over linux 2.6.34. I have a replica 2 volume 'testvol' that is
It seems I cannot read/stat/edit the file in question, and `gluster volume heal
testvol info split-brain` shows nothing.
hi,
Could you see if the size of the file mismatches?
Pranith
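A minimal sketch of that check (the brick paths are made up); run it on both servers and compare:
stat -c 'size=%s links=%h mtime=%y %n' /export/testvol-brick/path/to/file
getfattr -n trusted.gfid -e hex /export/testvol-brick/path/to/file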
On 10/18/2014 04:20 AM, Anirban Ghoshal wrote:
Hi everyone,
I have this really confusing split-brain here that's bothering me. I
am running glusterfs 3.4.2 over linux 2.6.34. I have a replica 2
volume 'testvol' that is It
I was able to run another set of tests this week and I was able to
reproduce the issue again. Going by the extended attributes, I think I ran
into the same issue I saw earlier.
Do you think I need to open up a bug report?
Brick 1:
trusted.afr.PL2-client-0=0x
On 09/19/2014 09:58 PM, Ramesh Natarajan wrote:
I was able to run another set of tests this week and I was able to
reproduce the issue again. Going by the extended attributes, I think I
ran into the same issue I saw earlier.
Do you think I need to open up a bug report?
hi Ramesh,
I
I don't understand why there's such a complicated process to recover when I
can just look at both files, decide which one I need and delete the other one.
On Thu, Sep 11, 2014 at 7:56 AM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
On 09/11/2014 09:29 AM, Ilya Ivanov wrote:
Right...
On 09/11/2014 11:37 AM, Ilya Ivanov wrote:
I don't understand why there's such a complicated process to recover
when I can just look at both files, decide which one I need and delete
the other one.
If the file needs to be deleted, the whole file needs to be copied, which
is fine for small files
Makes some sense. Yes, I meant make a backup and delete, rather than just
delete.
If I may suggest, putting that debug link somewhere more visible would be
good, too. I wouldn't have found it without your help.
Thank you for the assistance.
On Thu, Sep 11, 2014 at 9:14 AM, Pranith Kumar Karampuri
On 09/11/2014 01:13 PM, Ilya Ivanov wrote:
Makes some sense. Yes, I meant make a backup and delete, rather than
just delete.
If I may suggest, putting that debug link somewhere more visible would
be good, too. I wouldn't have found it without your help.
Justin, where shall we put the doc?
On 11/09/2014, at 9:44 AM, Pranith Kumar Karampuri wrote:
On 09/11/2014 01:13 PM, Ilya Ivanov wrote:
Makes some sense. Yes, I meant make a backup and delete, rather than just
delete.
If I may suggest, putting that debug link somewhere more visible would be
good, too. I wouldn't find
Any insight?
On Tue, Sep 9, 2014 at 8:35 AM, Ilya Ivanov bearw...@gmail.com wrote:
What's a gfid split-brain and how is it different from normal
split-brain?
I accessed the file with stat, but heal info still shows Number of
entries: 1
[root@gluster1 gluster]# getfattr -d -m. -e hex
On 09/11/2014 12:16 AM, Ilya Ivanov wrote:
Any insight?
Was the other file's gfid d3def9e1-c6d0-4b7d-a322-b5019305182e?
Could you check if this file exists under .glusterfs/d3/de/ on the brick?
When a file is deleted, this gfid file also needs to be deleted if there are
no more hardlinks to the file.
Pranith
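A rough sketch of that check, using the gfid from this thread (the brick path is also the one from this thread; adjust as needed):
GFID=d3def9e1-c6d0-4b7d-a322-b5019305182e
# does anything besides the .glusterfs entry still link to this inode?
stat -c 'links=%h %n' /home/gluster/gv01/.glusterfs/d3/de/$GFID
find /home/gluster/gv01 -samefile /home/gluster/gv01/.glusterfs/d3/de/$GFID
# if the link count is 1 and find returns only the .glusterfs entry,
# the named file is gone and the stale gfid link can be removed:
rm /home/gluster/gv01/.glusterfs/d3/de/$GFID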
Right... I deleted it and now all appears to be fine.
Still, could you please elaborate on gfid split-brain?
On Thu, Sep 11, 2014 at 5:32 AM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
On 09/11/2014 12:16 AM, Ilya Ivanov wrote:
Any insight?
Was the other file's gfid
On 09/11/2014 09:29 AM, Ilya Ivanov wrote:
Right... I deleted it and now all appears to be fine.
Still, could you please elaborate on gfid split-brain?
Could you go through
https://github.com/gluster/glusterfs/blob/master/doc/debugging/split-brain.md
Let us know if you would like something to
On 09/09/2014 11:35 AM, Ilya Ivanov wrote:
Ahh, thank you, now I get it. I deleted it on one node and it
replicated to another one. Now I get the following output:
[root@gluster1 var]# gluster volume heal gv01 info
Brick gluster1:/home/gluster/gv01/
gfid:d3def9e1-c6d0-4b7d-a322-b5019305182e
What's a gfid split-brain and how is it different from normal split-brain?
I accessed the file with stat, but heal info still shows Number of
entries: 1
[root@gluster1 gluster]# getfattr -d -m. -e hex gv01/123
# getfattr -d -m. -e hex gv01/123
# file: gv01/123
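If it helps: a gfid split-brain means the same name carries different trusted.gfid values on the replicas, whereas a data/metadata split-brain has matching gfids but trusted.afr counters blaming each other. A quick sketch to check (brick path from this thread):
# run on gluster1 and gluster2 and compare the hex values
getfattr -n trusted.gfid -e hex /home/gluster/gv01/123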
Ahh, thank you, now I get it. I deleted it on one node and it replicated to
another one. Now I get the following output:
[root@gluster1 var]# gluster volume heal gv01 info
Brick gluster1:/home/gluster/gv01/
gfid:d3def9e1-c6d0-4b7d-a322-b5019305182e
Number of entries: 1
Brick
On 09/09/2014 01:54 AM, Ilya Ivanov wrote:
Hello.
I've Gluster 3.5.2 on CentOS 6. A primitive replicated volume, as
described here
https://www.digitalocean.com/community/tutorials/how-to-create-a-redundant-storage-pool-using-glusterfs-on-ubuntu-servers.
I tried to simulate split-brain by
What is the glusterfs version where you ran into this issue?
Pranith
On 09/05/2014 11:52 PM, Ramesh Natarajan wrote:
I have a replicate glusterfs setup on 3 Bricks ( replicate = 3 ). I
have client and server quorum turned on. I rebooted one of the 3
bricks. When it came back up, the client
3.5.2 installed via rpm from official gluster repo, running on amazon ami.
Thanks
Ramesh
On Sep 6, 2014, at 7:29 AM, Pranith Kumar Karampuri pkara...@redhat.com
wrote:
What is the glusterfs version where you ran into this issue?
Pranith
On 09/05/2014 11:52 PM, Ramesh Natarajan wrote:
On 09/06/2014 06:17 PM, Ramesh Natarajan wrote:
3.5.2 installed via rpm from official gluster repo, running on amazon ami.
Including the client i.e. mount bits?
Pranith
Thanks
Ramesh
On Sep 6, 2014, at 7:29 AM, Pranith Kumar Karampuri
pkara...@redhat.com mailto:pkara...@redhat.com wrote:
Client is also running the same version. It is running RHEL 6.5.
The mount command used is
mount -t glusterfs -o
On 09/06/2014 04:53 AM, Jeff Darcy wrote:
I have a replicate glusterfs setup on 3 Bricks ( replicate = 3 ). I have
client and server quorum turned on. I rebooted one of the 3 bricks. When it
came back up, the client started throwing error messages that one of the
files went into split brain.
I have a replicate glusterfs setup on 3 Bricks ( replicate = 3 ). I have
client and server quorum turned on. I rebooted one of the 3 bricks. When it
came back up, the client started throwing error messages that one of the
files went into split brain.
This is a good example of how split brain
Thanks Jeff for the detailed explanation. You mentioned delayed changelog may
have prevented this issue. Can you please tell me how to enable it?
Thanks
Ramesh
On Sep 5, 2014, at 6:23 PM, Jeff Darcy jda...@redhat.com wrote:
I have a replicate glusterfs setup on 3 Bricks ( replicate = 3 ). I
Hi,
I seem to have developed a split brain situation on a directory... I
think. The directory is a hierarchy that shouldn't exist, as it's
actually a hard link to the volume itself in the brick. Here's what I'm
seeing:
- Brick mount point (XFS): /mnt/gfs/wingu1/sdb1
- Volume:
Looks weird, but I don't think it's a strictly defined split brain. Can you
paste the output of the getfattr command on the directory you mentioned?
----- Original Message -----
From: Alan Orth
Sent: Monday, July 28, 2014 6:09 PM
To: gluster-users
Subject: [Gluster-users] Split brain on directory
?
----- Original Message ----- From: Alan Orth
Sent: Monday, July 28, 2014 6:09 PM
To: gluster-users
Subject: [Gluster-users] Split brain on directory?
please run:
getfattr -de hex -m . the_directory
----- Original Message -----
From: Alan Orth
Sent: Monday, July 28, 2014 7:42 PM
To: david.zhang...@gmail.com ; gluster-users
Subject: Re: [Gluster-users] Split brain on directory?
...@gmail.com wrote:
looks weird, but I don't think it's a strictly defined split brain. Can
you
paste the output of the getfattr command on the directory you mentioned?
-原始邮件-
From: Alan Orth
Sent: Monday, July 28, 2014 6:09 PM
To: gluster-users
Subject: [Gluster-users] Split brain on directory
Hi all,
Running glusterfs-3.4.2-1.el6.x86_6 on centos6.5
Some smart people screwed up my network connection on the nodes for I don't
know how long. I found that my GlusterFS volume is in split-brain. I
googled and found different ways to clean this up. I need some extra help on this.
#