Hi all
Last Friday we had some network issues; the built-in Gluster heal mechanism
resolved almost all unsynced entries, except for 16 entries.
The command gluster volume heal public info displays:
Brick gfs06a-gs:/mnt/public/brick1
Status: Connected
Number of entries: 16
The
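As a rough sketch of the usual next steps (the volume name public is taken from the command above; whether these are appropriate depends on what the 16 entries actually are):

gluster volume heal public info split-brain   # check whether the remaining entries are in split-brain
gluster volume heal public full               # otherwise, re-trigger a full self-heal sweep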
Hi,
After it happened the last time last Wednesday, I ran the command “gluster
volume heal full”, waited until this was successful and then stopped and
started the gluster daemon again. Yesterday (Thursday) it didn’t happen
again. Hopefully it’s fixed; otherwise I will post the requested
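For reference, a rough sketch of that sequence, assuming the volume is called public (the service command depends on the distribution):

gluster volume heal public full     # trigger a full heal
gluster volume heal public info     # repeat until the entry count drops to 0
systemctl restart glusterd          # or: service glusterd restart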
Hi all,
About a month ago we deployed a GlusterFS 3.7.13 cluster with 6 nodes (3 x 2
replication). Since this week, one node in the cluster has suddenly started
reporting unsynced entries once a day. If I then run a gluster volume heal
full command, the unsynced entries disappear until the next
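For context, a 6-node 3 x 2 volume of this kind would typically have been created along these lines (hostnames and brick paths below are hypothetical; consecutive bricks form the replica pairs):

gluster volume create public replica 2 \
  gfs01a:/mnt/public/brick1 gfs01b:/mnt/public/brick1 \
  gfs02a:/mnt/public/brick1 gfs02b:/mnt/public/brick1 \
  gfs03a:/mnt/public/brick1 gfs03b:/mnt/public/brick1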
Hi all,
In our test lab we have a Gluster distributed/replicated setup with 2 nodes
(1 x 2). A volume is created which is accessible by a Gluster client that lives
on a third node. We bring one node in the cluster down and write some files to
the existing volume. After that we bring up the node
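A rough sketch of how such a test run can be driven and verified (volume name, mount point and service commands are assumptions):

# on the node being failed: stop the management daemon and the brick process
systemctl stop glusterd
pkill glusterfsd
# on the client: write some data while the node is down
dd if=/dev/zero of=/mnt/public/testfile bs=1M count=10
# bring the node back and watch the heal queue drain
systemctl start glusterd
gluster volume heal public info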
Hi all,
We have a 4 node distributed/replicated setup (2 x 2) with gluster version
3.6.4.
Yesterday one node went down due to a power failure; as expected, everything
kept working well. But after we brought the failed node up again, Gluster
started, also as expected, its self-healing process.
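While the self-heal runs, its progress can be followed with something like (volume name assumed to be public):

gluster volume heal public info                    # entries still pending heal
gluster volume heal public statistics heal-count   # per-brick count of entries to be healed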
> On 21 Sep 2015, at 12:44, Ravishankar N <ravishan...@redhat.com> wrote:
>
> s
>
> On 09/21/2015 03:48 PM, Davy Croonen wrote:
>> Hmmm, strange, I went through all my bricks with every time the same result:
>>
>> -bash: cd:
>> /mnt/public/br
> when the command is run.
> Is it possible to stop and start the volume again and see if that helps?
> (clients might lose access to it during that time though).
>
> -Ravi
>
>
> On 09/21/2015 06:19 PM, Davy Croonen wrote:
>> Please find attached the requested logs.
>
in there but not one with the referenced gfid.
Any further suggestions? If I can just get rid of the message, that’s fine.
Thanks in advance.
Kind regards
Davy
> On 21 Sep 2015, at 11:59, Ravishankar N <ravishan...@redhat.com> wrote:
>
>
>
> On 09/21/2015 03:09 PM, Davy Croonen wrote:
vishan...@redhat.com> wrote:
>
>
>
> On 09/21/2015 02:32 PM, Davy Croonen wrote:
>> Hi all
>>
>> For an, at the moment, unknown reason, the command "gluster volume heal public
>> info" shows a lot of the following entries:
>>
>> /Doc1_LOUJA.ht
Hi all
For an, at the moment, unknown reason, the command "gluster volume heal public
info" shows a lot of the following entries:
/Doc1_LOUJA.htm - Is in split-brain.
The part after the / differs, but the gfid is always the same; I suppose this
gfid is referring to a directory. Now considering
regards
Davy
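In case it is useful, a common way to map a gfid back to a path on a brick is sketched below (the brick path and gfid are placeholders; for regular files the .glusterfs entry is a hard link, for directories it is a symlink):

BRICK=/mnt/public/brick1
GFID=01234567-89ab-cdef-0123-456789abcdef
# regular file: list the other hard link(s) of the gfid file
find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
# directory: the gfid entry is a symlink, so simply resolve it
readlink "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"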
On 15 Sep 2015, at 17:04, Davy Croonen
<davy.croo...@smartbit.be> wrote:
Hi all
After expanding our cluster we are facing failures while rebalancing. In my
opinion this doesn’t look good, so can anybody maybe explain how these failures
coul
Hi all
After expanding our cluster we are facing failures while rebalancing. In my
opinion this doesn’t look good, so can anybody maybe explain how these failures
could arise, how you can fix them or what the consequences can be?
$ gluster volume rebalance public status
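The failure counts show up per node in the status output; the details behind them usually end up in the rebalance log on each node, so a first check could be (default log path, volume name public assumed):

grep " E " /var/log/glusterfs/public-rebalance.log | tail -20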
Ok, ignore this issue. User and group ownership was not set correctly on the
brick directories.
KR
Davy
On 14 Sep 2015, at 14:13, Davy Croonen
<davy.croo...@smartbit.be> wrote:
Hi all
We currently have a production cluster with 2 nodes in a
Hi all
We currently have a production cluster with 2 nodes in a distributed replicated
setup with glusterfs version 3.6.4, which was updated from gluster version
3.5.x. I just expanded the cluster with 2 extra nodes with glusterfs version
3.6.4 installed, but when running the rebalancing command
ut
somehow I managed to get the workaround today.
Could you do an explicit volume set on the existing cluster and then do
a peer probe? Let me know if that works.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1248895
Thanks,
Atin
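A minimal sketch of that workaround, assuming the volume is public and the new node is gfs02a; the option and value below are only an example, the point is simply to perform an explicit volume set before probing:

gluster volume set public network.ping-timeout 42   # any explicit set, to rewrite the volume info/cksum
gluster peer probe gfs02a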
On 09/11/2015 05:41 PM, Davy Croonen wrote:
Atin
Please see t
Hi all
We have a production cluster with 2 nodes (gfs01a and gfs01b) in a distributed
replicate setup with glusterfs 3.6.4. We want to expand the volume with 2 extra
nodes (gfs02a and gfs02b) because we are running out of disk space. Therefore we
deployed 2 extra nodes with glusterfs 3.6.4.
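For reference, the expansion itself would normally look something like this (brick paths are hypothetical):

gluster peer probe gfs02a
gluster peer probe gfs02b
gluster volume add-brick public gfs02a:/mnt/public/brick1 gfs02b:/mnt/public/brick1
gluster volume rebalance public start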
located in (/var/lib/glusterd/vols/public/cksum).
Regards
Rafi KC
On 09/11/2015 03:24 PM, Davy Croonen wrote:
Hi all
We have a production cluster with 2 nodes (gfs01a and gfs01b) in a distributed
replicate setup with glusterfs 3.6.4. We want to expand the volume with 2 extra
nodes (gfs02a and
Atin
Please see the requested attachments.
KR
Davy
> On 11 Sep 2015, at 14:03, Atin Mukherjee <amukh...@redhat.com> wrote:
>
> Could you attach the contents of the /var/lib/glusterd/vols/<volname>/info
> file from both the nodes?
>
> ~Atin
>
> On 09/11/2015 04:50 PM, Davy Cro
Hi
We are currently running a gluster cluster with 2 nodes in replica 2. We mount
the distributed volume with the gluster fuse client and a backup volume file for
failover.
The backup volume file looks as follows:
volume node1
type protocol/client
option transport-type tcp
option
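For completeness, a minimal sketch of such a client-side volume file with failover across both replicas (hostnames, brick path and volume names below are hypothetical):

volume node1
  type protocol/client
  option transport-type tcp
  option remote-host gfs01a
  option remote-subvolume /mnt/public/brick1
end-volume

volume node2
  type protocol/client
  option transport-type tcp
  option remote-host gfs01b
  option remote-subvolume /mnt/public/brick1
end-volume

volume replicate0
  type cluster/replicate
  subvolumes node1 node2
end-volume

Such a file can then be mounted with, e.g., glusterfs -f /etc/glusterfs/public-failover.vol /mnt/public (the volfile path is hypothetical).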
Thanks for your answer. This clarifies a lot.
KR
Davy
Hi Atin
Any news on this one?
KR
Davy
On 12 Aug 2015, at 16:41, Atin Mukherjee
atin.mukherje...@gmail.com wrote:
Davy,
I will check this with Kaleb and get back to you.
-Atin
Sent from one plus one
On Aug 12, 2015 7:22 PM, Davy Croonen
davy.croo
in the etc-glusterfs-glusterd.vol.log file, I’m looking forward to hearing from
you.
Thanks in advance.
KR
Davy
On 11 Aug 2015, at 19:28, Atin Mukherjee
atin.mukherje...@gmail.com wrote:
-Atin
Sent from one plus one
On Aug 11, 2015 7:54 PM, Davy Croonen
Mukherjee
amukh...@redhat.com wrote:
Well, this looks like a bug in 3.7 as well. I've posted a fix [1]
to address it.
[1] http://review.gluster.org/11898
Could you please raise a bug for this?
~Atin
On 08/12/2015 01:32 PM, Davy Croonen wrote:
Hi Atin
Thanks for your
Hi all
Our etc-glusterfs-glusterd.vol.log is filling up with entries as shown:
[2015-08-11 11:40:33.807940] E
[glusterd-utils.c:7410:glusterd_add_inode_size_to_dict] 0-management: tune2fs
exited with non-zero exit status
[2015-08-11 11:40:33.807962] E
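That error means glusterd could not read the inode size of the brick's filesystem. The same call can be reproduced by hand to see the underlying error, e.g. (the device name is hypothetical; for XFS bricks glusterd uses xfs_info instead of tune2fs):

tune2fs -l /dev/sdb1 | grep -i "inode size"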