Hi Alessandro,
what you describe here reminds me of this issue:
http://www.spinics.net/lists/gluster-users/msg20144.html
And now that you mention it, the mess on our cluster could indeed have
been triggered by an aborted rebalance.
This is a very important clue, since apparently developers were
-c3b9f0cf9d72.vhd
Cheers,
Olav
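As a side note, a quick way to see whether a rebalance was left half-finished
is the rebalance status command (the volume name below is just a placeholder):

# Show per-node rebalance progress and failures for the volume "myvol" (placeholder name).
gluster volume rebalance myvol status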
On 21/02/15 01:37, Olav Peeters wrote:
It looks even worse than I had feared... :-(
This really is a crazy bug.
If I understand you correctly, the only sane pairing of the xattrs is
of the two 0-byte files, since this is the full list of bricks:
[root@gluster01 ~]# gluster
versions
of the same file, not three, and on two bricks on different machines?
Cheers,
Olav
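For what it is worth, a sketch of how one could compare the xattrs of those two
0-byte files directly on their bricks (the brick and file paths below are
placeholders, not the real ones from this volume):

# Dump all trusted.* xattrs of the suspect file on each brick holding a copy
# (placeholder brick paths). A 0-byte, sticky-bit-only file that carries
# trusted.glusterfs.dht.linkto is a DHT link file pointing at the brick that
# holds the real data.
getfattr -d -m . -e hex /bricks/brick1/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd
getfattr -d -m . -e hex /bricks/brick2/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd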
On 20/02/15 21:51, Joe Julian wrote:
On 02/20/2015 12:21 PM, Olav Peeters wrote:
Let's take one file (3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd) as an
example...
On the 3 nodes where all bricks
would mend
itself after all...
Thanks a million for your support in this darkest hour of my time as a
glusterfs user :-)
Cheers,
Olav
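If it helps, a rough loop to locate every copy of that example file on one node
(assuming, and this is only a guess, that the bricks are mounted under /bricks):

# For each brick on this node, show size/permissions and the dht.linkto xattr
# (if any) of the example file, so data copies and link files can be told apart.
for b in /bricks/*; do
    f="$b/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd"
    [ -e "$f" ] || continue
    ls -l "$f"
    getfattr -n trusted.glusterfs.dht.linkto -e text "$f" 2>/dev/null
done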
On 20/02/15 23:10, Joe Julian wrote:
On 02/20/2015 01:47 PM, Olav Peeters wrote:
Thanks Joe,
for the answers!
I was not clear enough about the set up
f -size 0 -perm 1000
-exec rm {} \;
not?
Thanks!
Cheers,
Olav
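For the record, a fuller sketch of the find command quoted above, with a dry run
first (the brick path is a placeholder, and .glusterfs is skipped on purpose):

# Dry run: list 0-byte files carrying only the sticky bit (mode 1000), the usual
# signature of DHT link files, without descending into .glusterfs.
# /bricks/brick1 is a placeholder brick path.
find /bricks/brick1 -path '*/.glusterfs' -prune -o \
     -type f -size 0 -perm 1000 -print

# Only once that listing looks sane, repeat with -exec rm:
find /bricks/brick1 -path '*/.glusterfs' -prune -o \
     -type f -size 0 -perm 1000 -exec rm -v {} \;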
On 18/02/15 22:10, Olav Peeters wrote:
Thanks Tom and Joe,
for the fast response!
Before I started my upgrade I stopped all clients using the volume and
stopped all VMs with VHDs on the volume, but I guess, and this may
Hi all,
I'm having this problem after upgrading from 3.5.3 to 3.6.2.
At the moment I am still waiting for a heal to finish (on a 31TB volume
with 42 bricks, replicated over three nodes).
Tom,
how did you remove the duplicates?
With 42 bricks I will not be able to do this manually...
Did a:
find
- Original Message -
Subject: Re: [Gluster-users] Hundreds of duplicate files
From: Olav Peeters opeet...@gmail.com
Date: 2/18/15 10:52 am
To: gluster-users@gluster.org, tben...@3vgeomatics.com
Hi all,
I'm having this problem after upgrading from 3.5.3 to 3.6.2
Hi,
two days ago I started a gluster volume remove-brick on a
Distributed-Replicate volume with 21 x 2 per node (3 in total).
I wanted to remove 4 bricks per node which are smaller than the others
(on each node I have 7 x 2TB disks and 4 x 500GB disks).
I am still on gluster 3.5.2. and I was
of space
on brick 6, 14, etc. ?
Anyone any idea?
Cheers,
Olav
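In case it is useful to anyone following along, the remove-brick flow I mean is
roughly this (the volume name and brick paths here are made up):

# Start migrating data off the bricks being removed (one replica pair shown;
# "myvol" and the brick paths are placeholders).
gluster volume remove-brick myvol \
    gluster01:/bricks/small1 gluster02:/bricks/small1 start

# Watch progress; every node should eventually report "completed".
gluster volume remove-brick myvol \
    gluster01:/bricks/small1 gluster02:/bricks/small1 status

# Commit only after the migration has completed everywhere.
gluster volume remove-brick myvol \
    gluster01:/bricks/small1 gluster02:/bricks/small1 commit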
On 21/01/15 13:18, Olav Peeters wrote:
Hi,
two days ago I started a gluster volume remove-brick on a
Distributed-Replicate volume with 21 x 2 per node (3 in total).
I wanted to remove 4 bricks per node which are smaller than
OK, thanks for the info!
Regards,
Olav
On 11/06/14 08:38, Pranith Kumar Karampuri wrote:
On 06/11/2014 12:03 PM, Olav Peeters wrote:
Thanks Pranith!
I see this at the end of the log files of one of the problem bricks
(the first two errors are repeated several times):
[2014-06-10 09:55
?
Would it help to do all fuse connections via NFS until after the fix?
Cheers,
Olav
On 11/06/14 08:44, Olav Peeters wrote:
OK, thanks for the info!
Regards,
Olav
On 11/06/14 08:38, Pranith Kumar Karampuri wrote:
On 06/11/2014 12:03 PM, Olav Peeters wrote:
Thanks Pranith!
I see this at the end
Hi,
I upgraded from glusterfs 3.4 to 3.5 about 8 days ago. Everything was
running fine until this morning. In a fuse mount we were having write
issues. Creating and deleting files became an issue all of a sudden
without any new changes to the cluster.
In /var/log/glusterfs/glustershd.log
Thanks Franco,
for the feed-back!
Did you stop gluster before updating? Or were there maybe no active
reads/writes since it was a test system?
Cheers,
Olav
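For context, "stopping gluster" on a node before the package update would be
roughly this (the service name is as on EL6, which is an assumption on my part):

# Stop the management daemon, then any remaining brick, self-heal and NFS
# processes, before running the update on this node.
service glusterd stop
pkill glusterfsd
pkill glusterfs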
On 15/05/14 02:36, Franco Broi wrote:
On Wed, 2014-05-14 at 12:31 +0200, Olav Peeters wrote:
Hi,
from what I read here:
http
Hi,
from what I read here:
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5
... if you are on 3.4.0 AND have NO quota configured, it should be safe
to just replace the version-specific /etc/yum.repos.d/glusterfs-epel.repo
with e.g.:
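(only a rough sketch; the URL below is a placeholder, not the real 3.5 repo file)

# Swap the old repo file for the 3.5 one, then update the gluster packages.
# The URL is a placeholder; use the official glusterfs-epel.repo for 3.5.
cd /etc/yum.repos.d/
mv glusterfs-epel.repo glusterfs-epel.repo.34
wget -O glusterfs-epel.repo http://example.org/path/to/glusterfs-35-epel.repo
yum clean metadata && yum update glusterfs\*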