-- Original Message --
From: "Oleksandr Natalenko" <oleksa...@natalenko.name>
To: gluster-us...@gluster.org; "David Robinson"
<drobin...@corvidtec.com>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: 2/22/2016 1:12:01 PM
The 3.7.8 FUSE client is significantly slower than 3.7.6. Is this
related to some of the fixes that were done to correct memory leaks? Is
there anything that I can do to recover the performance of 3.7.6?
My testing involved creating a "bigfile" that is 20GB. I then installed
the 3.6.6 FUSE
I am sorting a fairly large file (27-million lines) and the output is
being written to my gluster storage. This seems to crash glusterfsd for
3.7.8 as noted below.
Can anyone help?
David
[Thu Feb 11 18:25:24 2016] glusterfsd: page allocation failure. order:5,
mode:0x20
[Thu Feb 11 18:25:24
From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
To: "Glomski, Patrick" <patrick.glom...@corvidtec.com>;
gluster-devel@gluster.org; gluster-us...@gluster.org
Cc: "David Robinson" <david.robin...@corvidtec.com>
Sent: 12/21/2015 11:59:33 PM
Subject: Re: [Gluste
sure... I'll setup a watch command to run this at some interval and send
the files.
David
-- Original Message --
From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
To: "David Robinson" <drobin...@corvidtec.com>; "Glomski, Patrick"
<
Niels,
> 1. how is infiniband involved/configured in this environment?
gfsib01bkp and gfs02bkp are connected via infiniband. We are using tcp
transport, as I was never able to get RDMA to work.
Volume Name: gfsbackup
Type: Distribute
Volume ID: e78d5123-d9bc-4d88-9c73-61d28abf0b41
Status:
Is there any way to force a mount of a 3.6 server using a 3.7.6 FUSE
client?
My production machine is 3.6.6 and my test platform is 3.7.6. I would
like to test the 3.7.6 FUSE client but would need for this client to be
able to mount both a 3.6.6 and a 3.7.6 server.
When I try to mount the
having the problem.
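For reference, the FUSE mount itself takes the same form regardless of server version; a minimal sketch, where the host and volume names below are hypothetical:

```shell
# Mount a gluster volume over FUSE (host and volume names are examples only)
mount -t glusterfs gfs01a.example.com:/homegfs /mnt/homegfs
```

Whether the mount succeeds across a 3.6/3.7 version gap depends on the cluster's op-version and the client's compatibility, not on the mount syntax.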
Is there a way to check the client version of all clients connected to a
gluster server?
David
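One hedged approach to the question above: the CLI can list connected clients per volume, and brick logs record the version each client reports on connect (the exact log message and path vary by release). The volume name below is an example:

```shell
# List the clients currently connected to a volume's bricks
gluster volume status gfsbackup clients

# Brick logs note each client's reported version when it connects
# (message format varies by release)
grep -h "accepted client" /var/log/glusterfs/bricks/*.log | tail
```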
-- Original Message --
From: "Ben Turner" <btur...@redhat.com>
To: "David Robinson" <drobin...@corvidtec.com>
Cc: gluster-us...@gluste
I upgraded my storage server from 3.6.3 to 3.6.6 and am now having
issues. My setup (4x2) is shown below. One of the bricks (gfs01a) has
a very high cpu-load even though the load on the other 3-bricks (gfs01b,
gfs02a, gfs02b) is almost zero. The FUSE mounted partition is extremely
slow and
I have a replica pair setup that I was trying to upgrade from 3.7.4 to
3.7.5.
After upgrading the rpm packages (rpm -Uvh *.rpm) and rebooting one of
the nodes, I am now receiving the following:
[root@frick01 log]# gluster volume status
Staging failed on frackib01.corvidtec.com. Please check
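"Staging failed" on one peer after a partial upgrade is often a peer or op-version mismatch. A sanity-check sketch, run on each node, assuming the standard RPM layout:

```shell
# Confirm all peers are connected
gluster peer status

# Confirm the installed version on this node
glusterfsd --version

# The cluster operating version is recorded per node by glusterd
grep operating-version /var/lib/glusterd/glusterd.info
</imports>
```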
about ACL. Don't think I can turn
ACL off on XFS.
I am happy to post results of strace. Do I just do 'strace tar -xPf
boost.tar strace.log'?
David
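A note on the command quoted above: as written, `strace.log` would be passed to tar as a member name to extract, and the trace would go to stderr. The usual form (a sketch) writes the trace to a file with `-o`:

```shell
# -f follows forked children; -o writes the trace to strace.log
strace -f -o strace.log tar -xPf boost.tar
```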
-- Original Message --
From: Jeff Darcy jda...@redhat.com
To: David Robinson drobin...@corvidtec.com
Cc: gluster-us...@gluster.org
My NFS stopped working after upgrading to 3.6.3. When I do a gluster
volume status homegfs_bkp, it shows as N/A and I cannot mount the volume
using NFS instead of FUSE.
Any suggestions for how to fix?
[root@gfs01bkp glusterfs]# gluster volume status homegfs_bkp
Status of volume: homegfs_bkp
early days.
3.6.3 *should* be better than 3.6.2. David Robinson (CC'd) mentioned he
saw a change in client connectivity behaviour, which is worth
knowing about beforehand. I don't know the details, though David
mentioned he'll send info about it through to the mailing list.
I'd wait
It looks like my issue was due to a change in the way name resolution is
now handled in 3.6.3. I'll send in an explanation tomorrow in case
anyone else is having a similar issue.
David
-- Original Message --
From: David Robinson drobin...@corvidtec.com
To: gluster-us...@gluster.org
Shyam,
These files are linkto files created by DHT, which basically means
the files were either renamed, or the brick layout changed (I suspect the
former to be the cause).
These files should have been deleted when the files that they point to were
deleted; it looks like this did