I am periodically seeing a high number of these messages in the client
log - nothing in the log for the bricks. There appears to be a log entry
for every file in that directory, including sub-directories. I checked
getfattr on the bricks and they have the gfid set, and the gfids on both
replica bricks match.
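For reference, a typical way to dump those attributes directly on a brick (the brick path here is hypothetical):
getfattr -d -m . -e hex /export/brick1/path/to/file
# the trusted.gfid value should be identical on every replica brick for the same file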
On 2/5/12 2:09 PM, Stefan Becker wrote:
On the webservers I played around with versions up to 3.2.5; nothing
helped. On the storage server such an upgrade will not be that easy :)
What version of Gluster are the storage servers running? I don't believe
there is much work involved in
Can you post the client logs also? There should be a filename
corresponding to the mountpoint of the gluster volume on the client.
Since you are running a replicate volume, you could try shutting down
gluster on each of the servers in turn and seeing if the write block
only occurs on one of
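For example (hypothetical mount point; on a default install the client log lives under /var/log/glusterfs, named after the mount path with slashes turned into dashes):
# a volume mounted at /mnt/gluster would log to:
tail -f /var/log/glusterfs/mnt-gluster.log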
On 2/5/12 4:12 PM, Ove K. Pettersen wrote:
Hi...
Started playing with gluster, and the heal functions are my target
for testing.
Short description of my test:
* 4 replicas on single machine
* glusterfs mounted locally
* Create file on glusterfs-mounted directory:
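A minimal sketch of that kind of single-host, 4-replica test (hostname and brick paths are hypothetical, commands are 3.3-style, and the CLI may warn when all replicas sit on one host):
gluster volume create testvol replica 4 host1:/bricks/b1 host1:/bricks/b2 host1:/bricks/b3 host1:/bricks/b4
gluster volume start testvol
mount -t glusterfs host1:/testvol /mnt/testvol
# after taking a brick offline and bringing it back, list what still needs healing:
gluster volume heal testvol info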
I've four systems with multiple 4-way replica volumes. I'm
migrating a number of volumes from Fuse to NFS for performance
reasons.
My first two hosts seem to work nicely, but the other two won't
start the NFS services properly. I looked through the
Bryan Whitehead wrote:
Did you start the portmap service before you started gluster?
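A quick way to check (service names vary: portmap on EL5, rpcbind on EL6):
service rpcbind status
rpcinfo -p localhost
# portmapper should be listed; once gluster's built-in NFS server registers, mountd/nfs entries show up too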
On Sun, Mar 4, 2012 at 11:53 AM, David Coulson da...@davidcoulson.net wrote:
I've four systems with multiple 4-way replica volumes. I'm
migrating a number of volumes from
I recently moved from the fuse client to NFS - Now I'm seeing a bunch of
this in syslog. Is this something to be concerned about, or is it
'normal' NFS behavior?
NFS: server localhost error: fileid changed
fsid 0:15: expected fileid 0xd88ba88a97875981, got 0x40e476ef5fdfbe9f
I also see a lot
Is there a change log somewhere for 3.2.6 (or the p3 which is available
for QA)?
David
On 3/14/12 3:21 PM, John Mark Walker wrote:
Greetings,
There are 2 imminent releases coming soon to a download server near you:
1. GlusterFS 3.2.6 - a maintenance release that fixes some bugs.
2.
I ended up using the 'nolock' option with NFS - Even with only one
client mounted, I had issues with locking.
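For illustration, a mount along those lines (server and volume names are placeholders; gluster's built-in NFS server only does NFSv3 over TCP):
mount -t nfs -o vers=3,tcp,nolock server1:/testvol /mnt/testvol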
On 3/14/12 5:26 PM, Sean Fulton wrote:
We have a four-node, replicated cluster. When using the native gluster
client, we use the local server as the mount point (ie., mount
Is there a FAQ/document somewhere with optimal mkfs and mount options
for ext4 and xfs? Is xfs still the 'desired' filesystem for gluster bricks?
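For reference, the options most often suggested for XFS bricks are a larger inode size (room for gluster's xattrs) plus noatime/inode64 at mount time - treat this as a starting point, not gospel, and the device and mount point are hypothetical:
mkfs.xfs -i size=512 /dev/sdb1
mount -o noatime,inode64 /dev/sdb1 /export/brick1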
On 3/15/12 3:22 AM, Brian Candler wrote:
On Wed, Mar 14, 2012 at 11:09:28PM -0500, D. Dante Lorenso wrote:
get 50-60 MB/s transfer speeds tops when
Weird - Actually slower than fuse. Does the 'nolock' nfs mount option
make a difference?
On 3/21/12 1:22 PM, Bryan Whitehead wrote:
[root@lab0-v3 ~]# mount -t nfs -o tcp,nfsvers=3 localhost:/images /mnt
[root@lab0-v3 ~]# cd /mnt
[root@lab0-v3 mnt]# time bash -c 'tar xf /root/linux-3.3.tar ;
Gluster relies on DNS and/or /etc/hosts to determine the IP for a
particular cluster member. You can have gluster utilize a different IP
for *new* connections by updating DNS or /etc/hosts to point the cluster
peer name to a new IP.
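For example, if the peer was probed as gluster-node2 (names and addresses here are made up), moving it to a new IP is just a hosts-file or DNS change on every node:
# /etc/hosts - old entry:
#   192.168.1.10  gluster-node2
# new entry, used for new connections:
192.168.2.10  gluster-node2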
On 4/21/12 7:31 AM, lejeczek wrote:
Hello everybody
this
Do you have any firewall rules enabled? I'd start by disabling iptables
(or at least setting everything to ACCEPT) and as someone else suggested
setting selinux to permissive/disabled.
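Roughly (generic EL commands, nothing gluster-specific):
iptables -L -n          # see what rules are currently active
service iptables stop   # or set everything to ACCEPT
setenforce 0            # SELinux to permissive for the duration of the test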
Why are your nodes and client using different versions of Gluster? Why
not just use the 3.2.6 version for
is already documented in
my post on the Mageia Forum
https://forums.mageia.org/en/viewtopic.php?f=7&t=2358&p=17517.
Eric Pretorious
Truckee, CA
From: David Coulson da...@davidcoulson.net
To: Eric epretori
) 100% Mageia.
Eric Pretorious
Truckee, CA
From: David Coulson da...@davidcoulson.net
To: Eric epretori...@yahoo.com
Cc: gluster-users@gluster.org
Sent: Saturday, May 5
Is there a migration guide from 3.2.5 to 3.3 available?
On 5/31/12 12:33 PM, John Mark Walker wrote:
Today, we're announcing the next generation of GlusterFS
http://www.gluster.org/, version 3.3. The release has been a year in
the making and marks several firsts: the first post-acquisition
I experienced the following going from both 3.2.5 and 3.2.6 to 3.3.0
(using 'official' gluster packages) on RHEL6.
[root@rhesproddns02 ~]# rpm -Uvh glusterfs-*3.3.0*
Preparing...                ########################################### [100%]
   1:glusterfs
I upgraded my 3.2.5 environment to 3.3.0 this morning. I'm seeing an approx 4x
increase in network activity since the upgrade. tcpdump indicates a volume
which is pretty much 100% reads has a lot of tcp activity between the nodes.
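For anyone wanting to reproduce the observation, a capture along these lines works (interface, host names and the 3.3-era port range are assumptions):
tcpdump -n -i eth0 host node1 and host node2 and portrange 24007-24011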
Since it is mounted NFS, I was expecting that for a 'nearly all
it on another environment this weekend to
get more solid numbers.
David
On 6/1/12 12:28 PM, David Coulson wrote:
I upgraded my 3.2.5 environment to 3.3.0 this morning. I'm seeing an approx 4x
increase in network activity since the upgrade. tcpdump indicates a volume
which is pretty much 100% reads
On 6/1/12 8:14 AM, Kaleb S. KEITHLEY wrote:
If by 'official' gluster packages you mean the glusterfs rpms in the
fedora/epel yum repo, and your 3.2.5 was built from source or using
rpms from somewhere else, including e.g. gluster.org, then your
experience is not unexpected.
I used the
You probably want to blow away your brick filesystem and start clean -
There will be xattr information that is confusing Gluster.
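If you do want to reuse the same brick directory instead of recreating the filesystem, the gluster metadata can be stripped by hand (hypothetical brick path - double-check before running anything destructive):
setfattr -x trusted.glusterfs.volume-id /export/brick1
setfattr -x trusted.gfid /export/brick1
rm -rf /export/brick1/.glusterfs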
Best practice is to use DNS names for peers, rather than IP addresses.
On 6/1/12 12:27 AM, Костырев Александр Алексеевич wrote:
Hello!
I fired up gluster
I've a volume in a 4 way replica configuration running 3.3.0 - Two
bricks are in one datacenter, two are in the other. We had some sort of
connectivity issue between the two facilities this morning, and
applications utilizing gluster mounts (via NFS; in this case a read-only
workload)
Is there a way to change this behavior? It's particularly frustrating
having Gluster mount a filesystem before the service starts up, only to
find it has grabbed ports at the top end of the sub-1024 range - IMAPS and
POP3S are typical victims at 993 and 995.
Why does it not use ports within the
On 6/4/12 4:05 AM, Jacques du Rand wrote:
Hi guys,
This all applies to Gluster 3.3.
I love gluster but I'm having some difficulties understanding some
things.
1. Replication (with existing data):
Two servers in simple single-brick replication, i.e. 1 volume (testvol)
-server1:/data/
For what it is worth, I had weird performance issues when I moved from
3.2.5 to 3.3.0 - I saw increased CPU utilization, as well as drastically
increased network utilization between the nodes with the same workload.
I could never really quantify the difference, other than I noticed my
systems
On 6/17/12 8:21 AM, Sean Fulton wrote:
This was a Linux-HA cluster with a floating IP that the clients would
mount off of whichever server is active. So I set up a two-node
replicated cluster with the floating IP and heartbeat, and the
client mounted the drive over the floating IP. I'm
On 6/22/12 7:08 AM, Marcus Bointon wrote:
Sorry, I should have said: I'm using 3.2.5 on 64-bit Ubuntu Lucid. I assume
it's OK to have one client mounted with NFS while the other uses native?
We do this all the time - Works fine.
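For example, one volume, two different clients (host and volume names are placeholders):
mount -t glusterfs server1:/testvol /mnt/testvol           # native client on one box
mount -t nfs -o vers=3,tcp server1:/testvol /mnt/testvol   # NFSv3 over TCP on another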
David
No. I saw a patch to have it behave like this, but I can't find it right now.
On 6/28/12 6:54 AM, Tim Bell wrote:
Assuming that we use a 3-copy approach across the hypervisors, does Gluster
favour the local copy on the hypervisor if the data is on
distributed/replicated?
It would be good to
I've a simple 2-way replica volume, however its capacity utilization is
really inconsistent. I realize du and df aren't the same thing, but I'm
confused how the brick and the NFS mount are not showing the same amount
of capacity available. Underlying filesystem is XFS, and gluster volume
is
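A simple way to show the discrepancy is to compare the two views side by side (paths are hypothetical):
df -h /export/brick1 /mnt/testvol   # brick filesystem vs. the gluster mount
du -sh /export/brick1               # space actually consumed on the brick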
On 7/13/12 5:29 AM, Tomasz Chmielewski wrote:
Killing the option to use NFS mounts on localhost is certainly quite
the opposite to my performance needs!
He was saying you can't run kernel NFS server and gluster NFS server at
the same time, on the same host. There is nothing stopping you
- Original Message -
From: David Coulson da...@davidcoulson.net
To: Tomasz Chmielewski man...@wpkg.org
Cc: Rajesh Amaravathi raj...@redhat.com, Gluster General Discussion List
gluster-users@gluster.org
Sent: Friday, July 13, 2012 3:16:38 PM
Subject: Re: [Gluster-users] NFS mounts with glusterd
Your gluster brick must be a directory, not a block device. The
filesystem that directory is located on must support xattrs.
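A sketch of the distinction (device, paths and hostnames are hypothetical): format and mount the block device first, then hand gluster a directory on that filesystem:
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /export/brick1
mount /dev/sdb1 /export/brick1
gluster volume create testvol replica 2 server1:/export/brick1 server2:/export/brick1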
David
On 7/19/12 1:16 PM, Sallee, Stephen (Jake) wrote:
I am new to gluster so please be a bit patient with me.
I am trying to set up a gluster volume with the bricks
On 11/18/12 7:53 PM, Whit Blauvelt wrote:
Red Hat does not support upgrades between major versions. Thus CentOS and
Scientific don't either. That's a major part of why I generally run Ubuntu
or Debian instead, except for users who are really wedded to the Red Hat
way.
I work in an Enterprise
I would be concerned about the connections in a SYN_SENT state. It would be
helpful if this was done with the -n flag, so there is no DNS resolution and
we could see the real IPs.
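Something like this shows the state and the numeric endpoints in one go (24007 is glusterd's default port):
netstat -tpn | grep SYN_SENT
netstat -tn | grep :24007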
On 11/21/12 2:49 PM, Steve Postma wrote:
Eco,
they all appear to be using 24007 and 24009, none of them are running on 24010
or 24011.
From: David Coulson [da...@davidcoulson.net]
Sent: Wednesday, November 21, 2012 3:20 PM
To: Steve Postma
Cc: Eco Willson; gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume
I would be concerned about the connections in a SYN_SENT
Did your unused interfaces come back online?
Sent from my iPad
On Nov 30, 2012, at 8:19 PM, Pat Haley pha...@mit.edu wrote:
Hi,
I have some additional information. I have just installed gluster on
a second client and tried to mount the same volume. The first client
appeared to mount
From: David Coulson [da...@davidcoulson.net
Try making it:
mseas-data:/gdata  /gdata  glusterfs  defaults,_netdev  0 0
Otherwise it'll try to mount too early in the startup sequence.
David
On 12/3/12 8:21 PM, Pat Haley wrote:
Hi,
We have a compute cluster running CentOS 6.2 (installed via
Rocks 6.0) which
On 4/10/13 8:28 AM, Jian Lee wrote:
# cat /etc/sysconfig/iptables
# Generated by iptables-save v1.4.7 on Thu Apr 11 00:09:23 2013
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [21:1996]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A
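For comparison, the rules usually needed on a 3.3-era gluster node look roughly like this (the brick port range grows with the number of bricks, so treat the exact ranges as assumptions to verify against your own volumes):
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38467 -j ACCEPT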