Here are our logs when NFS is crashing:
[2011-06-10 08:54:14.900049] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID: f8851fc2,
ACCESS: NFS: 0(Call completed successfully.), POSIX: 0(Success)
[2011-06-10 08:54:14.902002] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc:
Hi Simon,
This issue is a variant of Bug 801
(http://bugs.gluster.com/show_bug.cgi?id=801).
Mercurial is accessing file /uwsgi/.hg/store/00changelog.i.a using two fds.
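That two-fd access pattern is easy to reproduce outside Mercurial. A minimal sketch (the file name and the coherence expectation are assumptions for illustration, not Gluster specifics): open the same file through two independent descriptors, write through one, read through the other.

```python
import os

def two_fd_access(path):
    """Open the same file through two independent file descriptors,
    write through one and read through the other -- the pattern that
    exposes differences between cached and direct-io read paths."""
    wfd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    rfd = os.open(path, os.O_RDONLY)
    os.write(wfd, b"revlog data")
    os.fsync(wfd)              # flush so the second fd can observe the write
    data = os.read(rfd, 1024)  # on a coherent mount this sees the new bytes
    os.close(wfd)
    os.close(rfd)
    return data
```

Whether the read through the second fd sees the fresh bytes immediately depends on the mount's direct-io / caching mode, which is exactly the policy question raised below.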
Based on default policy of glusterfs whether to use direct-io-mode or
We'll need the crash stack trace also.
Christopher Anderlik wrote:
Here are our logs when NFS is crashing:
[2011-06-10 08:54:14.900049] D [nfs3-helpers.c:2424:nfs3_log_common_res]
0-nfs-nfsv3: XID: f8851fc2, ACCESS: NFS: 0(Call completed
successfully.), POSIX: 0(Success)
[2011-06-10
Dear community,
I have a 2-node gluster cluster with one replicated volume shared to a
client via NFS. If the replication link (Ethernet crossover cable)
between the Gluster nodes breaks, I discovered that my whole storage is
not available anymore.
I am using Pacemaker/corosync with two
Hello,
I'm running some experiments on Grid'5000 with GlusterFS 3.2 and, as a
first point, I've been unable to start a volume featuring 128 bricks (64 bricks work).
Then, due to the round-robin scheduler, as the number of nodes increases
(every node is also a brick), the performance of an application on
Hello,
I have a PHP web application that uses Gluster to store its files.
There are a few areas of the application that perform multiple
operations on small to medium size files (such as .jpg and .pdf files)
in quick succession. My dev environment does not use Gluster and had
no problems - but
Hi All,
I upgraded glusterfs 2.0.8 to glusterfs 3.2 on CentOS
5.6 (2.6.18-238.12.1.el5xen x86_64 GNU/Linux). We have three glusterfs
servers, using replicated volumes.
We use version 2.0.8's configuration files, and they work for us. But there
is a problem: clients can't read some
Can you please share NFS and brick logs from the duration of the link going
down? Gluster should have worked in the situation you described.
Avati
On Fri, Jun 10, 2011 at 3:27 PM, Daniel Manser dan...@clienta.ch wrote:
Dear community,
I have a 2-node gluster cluster with one replicated
Hi Francois,
Answers inline.
On Wed, Jun 8, 2011 at 6:10 PM, Francois THIEBOLT thieb...@irit.fr wrote:
Hello,
I'm running some experiments on Grid'5000 with GlusterFS 3.2 and, as a
first point, I've been unable to start a volume featuring 128 bricks (64 bricks work).
This looks similar to the bug
Hi Shehjar,
Do these logs help you?
If you need further information, just tell me.
Thanks,
Christopher
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
Can you please share NFS and brick logs from the duration of the link
going down? Gluster should have worked in the situation you
described.
Brick log on gluster1:
[2011-06-10 13:12:08.57634] W [socket.c:204:__socket_rwv]
0-tcp.vmware-server: readv failed (Connection timed out)
[2011-06-10
On Wednesday 08 June 2011 06:10 PM, Francois THIEBOLT wrote:
Hello,
I'm running some experiments on Grid'5000 with GlusterFS 3.2 and, as a
first point, I've been unable to start a volume featuring 128 bricks (64 bricks work).
Then, due to the round-robin scheduler, as the number of nodes increases
(every
Do you find anything in the client logs?
On Fri, Jun 10, 2011 at 3:20 AM, Alan Zapolsky a...@droptheworld.com wrote:
Hello,
I have a PHP web application that uses Gluster to store its files.
There are a few areas of the application that perform multiple
operations on small to medium size
Lock translator cleans up all locks held by a client when it disconnects. It
is the client's responsibility to re-acquire locks which it had held before
disconnection.
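That client-side responsibility can be sketched in a few lines of POSIX locking code; the path and helper names here are hypothetical, and a real client would also need to handle another process having taken the lock in the meantime.

```python
import fcntl
import os

def acquire_lock(path):
    """Open the file and take an exclusive POSIX record lock on it."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    fcntl.lockf(fd, fcntl.LOCK_EX)
    return fd

def reacquire_after_disconnect(old_fd, path):
    """After a disconnect the server has dropped our locks, so the old
    fd's lock state is gone; close it and lock the file again from scratch."""
    try:
        os.close(old_fd)
    except OSError:
        pass  # the fd may already be invalid after the disconnect
    return acquire_lock(path)
```

The key point, matching the explanation above, is that nothing on the server side restores the lock: the application (or the client stack on its behalf) must detect the reconnect and call the lock path again itself.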
Avati
On Tue, Jun 7, 2011 at 12:36 PM, Cheng Shao zchs...@gmail.com wrote:
Hi everyone,
I just want to understand how the
As so far I have 15 servers feeding data to my four node Gluster-FS
(cluster), at this current time all of my scripts are faceted to mount
the volume and move data into its directory, only issue I see with
this is that half of these clients should not be able to go one level
back from its 'home'
Hi,
I'm seeing this warning a *lot* in my logs. This is on 3.1.3 running
dist-repl on 4 servers, i.e.,
[2011-06-10 17:06:08.326245] W [server-resolve.c:565:server_resolve]
0-glustervol1-server: pure path resolution for /production/seed/env/boot
(OPEN)
[2011-06-10 17:06:08.327092] W
Hi Amar.
Is there a projected release date for 3.1.5 and 3.2.1?
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Burnash, James
Sent: Friday, May 27, 2011 1:42 PM
To: 'Amar Tumballi'
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Files
On Jun 10, 2011, at 1:11 PM, Burnash, James wrote:
Hi Amar.
Is there a projected release date for 3.1.5 and 3.2.1?
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Burnash, James
Sent: Friday, May 27, 2011 1:42 PM
To: 'Amar Tumballi'
Hi Jeff and Gluster users.
Question about inconsistent looking attributes on brick directories on my
Gluster backend servers.
http://pastebin.com/b964zMu8
What stands out here is that the two original servers (jc1letgfs17 and 18) only
show attributes of 0x for every