Hi Philip,
When this happens, could you post a statedump of the process so we can see
what is causing the memory usage?
Steps to grab a statedump of the process:
1) send SIGUSR1 to the glusterfs process: kill -USR1 <pid>
2) the dump file is written under /tmp/glusterdump.
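As a sketch of the two steps above (the process name and dump path are assumptions based on a typical glusterfs 3.x setup; adjust the pgrep pattern to whichever process is actually growing):

```shell
# Pick the glusterfs process whose memory usage is growing (assumed name).
PID=$(pgrep -o -x glusterfs)
# SIGUSR1 asks the process to write a statedump.
kill -USR1 "$PID"
# The dump appears shortly afterwards under /tmp, named after the pid.
sleep 1
ls -l /tmp/glusterdump."$PID"*
```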
Pranith.
- Original Message -
From: "Philip Poten"
To: gluster-users@
Hello Philippe,
Could you please provide the following information:
a. GlusterFS version
b. volume type
c. does this recur every time you try to disable NFS?
Also, if one sets nfs.disable to "on", the NFS process itself goes down and the
nfs.log should report "shutti
On 07/06/12 14:34, Toby Corkindale wrote:
Hi,
I'm trying to find official documentation that describes the procedure
for recovering from a split-brain situation with replicated volumes.
I can find various posts on the mailing list that refer to the version
2.x series, but nothing good for 3.x.
Hi,
Thank you for your help. The permission problem has been solved by changing the
version to 3.3.
But there is a new problem. I created two volumes. The volume TEST works
fine, but when I mount the volume EASEGFS through NFS, I cannot see
anything.
They work ok under the glusterfs a
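For reference, a mount of the EASEGFS volume over NFS would look roughly like the following; gluster's built-in NFS server speaks NFSv3 over TCP only, so those options matter (the server name and mount point here are made up):

```shell
# vers=3 and proto=tcp are required by the gluster NFS server.
mount -t nfs -o vers=3,proto=tcp gfs-server:/EASEGFS /mnt/easegfs
```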
Hi Amaravathi,
The version I used is 3.2.5.
lihang
From: Rajesh Amaravathi
Date: 2012-05-31 14:09
To: lihang
CC: gluster-users
Subject: Re: [Gluster-users] GlusterFS permission problem
which version of glusterfs are you using?
Regards,
Rajesh Amaravathi,
Software Engineer, GlusterFS
RedHa
Hi all,
I found a strange problem.
I set up GlusterFS between Linux and Windows using NFS and LDAP.
The Windows client mounted the volume successfully, but there are some
problems with permissions.
I can create a file and edit it successfully on the volume from Windows. But when I
create a file in the ap
For what it is worth, I had weird performance issues when I moved from
3.2.5 to 3.3.0 - I saw increased CPU utilization, as well as drastically
increased network utilization between the nodes with the same workload.
I could never really quantify the difference, other than I noticed my
systems m
Hi,
we're running a distributed-replicated setup for our images, and while
we use a caching proxy for the hotset, quite a few requests land on
glusterfs (3.2.6 on squeeze). Since the glusterfs FUSE client experiences
regular hangs which require reboots (I couldn't yet find a solution to
that), we run o
On 06/11/2012 05:52 PM, Fernando Frediani (Qube) wrote:
I was doing some reading on the Red Hat website and found this URL, which makes
me wonder if the problem has anything to do with it:
http://docs.redhat.com/docs/en-US/Red_Hat_Storage_Software_Appliance/3.2/html/User_Guide/ch14s04s08.html
Although b
I have the following appended to gluster logs at around 100kB of logs per
second, on all 10 gluster servers:
[2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch]
0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031;
disk layout - 930576244 - 966367637
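Layout mismatches like the one logged above are usually repaired by recalculating directory layouts with a fix-layout rebalance. This is a hedged suggestion, and the volume name "sites" is inferred from the "0-sites-dht" log prefix:

```shell
# Recompute directory layout ranges across the bricks without migrating data.
gluster volume rebalance sites fix-layout start
# Watch progress.
gluster volume rebalance sites status
```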
Hi,
I have a situation where I'm mounting a gluster volume on several web servers
via NFS. The web servers run Rails applications off the gluster NFS mounts. The
whole thing is running on EC2.
On 3.2.5, starting a Rails application on the web server was sluggish but
acceptable. However, after
On Mon, Jun 11, 2012 at 09:50:50AM +0100, Brian Candler wrote:
> However, when I brought dev-storage2 back online, "gluster volume info" on
> that node doesn't show the newly-created volume.
FYI, this is no longer a problem - I left the servers for a while, and after
I came back, they had synchron
Hi Christian,
In theory it should work, but the ability to properly run VMs on Gluster is
relatively new, due to the improvements in granular healing, so I don't
think it has been extensively tested.
I wasn't able to find anyone using it in production; those I heard from are
using it for testing.
I was doing some reading on the Red Hat website and found this URL, which makes
me wonder if the problem has anything to do with it:
http://docs.redhat.com/docs/en-US/Red_Hat_Storage_Software_Appliance/3.2/html/User_Guide/ch14s04s08.html
Although both servers and client are 64-bit, I wonder if somehow this cou
On 06/08/2012 09:16 PM, Gerald Brandt wrote:
Hi,
I created a test volume, deleted it, and can not re-create it.
# gluster volume create nfstest replica 2 transport tcp nfstest1:/nfstest
nfstest2:/nfstest
# gluster volume delete nfstest
# gluster volume create nfstest replica 2 transport tcp nf
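If the re-create fails with a "brick ... already part of a volume" style error (an assumption here, since the error output is cut off), the usual cause in 3.x is the extended attributes the deleted volume left on the brick directories. A sketch of clearing them, run on each brick server against the brick paths from the commands above:

```shell
BRICK=/nfstest
# Remove the volume-id and gfid xattrs the old volume stamped on the brick root.
setfattr -x trusted.glusterfs.volume-id "$BRICK"
setfattr -x trusted.gfid "$BRICK"
# Drop gluster's internal metadata directory as well.
rm -rf "$BRICK/.glusterfs"
```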
Hi,
I tried to disable the NFS service for a volume:
gluster volume set distributed nfs.disable on
It stops the NFS service:
[2012-06-11 11:05:30.142564] I
[glusterd-utils.c:1003:glusterd_service_stop] 0-: Stopping gluster nfs
running in pid: 15490
But since I disabled NFS, I get this error mes
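The error message above is cut off, but one way to check what state things are in after the change (3.3-era CLI commands; "distributed" is the volume name used above):

```shell
# Confirm the option was recorded on the volume.
gluster volume info distributed | grep nfs.disable
# Note: the gluster NFS server process is shared across volumes, so check
# whether it is still expected to run for the other volumes.
gluster volume status distributed nfs
```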
(glusterfs 3.3.0, ubuntu 12.04)
I have two nodes in my test setup: dev-storage1 and dev-storage2.
While dev-storage2 was powered down, I added a new volume "single1" on
dev-storage1 (using only bricks on dev-storage1).
However, when I brought dev-storage2 back online, "gluster volume info" on
th