Try 3.3.0 - 3.2.6 has issues with NFS in general (memory leaks, etc.).

----- Original Message -----
From: Tomasz Chmielewski <man...@wpkg.org>
Date: Thursday, 12 July 2012 5:56 PM
To: Gluster General Discussion List <gluster-users@gluster.org>
Subject: [Gluster-users] NFS mounts with glusterd on localhost - reliable or not?
On 07/13/2012 02:59 PM, James Kahn wrote:
Try 3.3.0 - 3.2.6 has issues with NFS in general (memory leaks, etc.).
Upgrading to 3.3.0 would be quite a big adventure for me (production
site, lots of traffic, etc.). But I guess it would be justified if it
really fixes this bug.
----- Original Message -----
From: Tomasz Chmielewski <man...@wpkg.org>
To: James Kahn <jk...@idea11.com.au>
Cc: Gluster General Discussion List <gluster-users@gluster.org>
Sent: Friday, July 13, 2012 1:51:15 PM
Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable or not?
On 07/13/2012 05:08 PM, Rajesh Amaravathi wrote:
The issue was reported earlier, but I don't see any references that it
was fixed in 3.3.0: a deadlock happens when writing a file big enough
to fill the filesystem cache while the kernel is trying to flush it to
free some memory for
On 7/13/12 5:29 AM, Tomasz Chmielewski wrote:
Killing the option to use NFS mounts on localhost is certainly quite
the opposite to my performance needs!
He was saying you can't run kernel NFS server and gluster NFS server at
the same time, on the same host. There is nothing stopping you
On 07/13/2012 05:46 PM, David Coulson wrote:
He was saying you can't run kernel NFS server and gluster NFS server at
the same time, on the same host. There is nothing stopping you
Actually, if you want to mount *any* NFS volumes (of Gluster) or
exports (of the kernel NFS server), you cannot do it with locking on
a system where a glusterfs NFS process is running (since 3.3.0).
However, if it's OK to mount without locking, then you should be
able to do it on localhost.
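A localhost NFS mount without locking might look like the following (a sketch only: the volume name `vol_home` and the mount point are hypothetical, and the option names are the standard ones from nfs(5), not taken from this thread):

```shell
# Mount a Gluster volume over NFS on the same host that runs the
# Gluster NFS server. "nolock" skips the NLM lock manager entirely,
# avoiding the conflict described above. Requires a running Gluster
# NFS server; volume name and mount point are examples.
mount -t nfs -o vers=3,proto=tcp,nolock localhost:/vol_home /mnt/vol_home
```

Applications that rely on fcntl()/flock() locking will not get cluster-wide locks on such a mount, so this is only safe when locking is genuinely not needed.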
Regards,
Was that introduced by the same person who thought that binding to
sequential ports down from 1024 was a good idea?
Considering how hard RedHat was pushing Gluster at the Summit a week or
two ago, it seems like they're making it hard for people to really
implement it in any capacity other
I hope you do realize that two NLM implementations of the same version
cannot operate simultaneously on the same machine. I really look forward
to a solution to make this work; that'd be something.
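One way to see which NLM service currently holds the registration on a host is to query the local portmapper (standard rpcinfo(8); output varies by system):

```shell
# List RPC services registered with the local portmapper and filter
# for the NFS lock manager. Only one nlockmgr of a given version can
# be registered at a time, which is the conflict being discussed.
rpcinfo -p | grep nlockmgr
```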
Regards,
Rajesh Amaravathi,
Software Engineer, GlusterFS
RedHat Inc.
You can only mount subdirectories via NFS:

[root@login1-dev ~]# mount -t glusterfs storage0-dev.cssd.pitt.edu:/vol_home /mnt
[root@login1-dev ~]# echo $?
0
[root@login1-dev ~]# umount /mnt
[root@login1-dev ~]# mount -t glusterfs storage0-dev.cssd.pitt.edu:/vol_home/cssd /mnt
Mount failed. Please check the log file for more details.
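The NFS-side counterpart, which the Gluster NFS server does support, would be along these lines (server and volume names taken from the transcript above; the mount options are assumptions, not part of the thread):

```shell
# Mount a subdirectory of a Gluster volume over NFS. The Gluster NFS
# server allows subdirectory exports, unlike the native glusterfs
# client shown above. Options vers=3/proto=tcp are assumed defaults.
mount -t nfs -o vers=3,proto=tcp storage0-dev.cssd.pitt.edu:/vol_home/cssd /mnt
```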
Jeff Darcy <jdarcy@...> writes:
Heh. I knew there'd be a follow-up. I seriously couldn't think of anything
more useful to say before, and couldn't resist the opportunity to balance my
usual verbosity with a dose of brevity, but I'll be glad to address further
questions as best I can.
Hello,
Just thought I'd report it here: this is 3.3.0 under Ubuntu 12.04.
root@dev-storage1:~# gluster volume heal safe
Heal operation on volume safe has been successful
root@dev-storage1:~# gluster volume heal safe full
Heal operation on volume safe has been successful
root@dev-storage1:~# gluster
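To see which entries are actually pending self-heal, rather than just the status line the transcript shows, the heal "info" subcommand available in 3.3.0 can be used (volume name `safe` taken from the transcript above):

```shell
# Show per-brick lists of entries pending self-heal on volume "safe".
# Related subcommands in 3.3.0 include "info healed" and
# "info heal-failed". Requires a running glusterd and the volume.
gluster volume heal safe info
```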