Re: [Gluster-users] any opensource glusterfs cluster monitor portal??

2012-09-14 Thread 符永涛
Just a web portal to display glusterfs cluster status, for example peer, volume, brick, connected clients etc., plus an email notification interface. 2012/9/15 Lonni J Friedman > On Fri, Sep 14, 2012 at 5:26 PM, 符永涛 wrote: > > Dear gluster experts, > > > > We're now evaluating switch from moosefs to glus

[Gluster-users] any opensource glusterfs cluster monitor portal??

2012-09-14 Thread 符永涛
Dear gluster experts, We're now evaluating a switch from moosefs to glusterfs. So far glusterfs works fine, but moosefs has a CGI monitoring portal and I want to monitor the glusterfs cluster too. Leaving a large cluster unmonitored is risky. Does anyone know of an open source project to mo
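For anyone sketching such a portal: a minimal approach is to poll the gluster CLI from the portal backend and parse its text output. Below is a hypothetical parser for `gluster peer status`-style output (the `Hostname:`/`Uuid:`/`State:` field layout is an assumption based on 3.3-era GlusterFS, not something confirmed in this thread); a real portal would also poll `gluster volume status` and render the results.

```python
import re

def parse_peer_status(output):
    """Parse text output of `gluster peer status` into a list of dicts.

    Assumed field layout (GlusterFS 3.3 era):
        Hostname: node2
        Uuid: 9aab85df-...
        State: Peer in Cluster (Connected)
    """
    peers = []
    current = {}
    for line in output.splitlines():
        m = re.match(r'(Hostname|Uuid|State):\s*(.*)', line.strip())
        if m:
            current[m.group(1).lower()] = m.group(2)
            if m.group(1) == 'State':  # State is the last field per peer
                peers.append(current)
                current = {}
    return peers
```

The portal could run this against `subprocess.check_output(["gluster", "peer", "status"])` on a timer and email when any peer's state is not "Connected".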

[Gluster-users] Problems adding new server to pool

2012-09-14 Thread Thomas Indelli
Hello, I don't have a great deal of experience yet with Gluster, and I'm having some tremendous difficulties adding an additional (3rd) server to the pool I have set up. This is running Gluster 3.3.0 on RHEL 6.3, using RPMs: # rpm -qa | grep gluster glusterfs-rdma-3.3.0-1.el6.x86_64 glusterfs-s

Re: [Gluster-users] Protocol stacking: gluster over NFS

2012-09-14 Thread John Mark Walker
A note on recent history: There were past attempts to export GlusterFS client mounts over NFS, but those used the GlusterFS NFS service. I believe this is the first instance "in the wild" of someone trying this with knfsd. With the former, while there was increased performance, there would inv

Re: [Gluster-users] Protocol stacking: gluster over NFS

2012-09-14 Thread harry mangalam
Well, it was too clever for me too :) - someone else suggested it when I was describing some of the options we were facing. I admit to initially thinking that it was silly to expect better performance by stacking protocols, but we tried it and it seems to have worked. To your point: the 'clien

Re: [Gluster-users] Protocol stacking: gluster over NFS

2012-09-14 Thread Whit Blauvelt
On Fri, Sep 14, 2012 at 09:41:42AM -0700, harry mangalam wrote: > > > What I mean: > > > - mounting a gluster fs via the native client, > > > - then NFS-exporting the gluster fs to the client itself > > > - then mounting that gluster fs via NFS3 to take advantage of the > > > client-side caching.
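The three quoted steps amount to a loopback NFS re-export. A configuration sketch, assuming a volume named `gv0` served from `server1` and mount points `/mnt/gluster` and `/mnt/gluster-nfs` (all names here are illustrative, not from the thread):

```shell
# 1. Mount the volume with the native FUSE client
mount -t glusterfs server1:/gv0 /mnt/gluster

# 2. Export that mount point via knfsd to the machine itself.
#    In /etc/exports (fsid= is required when exporting a FUSE fs):
#      /mnt/gluster localhost(rw,fsid=1,no_subtree_check)
exportfs -ra

# 3. Re-mount it locally over NFSv3 to gain the kernel NFS
#    client-side cache for small writes
mount -t nfs -o vers=3 localhost:/mnt/gluster /mnt/gluster-nfs
```

This is an operational sketch of the setup being discussed, not a tested recipe; as the thread notes, stacking knfsd on a FUSE mount is unusual and worth validating carefully before production use.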

Re: [Gluster-users] Protocol stacking: gluster over NFS

2012-09-14 Thread John Mark Walker
- Original Message - > Hi Venky - thank for the link to this translator. I'll take a look > at it, but > right now, we don't have too much trouble with reads - it's the > 'zillions of > tiny writes' problem that's hosing us and the NFS solution gives us a > bit more > headroom. > > We'l

Re: [Gluster-users] Protocol stacking: gluster over NFS

2012-09-14 Thread harry mangalam
Hi Venky - thanks for the link to this translator. I'll take a look at it, but right now, we don't have too much trouble with reads - it's the 'zillions of tiny writes' problem that's hosing us and the NFS solution gives us a bit more headroom. We'll be moving this out to part of our cluster to

Re: [Gluster-users] Virtual machines and self-healing on GlusterFS v3.3

2012-09-14 Thread Pranith Kumar Karampuri
Dario, Ok, that confirms that it is not a split-brain. Could you post the getfattr output I requested as well? What is the size of the VM files? Pranith - Original Message - From: "Dario Berzano" To: "Pranith Kumar Karampuri" Cc: "" Sent: Friday, September 14, 2012 9:42:38 PM Subjec

Re: [Gluster-users] Virtual machines and self-healing on GlusterFS v3.3

2012-09-14 Thread Dario Berzano
# gluster volume heal VmDir info healed Heal operation on volume VmDir has been successful Brick one-san-01:/bricks/VmDir01 Number of entries: 259 Segmentation fault (core dumped) (same story for heal-failed) which seems to be exactly this bug: https://bugzilla.redhat.com/show_bug.cgi?id=836421

Re: [Gluster-users] problems with replication & NFS

2012-09-14 Thread Lonni J Friedman
On Thu, Sep 13, 2012 at 2:11 PM, Adam Brenner wrote: >> What's the correct way to bring up a pre-existing NFS server inside of >> glusterfs, so that its replicated to some new server? It would be >> somewhat hacky to have to write data just to get everything >> replicated. > > From what I recall -

Re: [Gluster-users] Virtual machines and self-healing on GlusterFS v3.3

2012-09-14 Thread Pranith Kumar Karampuri
hi Dario, Could you post the output of the following commands: gluster volume heal VmDir info healed gluster volume heal VmDir info split-brain Also provide the output of 'getfattr -d -m . -e hex' On both the bricks for the two files listed in the output of 'gluster volume heal VmDir info' P
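For reference, the `trusted.afr.*` attributes requested above are AFR changelogs: when each brick's copy shows non-zero counters blaming the other, the file is in split-brain. A small decoder for the `-e hex` value, under the assumption (not stated in the thread) that the value packs three big-endian 32-bit pending counters for data, metadata and entry operations:

```python
def decode_afr_xattr(hex_value):
    """Decode a trusted.afr.* value as printed by
    `getfattr -d -m . -e hex`.

    Layout assumption: 12 bytes = three big-endian 32-bit
    counters (data, metadata, entry pending operations).
    """
    raw = bytes.fromhex(hex_value.removeprefix('0x'))
    return {
        'data': int.from_bytes(raw[0:4], 'big'),
        'metadata': int.from_bytes(raw[4:8], 'big'),
        'entry': int.from_bytes(raw[8:12], 'big'),
    }
```

A value of all zeroes means no pending operations; a non-zero `data` counter on both bricks, each pointing at the other, is the classic split-brain signature.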

[Gluster-users] Virtual machines and self-healing on GlusterFS v3.3

2012-09-14 Thread Dario Berzano
Hello, in our computing centre we have an infrastructure with a GlusterFS volume made of two bricks in replicated mode: Volume Name: VmDir Type: Replicate Volume ID: 9aab85df-505c-460a-9e5b-381b1bf3c030 Status: Started Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: one-san-01

Re: [Gluster-users] A problem with gluster 3.3.0 and Sun Grid Engine

2012-09-14 Thread Dai, Manhong
Hi Avati, Good news is it seems the problem is solved after I added 'entry-timeout=0'. I will test our production script soon and keep you updated. Bad news is that mount.glusterfs doesn't recognize such an option; I had to tweak it to make it accept this option. Best, Manhong
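For readers hitting the same limitation: the FUSE entry timeout can also be set by invoking the `glusterfs` client binary directly, bypassing the `mount.glusterfs` wrapper. A sketch, with server, volume and mount-point names as placeholders:

```shell
# Invoke the client directly and pass the FUSE entry timeout,
# since mount.glusterfs (3.3.0) does not forward this option
glusterfs --volfile-server=server1 --volfile-id=gv0 \
          --entry-timeout=0 /mnt/gluster
```

This avoids patching the wrapper script, at the cost of not appearing as a normal `mount -t glusterfs` invocation in fstab.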