Just a web portal to display glusterfs cluster status, for example peers,
volumes, bricks, connected clients, etc.
An email notification interface as well.
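As a stopgap, most of that information can be pulled straight from the gluster
CLI; a minimal sketch (the volume name is a placeholder, and the status
sub-commands assume GlusterFS 3.3 or later):

# peers in the trusted storage pool
gluster peer status
# volume layout, bricks and options
gluster volume info myvol
# per-brick status and the clients connected to each brick
gluster volume status myvol detail
gluster volume status myvol clients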
2012/9/15 Lonni J Friedman
> On Fri, Sep 14, 2012 at 5:26 PM, 符永涛 wrote:
> > Dear gluster experts,
> >
> > We're now evaluating switch from moosefs to glus
Dear gluster experts,
We're now evaluating a switch from moosefs to glusterfs. So far glusterfs
works fine, but moosefs has a CGI monitoring portal and I want to monitor the
glusterfs cluster too. Leaving a large cluster without monitoring is risky.
Does anyone know of an open source project to mo
Hello,
I don't have a great deal of experience yet with Gluster, and I'm having
tremendous difficulty adding an additional (3rd) server to the pool I have
set up. This is running Gluster 3.3.0 on RHEL 6.3, using RPMs:
# rpm -qa | grep gluster
glusterfs-rdma-3.3.0-1.el6.x86_64
glusterfs-s
A note on recent history:
There were past attempts to export GlusterFS client mounts over NFS, but those
used the GlusterFS NFS service. I believe this is the first instance "in the
wild" of someone trying this with knfsd.
With the former, while there was increased performance, there would inv
Well, it was too clever for me too :) - someone else suggested it when I was
describing some of the options we were facing. I admit to initially thinking
that it was silly to expect better performance by stacking protocols, but we
tried it and it seems to have worked.
To your point:
the 'clien
On Fri, Sep 14, 2012 at 09:41:42AM -0700, harry mangalam wrote:
> > > What I mean:
> > > - mounting a gluster fs via the native client,
> > > - then NFS-exporting the gluster fs to the client itself
> > > - then mounting that gluster fs via NFS3 to take advantage of the
> > > client-side caching.
- Original Message -
> Hi Venky - thanks for the link to this translator. I'll take a look
> at it, but
> right now, we don't have too much trouble with reads - it's the
> 'zillions of
> tiny writes' problem that's hosing us and the NFS solution gives us a
> bit more
> headroom.
>
> We'l
Hi Venky - thanks for the link to this translator. I'll take a look at it, but
right now, we don't have too much trouble with reads - it's the 'zillions of
tiny writes' problem that's hosing us and the NFS solution gives us a bit more
headroom.
We'll be moving this out to part of our cluster to
Dario,
OK, that confirms that it is not a split-brain. Could you post the getfattr
output I requested as well? What is the size of the VM files?
Pranith
- Original Message -
From: "Dario Berzano"
To: "Pranith Kumar Karampuri"
Cc: ""
Sent: Friday, September 14, 2012 9:42:38 PM
Subjec
# gluster volume heal VmDir info healed
Heal operation on volume VmDir has been successful
Brick one-san-01:/bricks/VmDir01
Number of entries: 259
Segmentation fault (core dumped)
(same story for heal-failed) which seems to be exactly this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=836421
On Thu, Sep 13, 2012 at 2:11 PM, Adam Brenner wrote:
>> What's the correct way to bring up a pre-existing NFS server inside of
>> glusterfs, so that it's replicated to some new server? It would be
>> somewhat hacky to have to write data just to get everything
>> replicated.
>
> From what I recall -
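For what it's worth, one common approach on 3.3 is to add the new server's
brick as a replica and then trigger a full self-heal, which copies the existing
data across; a sketch with hypothetical volume, host and brick names:

# turn the existing single-brick volume into a replica pair
gluster volume add-brick myvol replica 2 newserver:/bricks/myvol
# crawl the whole volume and sync existing files to the new brick
gluster volume heal myvol full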
Hi Dario,
Could you post the output of the following commands:
gluster volume heal VmDir info healed
gluster volume heal VmDir info split-brain
Also provide the output of 'getfattr -d -m . -e hex' on both the bricks for the
two files listed in the output of 'gluster volume heal VmDir info'.
P
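In case it helps, the invocation on each brick would look something like this
(the brick path is taken from your earlier output; the file path is a
placeholder for one of the files reported by heal info):

# run the same command on both bricks against the same file
getfattr -d -m . -e hex /bricks/VmDir01/path/to/vm-image-file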
Hello,
in our computing centre we have an infrastructure with a GlusterFS volume
made of two bricks in replicated mode:
Volume Name: VmDir
Type: Replicate
Volume ID: 9aab85df-505c-460a-9e5b-381b1bf3c030
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: one-san-01
Hi Avati,
The good news is that the problem seems to be solved after I added
'entry-timeout=0'. I will test our production script soon and keep you updated.
The bad news is that mount.glusterfs doesn't recognize such an option, so I had
to tweak it to make it accept this option.
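For what it's worth, one way to avoid patching mount.glusterfs is to call the
glusterfs client binary directly, which accepts the option on its command line;
a sketch with placeholder server, volume and mount-point names:

# equivalent of a FUSE mount with the entry cache timeout set to 0
glusterfs --volfile-server=server1 --volfile-id=myvol --entry-timeout=0 /mnt/myvol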
Best,
Manhong