On 12/01/2015 04:15 AM, Andrus, Brian Contractor wrote:
> All,
>
> I am seeing it VERY consistently that when I do a 'gluster peer status'
> or 'gluster pool list', the system 'hangs' for up to 1 minute before
> spitting back results.
>
> I have 10 nodes all on the same network and currently ZERO volumes or
> bricks configured. Just trying to get good performance.
That's interesting, 'gluster peer status' or 'pool lis
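One thing worth ruling out for a hang like this is slow name resolution or an unreachable peer, rather than Gluster itself. A minimal diagnostic sketch (the peer host names below are placeholders, not names from the thread):

```shell
# Time the CLI call; we only care about latency here, so discard output.
start=$(date +%s)
gluster peer status > /dev/null 2>&1
elapsed=$(( $(date +%s) - start ))
echo "gluster peer status took ${elapsed}s"

# Check that each peer's hostname resolves without delay
# (replace node01/node02 with your actual peer names).
for host in node01 node02; do
    getent hosts "$host" > /dev/null 2>&1 || echo "WARN: $host does not resolve"
done
```

If the timing drops once host names resolve instantly (e.g. via /etc/hosts entries), the hang was a resolution timeout rather than a glusterd problem.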
- Original Message -
> From: "Uthra R. Rao (GSFC-672.0)[ADNET SYSTEMS INC]"
> To: "Vijay Bellur"
> Cc: gluster-users@gluster.org
> Sent: Monday, November 30, 2015 11:32:33 AM
> Subject: RE: [Gluster-users] Glusterfs - Translators/Performance
>
> Thank you Vijay for taking the time to reply.
- Original Message -
> From: "Uthra R. Rao (GSFC-672.0)[ADNET SYSTEMS INC]"
> To: "Joe Julian" , gluster-users@gluster.org
> Sent: Monday, November 30, 2015 4:19:05 PM
> Subject: Re: [Gluster-users] Glusterfs - Translators/Performance
Joe,
Thank you.
That's what results you're looking for, but nothing about the workload
--- We are in the testing phase and right now there is no workload.
s/docs/wiki pages for the 3.2ish version/
--- Not sure where to look. Please send me the complete URL.
Uthra
-Original Message-
I am unable to mount the glusterfs on the client. In the client log file I am
seeing this message every time I try to mount. To troubleshoot this I set:
#gluster volume set gtower auth.allow IPAddress_client
Log messages:
[2015-11-30 20:38:43.670832] I [MSGID: 114057]
[client-handshake.c:1437
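The log line above is truncated, but one quick sanity check after changing auth.allow is to confirm the value glusterd actually stored. A sketch (the volume name "gtower" is from the thread; the client IP is a placeholder, and the command is guarded so it only runs where the CLI exists):

```shell
VOL=gtower
CLIENT_IP=192.168.122.10   # placeholder: substitute your client's address
if command -v gluster > /dev/null 2>&1; then
    # Set the option, then read back what glusterd recorded for it.
    gluster volume set "$VOL" auth.allow "$CLIENT_IP"
    gluster volume info "$VOL" | grep auth.allow
else
    echo "gluster CLI not found; run this on a server node"
fi
```

Note that auth.allow takes a comma-separated list and accepts `*` wildcards (e.g. `192.168.122.*`), so a typo in a single literal IP is easy to spot by reading the value back.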
On 11/30/2015 08:32 AM, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] wrote:
> Thank you Vijay for taking the time to reply.
>
> What we want is writes to be a bit better than the 10MBps we are seeing,
> but not as important as read, which is good at 400MBps.
>
> Is it better to store settings in /etc/glusterfs/glusterd.vol (as in the
> docs) or with "gluster set" (on command line as
That's what results you're looking for, but nothing about the workload.
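For context on the question: glusterd.vol configures the management daemon itself, not per-volume tuning. A stock file from this era looks roughly like the following (paraphrased from memory; details vary by version), whereas options applied with `gluster volume set` are persisted by glusterd and regenerated into the volume's volfiles automatically:

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option ping-timeout 0
end-volume
```

Hand edits to generated volfiles are overwritten on the next configuration change, which is why the `gluster volume set` route is generally the safer one.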
Hi!
We're looking for interested users and developers who can spend some
time setting up tests for Gluster in the CentOS CI infrastructure.
Gluster is part of the CentOS Storage SIG, and that enables us to use
the CentOS CI testing systems.
I would like to have a few volunteers that are willi
Sorry, it's 12/1/2015 (December 1, 2015).
--
Regards,
Manikandan Selvaganesh.
- Original Message -
From: "Manikandan Selvaganesh"
To: "Gluster Devel" , gluster-users@gluster.org
Sent: Monday, November 30, 2015 6:04:50 PM
Subject: [Gluster-devel] REMINDER: Gluster Community Bug Triage meeting
Hi all,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
Hello,
I tried with a 4-node setup but the effect is the same: it takes down
the cluster when one of the nodes is offline. I thought that even in a
3-node setup, when 2 nodes are online and only one is gone, the majority
of 2 nodes up vs. 1 node down should not result in a lost quorum?
I have created the g
On 11/30/2015 03:26 PM, Soumya Koduri wrote:
Hi,
> But are you telling me that in a 3-node cluster,
> quorum is lost when one of the nodes' IPs is down?
Yes. It's a limitation of Pacemaker/Corosync: if the nodes participating
in the cluster cannot communicate with the majority of them (quorum is
lost), then the cluster is shut down.
However i
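The majority rule being discussed is plain arithmetic: a cluster of n nodes keeps quorum only while strictly more than n/2 members can communicate. A minimal illustration of that rule (not tied to any particular Pacemaker configuration):

```shell
# For n=3: 2 nodes up is a strict majority (quorum held); 1 is not.
n=3
for alive in 3 2 1; do
    if [ $(( alive * 2 )) -gt "$n" ]; then
        echo "$alive of $n up: quorum held"
    else
        echo "$alive of $n up: quorum lost"
    fi
done
```

By this arithmetic, 2 of 3 nodes up should hold quorum, which matches the expectation stated in the question; the reported shutdown suggests the surviving members could not communicate with each other, not just with the failed node.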
You are hitting the NRPE payload size issue: currently NRPE supports only 1024
bytes of payload, so the payload size has to be increased. This issue is being
tracked in the Nagios tracker at http://tracker.nagios.org/view.php?id=564. In
the meantime, you can rebuild NRPE with the patch
http://tracker.na
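The 1024-byte limit is a compile-time constant (MAX_PACKETBUFFER_LENGTH in NRPE's include/common.h), and both the nrpe daemon and the check_nrpe client must be rebuilt with the same value or they cannot talk to each other. A sketch of the edit, simulated here on a stand-in copy of the header (the file contents below are a placeholder, not the real header):

```shell
# Stand-in for NRPE's include/common.h; only the constant line is real.
mkdir -p /tmp/nrpe-demo
printf '#define MAX_PACKETBUFFER_LENGTH 1024\n' > /tmp/nrpe-demo/common.h

# Raise the payload limit from 1024 to 4096 bytes before ./configure && make.
sed -i 's/MAX_PACKETBUFFER_LENGTH 1024/MAX_PACKETBUFFER_LENGTH 4096/' /tmp/nrpe-demo/common.h
cat /tmp/nrpe-demo/common.h
```

The linked patch presumably does the equivalent in a more complete way; the point of the sketch is only that the limit is baked in at build time, not configurable in nrpe.cfg.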
Hi,
I am trying to use the nagios-gluster plugin to monitor my Gluster test setup
on an Ubuntu 14.04 server.
OS: Ubuntu 14.04
Gluster version: 3.7.6
Nagios version: core 3.5.1
My current setup:
node 1 = nagios monitor server
node 2 = gluster data node with 10 bricks (172.16.5.66)
node 3 = gluster da