On Mon, Sep 08, 2008 at 04:17:20PM -0700, Bernard Li wrote:
On Mon, Sep 8, 2008 at 4:12 PM, Lee Amy [EMAIL PROTECTED] wrote:
Thank you all. My cluster runs CentOS 4, and your description is really
clear. Thank you very much!
In the future you probably want to hit 'reply-all' when responding, so that your replies also reach the list.
On Mon, Sep 08, 2008 at 01:42:16PM -0500, Ryan Robertson wrote:
I too am having trouble getting the gmond collector to report data about
itself.
I presume, based on the subject, that you are referring to some other
report of a ganglia 3.1 gmond not being able to get its own data, but
the behaviour
The Ganglia Project (http://ganglia.info) is pleased to announce the
official release of Ganglia 3.1.1. The official tarball is available for
immediate download at:
http://sourceforge.net/project/showfiles.php?group_id=43021&package_id=35280&release_id=625044
For a full description of the bug fixes in this release, please see the release notes.
Some questions regarding upgrading:
1. Can gmond 3.1.1 nodes coexist compatibly in the same cluster with
gmond 3.1.0 nodes?
2. Can a gmetad 3.1.1 use gmond 3.1.0 nodes as data sources?
Can a gmetad 3.1.0 use gmond 3.1.1 nodes as data sources?
-- Cos
On Tue, Sep 09, 2008 at 01:53:43PM -0400, Ofer Inbar wrote:
Some questions regarding upgrading:
I was going to realign the release page to make that a little more obvious,
but it was a little late to do any major changes to it, which is why it is
not explicitly there.
1. Can gmond 3.1.1 nodes
My goal was to have multiple nodes reporting to a central location
(10.50.54.31) that is also running gmond and reporting info on itself as
well. To accomplish this, wouldn't I configure the clients that will be
sending data with something to this effect:
--
/* Feel free to
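(The quoted gmond.conf is cut off above.) Purely as an illustration, a
minimal client-side fragment for this kind of setup might look like the
following; the cluster name and port are assumptions, and 10.50.54.31 is
the collector address from this thread:

    /* Hypothetical client-side gmond.conf sketch (not the poster's actual
       file); cluster name and port are assumptions. */
    cluster {
      name = "my-cluster"        /* must be identical on every node */
    }
    udp_send_channel {
      host = 10.50.54.31         /* unicast metrics to the collector */
      port = 8649
    }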
Hi
I am testing ganglia on a Linux cluster, but we are getting these
confusing peaks in the bytes/s and packets/s graphs (image attached).
I just looked into the Linux code and compared it with the unix code
(libmetrics/.../metrics.c); it looks like the unix code has more thought
behind it.
So my
Hi Roger:
On Tue, Sep 9, 2008 at 11:44 AM, Escobio, Roger [EMAIL PROTECTED] wrote:
I am testing ganglia on a Linux cluster, but we are getting these
confusing peaks in the bytes/s and packets/s graphs (image attached).
Looks like the image didn't make it.
Thanks,
Bernard
Yes it did, but it was blocked by mailman, so I cancelled the message.
Here is the same image but smaller
I made the change in the Linux code (imported the counterdiff function) and
I am testing it now.
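For illustration only, a wraparound-safe counter diff in the spirit of the
counterdiff function mentioned above might look like this; the name,
signature, and wrap limit are assumptions rather than ganglia's actual code:

    /* Hypothetical sketch of a wraparound-safe counter diff.  Returns the
       increase between two samples of a monotonically increasing counter,
       tolerating a single wrap at `max` (e.g. UINT32_MAX for the 32-bit
       counters read from /proc/net/dev). */
    #include <stdint.h>

    uint64_t counter_diff(uint64_t prev, uint64_t curr, uint64_t max)
    {
        if (curr >= prev)
            return curr - prev;            /* normal case: no wrap */
        return (max - prev) + curr + 1;    /* counter wrapped past max */
    }

Without such a guard, a wrapped counter produces one enormous spurious
diff, which is exactly the kind of bogus spike that shows up in the
bytes/s and packets/s graphs.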
thanks
Roger Pena Escobio
GFI Grid SA Support team
work phone: 905 212
I am testing ganglia on a Linux cluster, but we are getting these
confusing peaks in the bytes/s and packets/s graphs (image attached).
I have been able to minimize this significantly by using code from svn trunk
and building with
make CPPFLAGS=-DREMOVE_BOGUS_SPIKES
IMHO, that should be the default.
On Tue, Sep 09, 2008 at 01:07:10PM -0500, Ryan Robertson wrote:
My goal was to have multiple nodes reporting to a central location
(10.50.54.31) that is also running gmond and reporting info on itself as well.
then you need all gmonds configured with the same cluster name and set up to
use unicast
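Sketching the collector side of such a unicast setup, under the same
assumptions as the client fragment earlier in the thread (the port is
assumed; deaf and mute are spelled out for clarity):

    /* Hypothetical collector-side gmond.conf sketch for 10.50.54.31. */
    globals {
      deaf = no          /* listen to and aggregate the other nodes */
      mute = no          /* also report this host's own metrics */
    }
    udp_send_channel {
      host = 127.0.0.1   /* feed the collector's own metrics to itself */
      port = 8649
    }
    udp_recv_channel {
      port = 8649        /* matches the clients' udp_send_channel */
    }
    tcp_accept_channel {
      port = 8649        /* gmetad polls this channel for the cluster XML */
    }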
Hello all,
I just installed ganglia on our cluster and I was wondering if there is an easy
way to change the y-axis on the load graphs to percentages instead of m?
Ganglia rocks btw :)
~Mike
Michael Henderson [EMAIL PROTECTED] wrote:
Subject: [Ganglia-general] cpu load percentages instead of 100m?
I just installed ganglia on our cluster and I was wondering if there
is an easy way to change the y-axis on the load graphs to
percentages instead of m?
Your request is ambiguous: There are cpu metrics, and system load metrics, but there's no such thing as cpu load metrics. These are different sets of data and mean different things.
Hi Michael:
On Tue, Sep 9, 2008 at 4:39 PM, Ofer Inbar [EMAIL PROTECTED] wrote:
Your request is ambiguous: There are cpu metrics, and system load
metrics, but there's no such thing as cpu load metrics. These are
different sets of data and mean different things.
cpu metrics are already expressed as percentages; the load averages on the load graphs are not percentages at all, and the 'm' suffix is just the SI prefix for milli, so 100m means a load of 0.1.
Hi Michael:
On Tue, Sep 9, 2008 at 8:31 PM, Michael Henderson [EMAIL PROTECTED] wrote:
Well, in case I didn't mention it before, I'm a ganglia newb. I was
mistaken about which graphs were showing what... All the graphs show
exactly what I need them to, lol... of course! MAJOR props to the ganglia team!