Federico, thanks again for your insight. My question about gmetad being up
to date became irrelevant in the context of your answer about gmetad not
catching up. Since it only ever pulls the latest cluster state and doesn't
try to fill in the gaps, it is always up to date, as you pointed out.

Has any thought been given to making the tracking-over-time feature robust
in the way gmond is?

I'm asking in the context of a project we're considering: bolting an
alerting daemon onto Ganglia. Originally I thought it would make sense to
build it like the web front end: read the gmetad database and scan it for
problems (roughly as in the sketch below). Then I realized that certain
kinds of problems, like disk errors, are transient. The case I'm concerned
about is a disk error that occurs at a moment when gmetad isn't pulling
cluster state, and so never gets noticed. That approach seems to work well
for metrics that aren't transient, like application or node status, but for
metrics that are essentially events, another collection method probably
makes more sense.
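
For non-transient metrics, I picture that first approach looking roughly
like the sketch below (the RRD path, metric name, and threshold are
placeholders, and it assumes the alerting daemon can read gmetad's RRD
files locally and has rrdtool on its PATH):

# Sketch: scan the RRDs that gmetad maintains and test the newest value.
import subprocess

RRD = "/var/lib/ganglia/rrds/mycluster/node01/disk_free.rrd"  # placeholder path
THRESHOLD = 1024.0                                            # placeholder limit

def latest_value(rrd_path):
    """Return the most recent non-NaN value recorded in an RRD file."""
    out = subprocess.run(["rrdtool", "fetch", rrd_path, "AVERAGE"],
                         capture_output=True, text=True, check=True).stdout
    # "rrdtool fetch" prints "timestamp: value ..." lines; walk backwards
    # to the newest sample that is not NaN.
    for line in reversed(out.splitlines()):
        if ":" not in line:
            continue
        fields = line.split(":", 1)[1].split()
        if fields and fields[0].lower() not in ("nan", "-nan"):
            return float(fields[0])
    return None

value = latest_value(RRD)
if value is not None and value < THRESHOLD:
    print("ALERT: disk_free below threshold on node01:", value)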

We could instead build the alert engine the way gmetad itself works: pull
just the latest cluster state from some cooperating gmond and apply
condition tests to it (rough sketch below). Any comment on which approach
is better?
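
To make that second approach concrete, here is roughly what I have in mind
(again just a sketch: 8649 is gmond's default TCP port, the host, metric
name, and threshold are placeholders, and it assumes gmond's XML dump uses
HOST and METRIC elements with NAME/VAL attributes):

# Sketch: pull the latest cluster state from a gmond and apply simple
# condition tests to it, much as gmetad itself collects data.
import socket
import xml.etree.ElementTree as ET

GMOND_HOST = "gmond.example.com"   # placeholder: any cooperating gmond
GMOND_PORT = 8649                  # gmond's default TCP port

def fetch_cluster_xml(host, port):
    """gmond writes its full cluster state as XML to any TCP client."""
    chunks = []
    with socket.create_connection((host, port), timeout=10) as sock:
        while True:
            data = sock.recv(8192)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def failing_hosts(xml_bytes, metric_name, threshold):
    """Yield (host, value) pairs whose metric is below the threshold."""
    root = ET.fromstring(xml_bytes)
    for host in root.iter("HOST"):
        for metric in host.iter("METRIC"):
            if metric.get("NAME") == metric_name:
                value = float(metric.get("VAL"))
                if value < threshold:
                    yield host.get("NAME"), value

for host, value in failing_hosts(fetch_cluster_xml(GMOND_HOST, GMOND_PORT),
                                 "disk_free", 1024.0):
    print("ALERT:", host, "disk_free =", value)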

Is there an open-source implementation of LogCaster- or TNT Elm-type
functionality for Linux? How do you find out when a disk error occurs?

Jonathan

-----Original Message-----
From: Federico Sacerdoti [mailto:[EMAIL PROTECTED]
Sent: Friday, December 06, 2002 1:22 PM
To: [EMAIL PROTECTED]
Cc: [email protected]
Subject: Re: [Ganglia-developers] One more question


I'll try to answer all of these.

On Thursday, December 5, 2002, at 09:23 PM, [EMAIL PROTECTED] 
wrote:

> Federico & Steven, I really appreciate your thoughts about the
> Ganglia front-end architecture.
>
> I have one more question. Is gmetad robust? If I've got this right, 
> gmond maintains only the latest metric values received for the
> cluster. If all of the gmetads go down, aren't all the values during 
> that time period lost forever?

If a gmetad goes down, it stops recording metric value history. When it
comes back up, the downtime will show as a gap in the graphs.

> If at least one gmetad stays up, then when others come up and pull the 
> xml description from the gmetad that survived, will they merge all of 
> the values missing from their own rrd?

This does not happen. Gmetads are not robust the way gmonds are. They
do not attempt to "bring newcomers up to date" as gmond does. This has 
to do with security: how do we know you deserve the old data? With 
gmond, the security is implicit in being part of the multicast channel.

Also, the RRDs are very timestamp-sensitive. Even if we did give a
recovering gmetad data for its gaps, small clock skews would make the
graphs look terrible. Not that we couldn't overcome this with careful
engineering. Our assumption is that gmetads are running on dedicated
monitoring hardware that is hand-administered and possibly redundant. If a
gmetad goes down, an operator can copy the rrd files from a surviving
gmetad to fill in the gaps. In practice, however, gaps are not that big a
deal and don't degrade performance or correctness the way a gmond failure
does.
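
To illustrate the timestamp sensitivity (a toy example; it assumes rrdtool
is installed, and the file name, step, and values are arbitrary): an RRD
that has already been updated will refuse a sample with an older timestamp,
which is part of why back-filling individual samples is not straightforward:

# Toy demonstration: RRDs reject out-of-order updates.
import subprocess, time

subprocess.run(["rrdtool", "create", "demo.rrd", "--step", "15",
                "DS:sum:GAUGE:120:U:U", "RRA:AVERAGE:0.5:1:240"], check=True)

now = int(time.time())
subprocess.run(["rrdtool", "update", "demo.rrd", "%d:1.0" % now], check=True)

# This second sample is 60 seconds in the past; rrdtool rejects it with
# "illegal attempt to update using time ... when last update time is ...".
subprocess.run(["rrdtool", "update", "demo.rrd", "%d:2.0" % (now - 60)])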

> If so, how do you know at any given time whether a particular gmetad 
> is up to date?

A gmetad always makes graphs based on fresh data. If it is drawing
anything at the recent (right-hand) edge of a graph, it is up to date;
otherwise it is dead. If there are gaps in the graph, it means the gmetad
was down for that period of history. I may be misunderstanding your
question here.

> What advice would you give in terms of the gmetad-to-gmond ratio? For 
> maximum redundancy, should every node run both gmond and gmetad?

Since keeping metric history with RRD databases is computation- and
I/O-intensive, I would not suggest this. We keep a gmetad service running
on the frontend node of a cluster; that is, one gmetad per cluster.
>
> Jonathan
>
>

Hope this helps,
Federico

Rocks Cluster Group, Camp X-Ray, SDSC, San Diego
GPG Fingerprint: 3C5E 47E7 BDF8 C14E ED92  92BB BA86 B2E6 0390 8845
