On Mon, Jun 25, 2001 at 04:59:22PM -0400, Peter Amstutz wrote:
: 
: The first is the inability to graph decimals between 0..1 when using a
: logarithmic scale.  Now, when an error occurs we want to be able to see
: both huge spikes and small bumps on the graph, so logarithms are the way
: to go.  However, very small events (like a couple of dropped packets) end
: up being reduced into small decimals by MRTG (which incorrectly assumes
: it is a rate, but that's another issue) and are not graphed because RRD
: doesn't understand negative logarithms when graphing.
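
(Not anything RRD or MRTG actually runs; just a quick illustration of
why those values vanish: a couple of dropped packets averaged over a
typical 300-second step becomes a fractional rate, and the base-10 log
of anything below 1 is negative, which is exactly what a log axis that
starts at the 0 line can't place.)

    import math

    # Two dropped packets averaged over a 300-second step: a tiny "rate".
    rate = 2 / 300.0
    print(rate)              # ~0.0067
    print(math.log10(rate))  # ~-2.18, i.e. below the 0 line on a log axis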

This is a biggie for us as well.  We're trying to graph ping latencies
with RRD, but big spikes (caused, for instance, by backups) completely
swamp the smaller variations at lower ping times.  What's happening
doesn't appear to be logarithmic scaling so much as simply plotting the
raw log of each value.  For true logarithmic scaling, a minimal unit
distance from the 0 line needs to be established first, and then the
log of each data point's distance, in those units, used to determine
its position on the scale.

I think.
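
Roughly what I mean, as a Python sketch (the unit choice and the
function name here are made up for illustration; RRD does nothing like
this today): take the smallest nonzero sample as the minimal unit, then
plot each point at the log of how many units it sits above the 0 line.

    import math

    def units_above_zero(value, unit):
        # Distance above the 0 line, measured as the log of how many
        # minimal units the value spans.  Values at or below one unit
        # sit on the 0 line instead of going negative.
        if value <= unit:
            return 0.0
        return math.log10(value / unit)

    # Made-up ping latencies in ms: normal jitter plus a backup-induced spike.
    samples = [0.4, 0.7, 2.0, 35.0, 900.0]
    unit = min(s for s in samples if s > 0)

    for s in samples:
        print("%8.1f ms -> %.2f units above the 0 line"
              % (s, units_above_zero(s, unit)))

With something like that, the sub-millisecond jitter and the 900 ms
spike land on the same graph without the spike flattening everything
else.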

* Philip Molter
* DataFoundry.net
* http://www.datafoundry.net/
* [EMAIL PROTECTED]
