Hi David,

Thanks for the input, and yes, this is indeed correct. HOWEVER, I am now storing both COUNTER and GAUGE data sources (recommended by Paul) for incoming and outgoing traffic in my RRD. Needless to say, sitting with a 16MB RRD file for one single port is not very efficient, seeing that I will soon be monitoring well over 1,000 virtual ports. Do the maths, and that puts me at over 10GB of databases alone...
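(As an aside on the size: RRD stores every row as an 8-byte double per data source, so a file weighs in at roughly total-rows * DS-count * 8 bytes plus a small header. If I can drop the duplicated GAUGE sources and keep only the COUNTERs, even a classic MRTG-style layout - the DS names and RRA resolutions below are illustrative, not my real config - stays tiny per port:

  rrdtool create port.rrd --step 300 \
    DS:in:COUNTER:600:0:U \
    DS:out:COUNTER:600:0:U \
    RRA:AVERAGE:0.5:1:600 \
    RRA:AVERAGE:0.5:6:700 \
    RRA:AVERAGE:0.5:24:775 \
    RRA:AVERAGE:0.5:288:797

That is 2,872 rows * 2 DS * 8 bytes, about 45KB per port, or roughly 45MB for 1,000 ports.)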
In either event, I do realise I need to "count everything together". In the same breath, I did notice that one of the older RRD versions had a SUM() function, but this has been removed from the CDEF language - god knows why... So I suppose my question now is: *how* do I count everything together in a CDEF? I've tried something like:

  CDEF:Total=PREV(In),In,+

but needless to say, I still get the "curve". I also tried using a normal GAUGE to graph the volume of data transferred, but this is *also* wrong, seeing that the GAUGE will graph the actual number in the database, and not the SUM() of the numbers in the database. Thus, should a traffic counter stand at 100MB and that counter on the interface reset for whatever reason, RRD will graph 100MB and then graph 0MB - which is wrong (100MB was used, and when it's used, it's used - period).

What should I do without a SUM() function?
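One approach I have been meaning to try - completely untested on my side, and the file name, the DS called "in", and the 300-second step below are placeholders for my real setup - is to carry the running total forward with PREV, multiplying each rate sample by the step length first:

  rrdtool graph total.png --start -86400 \
    DEF:in=port.rrd:in:AVERAGE \
    CDEF:bytes=in,300,* \
    CDEF:total=PREV,UN,0,PREV,IF,bytes,+ \
    AREA:total#00CC00:"cumulative bytes in"

The idea: "in,300,*" turns each bytes/sec sample into bytes-per-interval, and "PREV,UN,0,PREV,IF" seeds the total with 0 on the first point (where PREV is unknown), so the line can only ever climb - effectively accumulating the area under the curve point by point.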
--
me

----- Original Message -----
From: "David Lovy" <[EMAIL PROTECTED]>
To: "Chris Knipe" <[EMAIL PROTECTED]>
Sent: Monday, April 21, 2003 2:39 PM
Subject: Re: [rrd-users] *shrugs*

> Morning Chris...
>
> I think you're on to something... showing total traffic really requires
> measuring the area under the curve, not just the highest value divided
> by the time. Getting the area under the curve means taking the sum of
> the samples and multiplying by the length of time for each sample, or:
> sum of samples / number of samples * total duration of samples. More
> scientifically:
>
> Definitions:
> t1 = time of first sample of the range you're interested in.
> t2 = time of second sample of the range you're interested in.
> tn = time of last sample of the range you're interested in.
> n  = number of samples between t1 and tn (excluding t1 itself).
>
> Total bytes for range = sum((bytes/sec of sample t2) .. (bytes/sec of sample tn)) / n * (tn - t1) sec
>
> Here's a quick example using simple dummy data:
>
> 10:00 - 3 MB/sec (this is for the previous sample and doesn't count)
> 10:05 - 4 MB/sec
> 10:10 - 2 MB/sec
> 10:15 - 3 MB/sec
> 10:20 - 2 MB/sec
>
> Total bytes from 10:00 to 10:20 = (4 + 2 + 3 + 2) / 4 * 1200 = 3,300 MB
> Total bytes = 3.3 GB!
>
> This can be simplified if any of the RRAs in RRDTool are already
> configured for the intervals you want, i.e. for any point in rrdtool,
> the total bytes is simply the rate (bytes/sec) times the number of
> seconds in the interval:
>
> If the 5-minute average rate is 2MB/sec, the total bytes for those 5 minutes is 2MB * 5 * 60 == 600MB.
> If the 30-minute average rate is 2MB/sec, the total bytes for those 30 minutes is 2MB * 30 * 60 == 3.6GB.
> If the 2-hour average rate is 2MB/sec, the total bytes for those 2 hours is 2MB * 120 * 60 == 14.4GB.
> If the daily average rate is 2MB/sec, the total bytes for that day is 2MB * 60 * 60 * 24 == 172.8GB.
>
> Hope this helps...
>
> P.S. This little exercise gave me a clue of how many CDROMs worth of
> info I could pump through an E1. (i.e. approximately 12 per hour! :-)
>
> On Sun, 20 Apr 2003 14:45:28 +0200
> "Chris Knipe" <[EMAIL PROTECTED]> wrote:
>
> > Lo all,
> >
> > First, I spent numerous hours browsing the archives and Google - with
> > no luck. Frankly, I don't think Google returns a hit related to this
> > that I have not visited...
> >
> > I want to *graph* the *total* MB/GB used on an interface over a
> > certain period of time (not fixed)...
> >
> > Now, the whole kb/sec * time = value thing is all nice and fine...
> > HOWEVER, when *graphing* that, it doesn't exactly make a lot of sense
> > when your TOTAL traffic goes up and down and up and down, does it? I
> > mean, traffic used is traffic used, and it cannot go down (except if
> > the counters are reset)... And let's face it: an interface runs at
> > 500kb/sec for 1 hour, and then goes into an idle state for 4 hours...
> > Graphing that with kb/sec * time, the graph will go high, and then
> > drop to zero (just about)...
> >
> > I want a way to graph the traffic used per time period; it should
> > look very similar to the old mrtg "uptime" graphs, for example - the
> > same type of graph. I, just like a lot of others (from what I saw in
> > the archives), am unable to do it, and unable to get an answer from
> > this mailing list on how to do it... I know it is possible, however,
> > seeing that I saw a lot of other people *graphing* the totals... So,
> > just how is this done?
> >
> > If this has been answered before, please just point me to the post
> > in the archives... After spending numerous hours searching and
> > browsing, I'm rather sure that I could not have missed it.
> >
> > --
> > me
>
> --
> David Lovy  CCIE# 2071
> ShoreGroup, Inc.
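P.S. To put David's rate-times-time arithmetic into practice outside of a graph, something like the following should total up the last hour from fetch output. Again untested, and it assumes a 300-second step and a DS named "in"; the awk filter just skips the header and any "nan" rows:

  rrdtool fetch port.rrd AVERAGE --start -3600 | \
    awk '$2 ~ /^[0-9]/ { sum += $2 } END { print sum * 300, "bytes" }'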
--
Unsubscribe mailto:[EMAIL PROTECTED]
Help        mailto:[EMAIL PROTECTED]
Archive     http://www.ee.ethz.ch/~slist/rrd-users
WebAdmin    http://www.ee.ethz.ch/~slist/lsg2.cgi