The OIDs for those two graphs are:

New Connections: sysStatClientTotConns.0&sysStatServerTotConns.0:public@<<IP Address Here>>:::::2

Established Connections: sysStatClientCurConns.0&sysStatServerCurConns.0:public@<<IP Address Here>>:::::2

Other ones I am graphing are:

Client Traffic: sysStatClientBytesOut.0&sysStatClientBytesIn.0:public@<<IP Address Here>>:::::2
Server Traffic: sysStatServerBytesIn.0&sysStatServerBytesOut.0:public@<<IP Address Here>>:::::2
HTTP Requests: sysStatHttpRequests.0&sysStatHttpRequests.0:public@<<IP Address Here>>:::::2
RAM in Use: ( sysStatMemoryUsed.0&sysStatMemoryUsed.0:public@<<IP Address Here>>:::::2 / sysStatMemoryTotal.0&sysStatMemoryTotal.0:public@<<IP Address Here>>:::::2 ) * 100
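
In case it is useful, here is roughly what a full mrtg.cfg stanza looks like for the first target. This is a sketch, not my actual config - the graph name f5_newconns, the 192.0.2.1 host, and the MaxBytes and label values are placeholders:

# sketch only - graph name, host, MaxBytes, and labels are placeholders
Target[f5_newconns]: sysStatClientTotConns.0&sysStatServerTotConns.0:public@192.0.2.1:::::2
Title[f5_newconns]: F5 New Connections (client & server)
PageTop[f5_newconns]: <h1>F5 New Connections</h1>
MaxBytes[f5_newconns]: 100000
Options[f5_newconns]: growright
YLegend[f5_newconns]: connections

The RAM in Use target returns a percentage rather than a counter, so it wants the gauge option and a MaxBytes of 100:

# sketch only - graph name and host are placeholders
Target[f5_ram]: ( sysStatMemoryUsed.0&sysStatMemoryUsed.0:public@192.0.2.1:::::2 / sysStatMemoryTotal.0&sysStatMemoryTotal.0:public@192.0.2.1:::::2 ) * 100
Options[f5_ram]: gauge,growright,nopercent
MaxBytes[f5_ram]: 100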

I solved the midnight discontinuity issue.  SNMP and MRTG are correct - the 
drop in traffic right at midnight is real.  Nothing in the F5 is responsible 
for it; there are policies on the servers that cause certain traffic to be 
reset at midnight.
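
For anyone who wants to rule out the grapher on their own box, the raw counter can be sampled directly around midnight, independent of MRTG. A rough sketch, assuming Net-SNMP's snmpget and that the F5-BIGIP-SYSTEM-MIB is loaded (substitute your own community string and host):

    F5HOST=192.0.2.1   # placeholder - use your F5's management address
    while true; do
        date
        snmpget -v2c -c public "$F5HOST" sysStatClientTotConns.0
        sleep 300
    done

If the counter's growth rate genuinely slows at 00:00, the drop is in the data, not in MRTG.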

- Matt


On Dec 19, 2012, at 6:50 PM, Steve Shipway <[email protected]> wrote:

> What are the two OIDs you are graphing?  I assume one of them is the global 
> total connections... a suitably sanitised snippet of your cfg file would be 
> helpful here.
>  
> I'm particularly interested in this as I'm in the process of fully monitoring 
> our F5; I have created a plugin for Nagios and MRTG to pull out not only 
> this data, but also other performance and health data on a global, cluster, or 
> per-VIP basis, and am keen to avoid any potential problems (if you want a 
> copy of the beta plugin, it is available on www.nagiosexchange.org).
>  
> A collection of the raw data might help (to see if there really IS a sudden 
> dropoff in connection rate at midnight); also, your own knowledge of how the 
> F5 is being used, and by what, might let you find out about changes in usage 
> patterns at midnight, or any scheduled tasks or resets you have on client 
> machines at that time.  I can't help you with that, of course.
>  
> We run several F5s in multiple clusters and have not observed this sort of 
> behaviour, either singly or as a cluster, except when a cluster fails over to 
> the other F5 member.  For this reason, I suspect the data are valid and 
> caused by some other event on your network or application servers.
>  
> Steve 
>  
