Peter,

Thank you for your reply.

Peter Kristolaitis wrote:
> You may want to look into using the master/slave functionality to
> distribute the load rather than making more probes on a single host.
Yes, we have looked at doing a master/slave setup, so we may try that in the future.
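For anyone following along, a master/slave setup like Peter suggests needs only a few pieces of configuration. This is a minimal sketch based on the Smokeping master/slave documentation; the slave name, secret file paths, and URLs below are placeholders, not values from our environment:

```
# On the master, declare the slave and point at the shared-secrets file
# (smokeping_secrets holds one "slavename:secret" line per slave):
*** Slaves ***
secrets = /etc/smokeping/smokeping_secrets

+branch-slave-1
display_name = branch-slave-1
color = 0000ff

# In *** Targets ***, assign branch targets to the slave:
# slaves = branch-slave-1

# On the slave host, run smokeping in slave mode:
# smokeping --master-url=http://master.example.com/smokeping/smokeping.cgi \
#           --cache-dir=/var/lib/smokeping/cache \
#           --shared-secret=/etc/smokeping/slave_secret.txt
```

The slave fetches its target list from the master and posts results back over HTTP, so only the master needs the RRD files and the web frontend.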


> Also, disks aren't faster just by virtue of being on a SAN.  Similarly
> configured volumes (same number of disks, same RAID level, same amount
> of controller cache) are almost invariably faster as DAS, since you
> don't have the added latency of the FC (or, worse, iSCSI) switching
> network (plus a SAN typically has a shared bus and cache).
I didn't mean to imply that a SAN is automatically faster than internal drives; ours would be built on the array and RAID level we normally use. We would still need to convince our SAN admins to give us some disk, however.


> Is there a real, technical reason to use exactly 30 pings?  For example,
> do you need that level of granularity for % loss?  Could you get by with
> 15 or 20?  You'd still get host up/down notifications and improved
> performance, at the cost of less granularity for loss % -- which often
> (though not always) isn't an important metric when monitoring branches
> (2 packets lost out of 30 = 6.7%, 2 packets out of 20 = 10% -- does that
> difference actually matter in your case?  Could it be compensated for by
> slightly increasing the loss % thresholds for alerts?)
We want as many pings as possible in the shortest interval we can manage: a branch connection could be down for a few minutes, and over a 5-minute interval we would only see some packet loss instead of seeing the node actually down. The last time I reconfigured, the network group asked for more granularity (maybe they just wanted to see more smoke on the graphs), but I think what they really wanted all along was a shorter interval.
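If a shorter interval is the real goal, the relevant knobs are `step` and `pings` in the `*** Database ***` section (Smokeping 2.x also allows `pings` to be overridden per probe or per target). A sketch with values chosen only for illustration; note that changing either setting alters the RRD layout, so existing .rrd files have to be regenerated:

```
*** Database ***
# Poll every 60 seconds instead of the default 300,
# with 20 pings per round instead of 30:
step  = 60
pings = 20
```

With 20 pings per 60-second round, a branch outage of a few minutes shows up as one or more rounds of 100% loss (node down) rather than partial loss smeared across a single 5-minute round.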


_______________________________________________
smokeping-users mailing list
[email protected]
https://lists.oetiker.ch/cgi-bin/listinfo/smokeping-users
