Maybe it's obvious, but this method ought to be fairly accurate IF the interval from one ping to the next is very consistent. I don't know what specifically causes the cases where the command can't satisfy the request for one ping per 0.001 s. Obviously, if that cause introduces variance from one ping to the next, then the accuracy suffers.


Even if you don't get 1 ping per ms, you might be able to estimate as:
(time per ping = total time / pings transmitted)
and
(failover time = time per ping * (pings transmitted - pings received))
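As a sketch, the estimate above could be computed from ping's summary statistics like this (the function name and the sample numbers are hypothetical; it assumes the inter-ping interval was roughly uniform over the run):

```python
def estimate_failover(total_time_s, transmitted, received):
    """Estimate failover duration from ping summary stats.

    Assumes pings were sent at a roughly constant rate, so the
    average inter-ping interval stands in for the requested one.
    """
    time_per_ping = total_time_s / transmitted          # average interval per ping
    lost = transmitted - received                       # pings dropped during failover
    return time_per_ping * lost                         # estimated outage duration

# Hypothetical example: 10 s run, 10000 sent, 9800 received
print(estimate_failover(10.0, 10000, 9800))  # prints 0.2 (i.e. ~200 ms outage)
```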
