On 04/09/14 00:25, Alan Conway wrote:
On Wed, 2014-09-03 at 12:16 -0400, Michael Goulish wrote:
OK -- I just had a quick talk with Ted, and this makes sense
to me now:

   count *receives* per second.

I had it turned around and was worried about *sends* per second,
and then got confused by issues of fanout.

If you only count *receives* per second, and assume no discards,
it seems to me that you can indeed make a fair speed comparison
between

    sender --> receiver

    sender --> intermediary --> receiver

and

    sender --> intermediary --> {receiver_1 ... receiver_n}

and even

    sender --> {arbitrary network of intermediaries} --> {receiver_1 ... receiver_n}

phew.


So I will do it that way.
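A minimal sketch of that receive-side measurement: count only *receives* over a fixed window, regardless of how many senders or intermediaries sit upstream. All names here are illustrative; `receive` stands in for whatever blocking receive call the client API actually provides.

```python
import time

def measure_receive_rate(receive, duration_s=10.0):
    """Count messages delivered by receive() over duration_s seconds.

    receive() is assumed to block briefly and return a message,
    or None on timeout; discards are assumed not to happen, per
    the discussion above.
    """
    count = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        if receive() is not None:
            count += 1
    return count / duration_s  # receives per second
```

Because only the receive side is counted, the same number is comparable across the direct, single-intermediary, and fanout topologies above.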
That's right for throughput, but don't forget latency. A well behaved
intermediary should have little effect on throughput but will inevitably
add latency.

Measuring latency between hosts is a pain. You can time-stamp messages
at the origin host but clock differences can give you bogus numbers if
you compare that to the time on a different host when the messages
arrive. One trick is to have the messages arrive back at the same host
where you time-stamped them (even if they pass thru other hosts in
between) but that isn't always what you really want to measure. Maybe
there's something to be done with NTP; I've never dug into that. Have
fun!
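The "arrive back at the same host" trick can be sketched like this: the origin stamps each message with its own clock, the message loops through the intermediaries back to the origin, and the round trip is measured against that same clock, so clock skew between hosts cancels out. `send` and `receive` are placeholders for the real client API, not any particular library.

```python
import time

def round_trip_latency(send, receive, n=100):
    """Mean round-trip latency in seconds over n messages.

    Each message carries a timestamp from the local monotonic
    clock; the message is assumed to be routed back to this host,
    where the same clock is read again on arrival.
    """
    total = 0.0
    for _ in range(n):
        send(time.monotonic())   # stamp with the local clock
        sent_at = receive()      # the same stamp comes back
        total += time.monotonic() - sent_at
    return total / n
```

Note this gives you the round trip, not the one-way latency; halving it assumes the two directions are symmetric, which the intermediaries may not guarantee.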

To get a reasonably good estimate of the clock difference between sender and receiver, one could exchange several timestamped messages, without an intermediary, in both directions, and have both sides agree on the offset between their clocks. Do that before the test, then repeat the exchange at the end of the test to check for drift. This of course assumes stable network latencies during the exchanges, so it is usable only in test environments. Exchanging several messages instead of just one should help eliminate sporadic instabilities.
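One concrete way to do that exchange is the classic four-timestamp scheme (the same idea NTP uses): t0 = local send, t1 = remote receive, t2 = remote send, t3 = local receive, all in seconds on the respective host's clock. This is a hedged sketch of that calculation, assuming symmetric latency during the exchange; averaging several samples, as suggested above, damps sporadic jitter.

```python
def clock_offset(samples):
    """Estimate the remote clock's offset from ours, in seconds.

    samples is a list of (t0, t1, t2, t3) tuples from timestamped
    exchanges; t0/t3 are read on the local clock, t1/t2 on the
    remote clock. Assumes the network delay is the same in both
    directions during the exchange.
    """
    offsets = [((t1 - t0) + (t2 - t3)) / 2.0
               for t0, t1, t2, t3 in samples]
    return sum(offsets) / len(offsets)
```

Running this before and after the test and comparing the two estimates gives you the drift check Leon describes.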

Leon
