On 04/09/14 01:34, Alan Conway wrote:
On Thu, 2014-09-04 at 00:38 +0200, Leon Mlakar wrote:
On 04/09/14 00:25, Alan Conway wrote:
On Wed, 2014-09-03 at 12:16 -0400, Michael Goulish wrote:
OK -- I just had a quick talk with Ted, and this makes sense
to me now:

    count *receives* per second.

I had it turned around and was worried about *sends* per second,
and then got confused by issues of fanout.

If you only count *receives* per second, and assume no discards,
it seems to me that you can indeed make a fair speed comparison
between

     sender --> receiver

     sender --> intermediary --> receiver

and

     sender --> intermediary --> {receiver_1 ... receiver_n}

and even

     sender --> {arbitrary network of intermediaries} --> {receiver_1 ... 
receiver_n}

phew.


So I will do it that way.
That's right for throughput, but don't forget latency. A well-behaved
intermediary should have little effect on throughput but will inevitably
add latency.

Measuring latency between hosts is a pain. You can time-stamp messages
at the origin host, but clock differences can give you bogus numbers if
you compare that to the time on a different host when the messages
arrive. One trick is to have the messages arrive back at the same host
where you time-stamped them (even if they pass through other hosts in
between), but that isn't always what you really want to measure. Maybe
there's something to be done with NTP; I've never dug into that. Have
fun!
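
If it helps, the round-trip version of that trick boils down to something
like the sketch below (python, with a plain TCP echo loop standing in for
the broker/intermediary and an arbitrarily chosen local port). Both
timestamps come from the same host's clock, so skew between machines never
enters into it:

    # Single-clock round-trip timing: the "message" is bounced back to the
    # host that stamped it, so only one clock is ever consulted.
    import socket, threading, time

    PORT = 9999   # arbitrary local port for the echo "intermediary"

    def echo_server():
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)        # bounce it straight back

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)                   # crude wait for the listener to come up

    c = socket.socket()
    c.connect(("127.0.0.1", PORT))
    c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    samples = []
    for _ in range(100):
        t0 = time.monotonic()
        c.sendall(b"x" * 100)         # small payload
        c.recv(4096)                  # wait for the echo
        samples.append((time.monotonic() - t0) * 1000.0)   # milliseconds
    samples.sort()
    print("l-min %.3f  l-max %.3f  l-avg %.3f (ms, round trip)"
          % (samples[0], samples[-1], sum(samples) / len(samples)))

Obviously a real measurement would go through the messaging layer rather
than a bare socket, but the timing structure is the same.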

To get a reasonably good estimate of the time difference between sender
and receiver, one could exchange several timestamped messages, without
an intermediary, in both directions and get both sides to agree on the
difference between their clocks. Do that before the test, and then repeat
the exchange at the end of the test to check for drift. This of course
assumes stable network latencies during these exchanges and is usable
only in test environments. Exchanging several messages instead of just
one should help eliminate sporadic instabilities.
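
Concretely, each exchange gives you four timestamps and the arithmetic is
the NTP-style offset calculation. A rough python sketch (the names are
mine, and it assumes the forward and return paths have roughly symmetric
latency):

    import statistics

    def offset_and_delay(t0, t1, t2, t3):
        # t0: sender's clock when the probe leaves the sender
        # t1: receiver's clock when the probe arrives
        # t2: receiver's clock when the reply leaves the receiver
        # t3: sender's clock when the reply arrives back
        offset = ((t1 - t0) + (t2 - t3)) / 2.0  # receiver clock minus sender clock
        delay  = (t3 - t0) - (t2 - t1)          # network round-trip time
        return offset, delay

    def estimate_offset(samples):
        # samples: list of (t0, t1, t2, t3) tuples from repeated exchanges;
        # the median throws away the sporadic outliers mentioned above.
        return statistics.median(offset_and_delay(*s)[0] for s in samples)

    # Run estimate_offset() before and after the test and compare the two
    # values to get a feel for the drift.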

As I understand it that's pretty much what NTP does.
http://en.wikipedia.org/wiki/Network_Time_Protocol says that NTP "can
achieve better than one millisecond accuracy in local area networks
under ideal conditions." That doesn't sound good enough to measure
sub-millisecond latencies. I doubt that a home-grown attempt at timing
message exchanges will do better than NTP :( NTP may deserve further
investigation, however; Wikipedia probably makes some very broad
assumptions about what your "ideal network conditions" are, and it's
possible that it can be tuned better than that.

I can easily get sub-millisecond round-trip latencies out of Qpid with a
short message burst:
qpidd --auth=no --tcp-nodelay
qpid-cpp-benchmark --connection-options '{tcp-nodelay:true}' -q1 -m100
send-tp recv-tp l-min   l-max   l-avg   total-tp
38816   30370   0.21    1.18    0.70    3943

Sadly, if I tell qpidd to use AMQP 1.0 (and therefore proton), things
degenerate very badly from a latency perspective.
qpid-cpp-benchmark --connection-options '{tcp-nodelay:true,protocol:amqp1.0}' 
-q1 -m100
send-tp recv-tp l-min   l-max   l-avg   total-tp
26086   19552   3.13    6.65    5.28    913
        
However, this may not be proton's fault; the problem could be in qpidd's
AMQP 1.0 adapter layer. I'm glad to see that we're starting to measure
these things for proton and dispatch; that will surely lead to
improvement.

Yes, you are correct, that's basically what NTP does ... and neither will work well in the sub-millisecond range. I didn't realize that this is what you were after.

There is a beast called http://en.wikipedia.org/wiki/Precision_Time_Protocol, though. A year ago we took a brief look at it but concluded that millisecond accuracy was good enough and that it was not worth the effort.

And of course, it is also possible to attach a GPS receiver to both the sending and the receiving host. With decent drivers, this should provide at least microsecond accuracy.

Leon
