In that case, the input rate was 20,000 msgs/s, for 10,000 msgs, so: Latency = 10,000 / 20,000 = 0.5s = 500ms
That would be approximately 500ms to fully deliver each message to all 10 clients. However, this is a pretty loose application of Little's Law, because a 10,000 msg test would not maintain a queue depth of 10,000 msgs on the broker, and the latency being timed per message runs from the moment the message was timestamped and sent on its way, rather than from the very beginning of the test case. So that 500ms is more like an estimate of how long the entire test would take to run, and the 50ms figure is probably closer to the ball-park per-message latency. It was just a guesstimate. Trying to apply Little's Law to tests that run in batches gets pretty confusing...

On 08/11/2007, Robert Greig <[EMAIL PROTECTED]> wrote:
>
> On 08/11/2007, Rupert Smith <[EMAIL PROTECTED]> wrote:
> > > Waiting events = Throughput * Latency
> >
> > Latency = 10,000 / 200,000 = 1/20 = 50ms.
>
> Is that valid when the 10 clients are handled in parallel? i.e. we
> delivered 20k messages to *each client* in a second of wall time.
>
> RG
>
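For what it's worth, the two estimates in this thread can be sketched as a quick calculation. This is just the same arithmetic written out; the function name and the framing of "waiting events" as queue depth are my own, and the numbers are taken straight from the discussion above:

```python
# Little's Law: waiting_events = throughput * latency
# rearranged:   latency = waiting_events / throughput

def littles_law_latency(waiting_events, throughput_per_sec):
    """Average latency in seconds implied by Little's Law."""
    return waiting_events / throughput_per_sec

# Loose whole-test estimate: 10,000 msgs against the 20,000 msg/s input rate.
whole_test = littles_law_latency(10_000, 20_000)

# Per-message estimate: 10 clients each taking 20,000 msg/s gives an
# aggregate delivery rate of 200,000 msg/s.
per_message = littles_law_latency(10_000, 10 * 20_000)

print(f"whole test:  {whole_test * 1000:.0f} ms")   # 500 ms
print(f"per message: {per_message * 1000:.0f} ms")  # 50 ms
```

Which of the two throughput figures is the "right" lambda is exactly the question Robert raises: whether parallel delivery to 10 clients counts as one 200k msg/s system or ten 20k msg/s systems.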
