Awesome material,
thanks a lot, I'll read carefully
Luca

On 17/11/11 11:47 PM, Martin Sustrik wrote:
> Hi Luca,
>
>> I was wondering what should be considered a reasonable way to estimate a
>> realistic maximum number of messages per second for a server receiving
>> messages on a tcp zmq_socket.
>>
>> I have a relatively simple server that opens a tcp port (ZMQ_ROUTER) and
>> logic-wise is pretty much a copy of the Multithreaded Server example in
>> the guide. I am unsure how to estimate the maximum number of requests per
>> second I should expect with this configuration. I'm guessing the number
>> relates somehow to the kind of network connection and the size of the
>> messages.
>
> If your server does the same thing as the "Multithreaded Server" example, 
> the clients are sending requests and receiving replies in a lock-step 
> manner (one request in flight at a time).
>
> That makes throughput directly dependent on the request execution time 
> and network latency.
>
> For example, if processing a request takes 180 usec and one-way network 
> latency is 10 usec, you can do at most 5000 requests per second 
> (1000000 / (10 + 180 + 10)).
>
> Now say that an additional 2 usec are spent in 0MQ code. The throughput 
> will change slightly, but the difference would probably be lost in the 
> variation introduced by the network and the request processing itself.
>
> In short, "Multithreaded Server" is not a good scenario for assessing 
> 0MQ performance.
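
(A minimal sketch of how that round trip could be measured from a client,
assuming the libzmq >= 3.2 C API; the endpoint tcp://server:5555 and the
256-byte request size are made-up values for illustration, not details from
the setup discussed above.)

    /* Time N lock-step request/reply round trips on a REQ socket.
       In lock-step mode throughput is roughly 1 / round-trip time. */
    #include <zmq.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        void *req = zmq_socket (ctx, ZMQ_REQ);
        zmq_connect (req, "tcp://server:5555");    /* assumed endpoint */

        char buf [256];                            /* assumed request size */
        memset (buf, 0, sizeof buf);
        const int rounds = 10000;

        struct timeval start, end;
        gettimeofday (&start, NULL);
        for (int i = 0; i < rounds; i++) {
            zmq_send (req, buf, sizeof buf, 0);    /* request */
            zmq_recv (req, buf, sizeof buf, 0);    /* wait for the reply */
        }
        gettimeofday (&end, NULL);

        double usecs = (end.tv_sec - start.tv_sec) * 1e6
                     + (end.tv_usec - start.tv_usec);
        printf ("round trip: %.1f usec, ~%.0f req/s per client\n",
                usecs / rounds, 1e6 / (usecs / rounds));

        zmq_close (req);
        zmq_ctx_destroy (ctx);
        return 0;
    }

With the 10 + 180 + 10 usec figures above, a loop like this would report
roughly 200 usec per round trip, i.e. the 5000 requests per second mentioned.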
>
>> The network connection is either one or two 1Gb/s or 10Gb/s wires
>> arriving at the machine (depending on the specific hardware in
>> question), yet I'm unsure how the rest of the network topology will
>> affect performance.
>
> The only way to find out is to measure it.
>
>> The messages are of a few kinds, but they fall into three categories:
>> small (few hundred bytes, call it 256, say about 0.1% of the total),
>> tiny (under 32 bytes, say less than 0.5% of the total) and big (around
>> 32MB each, all the rest, about 99% of the total). Assuming that I can
>> load each wire to about half capacity (this seems to be what our systems
>> people indicate as a reasonable thing to design towards), I would get
>> either 50 MB/s or 500 MB/s on one link in each of the two cases, which
>> would in turn mean I'd get up to 1.5 or 15 big messages per second (or
>> twice as many on a dual link? or does that need two ports?).
>
> Yes. You'll get 1.5 - 15 msgs/sec, but only if you are pushing 
> the messages one after another without waiting for a reply. If you 
> switch to lock-step, you have to take sending the reply into account, 
> etc. The throughput will of course be lower.
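
(A sketch of what "pushing the messages one after another" might look like,
under the same libzmq >= 3.2 assumption; the DEALER socket, the endpoint and
the window of 4 in-flight requests are illustrative choices, and a DEALER
client talking to the REQ/REP-style server from the guide has to add the
empty delimiter frame itself.)

    /* Pipeline requests on a DEALER socket instead of strict REQ
       lock-step, so several 32 MB messages can be on the wire at once. */
    #include <zmq.h>
    #include <stdlib.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        void *dealer = zmq_socket (ctx, ZMQ_DEALER);
        zmq_connect (dealer, "tcp://server:5555");  /* assumed endpoint */

        const size_t msg_size = 32 * 1024 * 1024;   /* a "big" 32 MB message */
        char *body = calloc (1, msg_size);
        const int window = 4;                       /* assumed requests in flight */

        for (int i = 0; i < window; i++) {
            zmq_send (dealer, "", 0, ZMQ_SNDMORE);  /* REQ-style empty delimiter */
            zmq_send (dealer, body, msg_size, 0);
        }
        /* From here on: receive one reply, send the next request, keeping
           'window' messages in flight at all times. The rest of the loop
           is omitted in this sketch. */

        free (body);
        zmq_close (dealer);
        zmq_ctx_destroy (ctx);
        return 0;
    }

Keeping that window full makes the link bandwidth (the 50-500 MB/s above) the
limiting factor rather than the round-trip time.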
>
>> By the same reasoning, though, it seems I could get up to 200k small
>> messages and up to 2M tiny messages per second, which I suspect is
>> rather hard to achieve in practice, based on some measurements I found
>> online of Apache being unable to accept more than 5k connections per
>> second and Cassandra being unable to give more than 2k answers to
>> direct queries per second.
>
> There's no Apache or Cassandra used inside 0MQ. 2 million small 
> messages per second seems to be a reasonable expectation. See, for 
> example, here:
>
> http://www.zeromq.org/results:ib-tests-v206
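
(A stripped-down sketch of the receiving side of a small-message throughput
test, again assuming the libzmq >= 3.2 C API; the PUSH/PULL pair, the
256-byte messages and the port are illustrative, and the matching sender, a
plain loop of zmq_send calls on a PUSH socket, is omitted.)

    /* Count small messages arriving on a PULL socket and report msgs/sec. */
    #include <zmq.h>
    #include <stdio.h>
    #include <sys/time.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        void *pull = zmq_socket (ctx, ZMQ_PULL);
        zmq_bind (pull, "tcp://*:5556");           /* assumed port */

        char buf [256];                            /* assumed message size */
        const int count = 1000000;

        zmq_recv (pull, buf, sizeof buf, 0);       /* first message: sender is up */
        struct timeval start, end;
        gettimeofday (&start, NULL);
        for (int i = 0; i < count; i++)
            zmq_recv (pull, buf, sizeof buf, 0);
        gettimeofday (&end, NULL);

        double secs = (end.tv_sec - start.tv_sec)
                    + (end.tv_usec - start.tv_usec) / 1e6;
        printf ("%.0f msgs/sec\n", count / secs);

        zmq_close (pull);
        zmq_ctx_destroy (ctx);
        return 0;
    }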
>
>> Which brings me back to my original question:
>> are there numbers available about the various overheads, with which one
>> can try to estimate how big the messages should be to optimize the
>> overhead vs. transmission time tradeoff? Are you guys able to share some
>> real-world numbers of this sort, or some indication I can use to build
>> some intuition about what to expect?
>
> Benchmarking can be pretty tricky. You have to understand very well 
> what exactly you are benchmarking and run the tests in a production-like 
> environment to get any meaningful figures.
>
> I've written some guidelines in the past, which can be used as a starting point:
>
> http://www.zeromq.org/whitepapers:measuring-performance
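
(For what it's worth, the libzmq source tree also ships small benchmark
programs under perf/ -- local_lat/remote_lat for round-trip latency and
local_thr/remote_thr for throughput -- which, if I remember correctly, are
what that whitepaper is built around; they make a reasonable starting point
before writing custom tests.)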
>
> Martin
>
