To save time on additional stack traversals; that's where things get slow. 
Plus, the batching algorithm could potentially be tuned for different 
workloads and exposed as a setsockopt for additional flexibility (though no 
one has done that yet).
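The batching idea above can be sketched in a few lines. This is a hypothetical illustration only: the `Batcher` class and its `max_bytes`/`max_delay` knobs are assumptions for the example, not actual ZeroMQ options.

```python
import time

class Batcher:
    """Hypothetical user-level batcher (not ZeroMQ's internal algorithm):
    queued messages are flushed as one write when the buffer exceeds
    max_bytes, or when max_delay seconds have passed since the first
    queued message. These are the kinds of knobs a setsockopt could expose."""

    def __init__(self, send, max_bytes=8192, max_delay=0.1):
        self.send = send          # callable taking one bytes object
        self.max_bytes = max_bytes
        self.max_delay = max_delay
        self.buf = []
        self.size = 0
        self.first = None         # time the oldest queued message arrived

    def queue(self, msg: bytes):
        if self.first is None:
            self.first = time.monotonic()
        self.buf.append(msg)
        self.size += len(msg)
        # Flush on either threshold: total size, or age of oldest message.
        if (self.size >= self.max_bytes
                or time.monotonic() - self.first >= self.max_delay):
            self.flush()

    def flush(self):
        if self.buf:
            self.send(b"".join(self.buf))
            self.buf, self.size, self.first = [], 0, None
```

A larger `max_bytes` favors throughput; a smaller `max_delay` bounds the latency cost of batching, which is exactly the trade-off a workload-specific tuning option would control.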


On Jan 16, 2014, at 8:53 AM, Lindley French <[email protected]> wrote:

> Maybe I'm missing something, but what purpose is there in disabling Nagle's 
> algorithm, only to then re-implement the same concept one layer higher?
> 
> 
> On Thu, Jan 16, 2014 at 9:15 AM, Charles Remes <[email protected]> wrote:
> Nagle’s algo is already disabled in the codebase (you can confirm that with a 
> quick grep). I think what Bruno is referring to is that zeromq batches small 
> messages into larger ones before sending. This improves throughput at the 
> cost of latency, as you'd expect.
> 
> Check out the “performance” section of the FAQ for an explanation:  
> http://zeromq.org/area:faq
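For reference, here is what "Nagle disabled" means at the socket level, shown with a plain stdlib socket rather than ZeroMQ code: with TCP_NODELAY set, the kernel sends small writes immediately instead of coalescing them.

```python
import socket

# Plain-socket illustration: setting TCP_NODELAY disables Nagle's
# algorithm, so small writes go out immediately instead of being
# coalesced by the kernel. ZeroMQ does the equivalent on its TCP sockets.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # non-zero when set
s.close()
```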
> 
> 
> On Jan 16, 2014, at 7:04 AM, Lindley French <[email protected]> wrote:
> 
>> Ah, that would explain it, yes. It would be great to have a way of disabling 
>> Nagle's algorithm (TCP_NODELAY sockopt).
>> 
>> 
>> On Thu, Jan 16, 2014 at 4:24 AM, Bruno D. Rodrigues 
>> <[email protected]> wrote:
>> Without looking at the code, I assume ØMQ does not try to send each 
>> individual message as a separate TCP PDU but instead, as the name implies, 
>> queues messages so it can batch them together for better performance.
>> 
>> This means data hits the wire either when some internal buffer fills or 
>> after a timeout, which looks to be around 100 ms.
>> 
>> On the other hand, I can’t see any setsockopt to configure this timeout 
>> value.
>> 
>> Any feedback from someone else before I have time to look at the code?
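The "buffer fills or timeout fires" behavior described above can be sketched with a background timer thread. This is an assumption inferred from the symptoms, not code read from ZeroMQ's source; the point is that worst-case added latency is bounded by the flush interval.

```python
import threading, time

class TimedFlusher:
    """Sketch of the suspected behavior (assumed, not ZeroMQ's actual
    code): queued data is sent when the buffer fills OR when a periodic
    timer fires, so worst-case latency is roughly the flush interval."""

    def __init__(self, send, interval=0.1, max_bytes=8192):
        self.send, self.interval, self.max_bytes = send, interval, max_bytes
        self.buf, self.size = [], 0
        self.lock = threading.Lock()
        self.stop = threading.Event()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        # wait() returns False on timeout (flush), True when stop is set.
        while not self.stop.wait(self.interval):
            self.flush()

    def queue(self, msg: bytes):
        with self.lock:
            self.buf.append(msg)
            self.size += len(msg)
            full = self.size >= self.max_bytes
        if full:
            self.flush()  # buffer-full: send immediately

    def flush(self):
        with self.lock:
            data, self.buf, self.size = b"".join(self.buf), [], 0
        if data:
            self.send(data)

    def close(self):
        self.stop.set()
        self.thread.join()
        self.flush()  # drain anything still queued
```

With `interval=0.1`, a lone small message can sit for up to ~100 ms before the timer pushes it out, which would match the delay being reported.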
>> 
>> On Jan 15, 2014, at 16:20, Lindley French <[email protected]> wrote:
>> 
>> > I have a test case in which I'm communicating between two threads using 
>> > zmq sockets. The fact that the sockets are in the same process is an 
>> > artifact of the test, not the real use-case, so I have a TCP connection 
>> > between them.
>> >
>> > What I'm observing is that a lot of the time, it takes ~100 milliseconds 
>> > between delivery of a message to the sending socket and arrival of that 
>> > message on the receiving socket. Other times (less frequently) it is a 
>> > matter of microseconds. I imagine this must be due to some kernel or 
>> > thread scheduling weirdness, but I can't rule out that it might be due to 
>> > something in 0MQ.
>> >
>> > If I follow the TCP socket write with one or more UDP writes using 
>> > Boost.Asio, the 100 millisecond delay invariably occurs for the ZMQ TCP 
>> > message but the UDP messages arrive almost instantly (before the TCP 
>> > message).
>> >
>> > My design requires that the TCP message arrive before *most* of the UDP 
>> > messages. It's fine if some come through first (UDP is faster, after all; 
>> > that's why I'm using it), but this big a delay is more than I counted 
>> > on, and it's concerning. I don't know whether it would apply across a real 
>> > network or whether it's an artifact of testing in a single process.
>> >
>> > Any insights?
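For comparison, one way to time send-to-recv latency over loopback with plain sockets looks like the sketch below (a stand-in, not the poster's actual ZeroMQ/Boost.Asio test). If the ~100 ms gap appears even here, the cause is below ZeroMQ; if not, it points at the library's batching or I/O-thread scheduling.

```python
import socket, struct, threading, time

def reader(conn, out, n):
    # Receive n 8-byte timestamps and record one-way latency for each.
    for _ in range(n):
        buf = b""
        while len(buf) < 8:
            chunk = conn.recv(8 - len(buf))
            if not chunk:
                return
            buf += chunk
        sent, = struct.unpack("d", buf)
        out.append(time.monotonic() - sent)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # ephemeral port on loopback
srv.listen(1)

cli = socket.create_connection(srv.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # rule out Nagle
conn, _ = srv.accept()

n, lat = 50, []
rt = threading.Thread(target=reader, args=(conn, lat, n))
rt.start()
for _ in range(n):
    cli.sendall(struct.pack("d", time.monotonic()))
rt.join()
for sock in (cli, conn, srv):
    sock.close()

# On an idle machine, loopback latencies are typically well under 1 ms;
# a consistent ~100 ms spike would point at buffering or scheduling.
print("max latency: %.6f s" % max(lat))
```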
>> > _______________________________________________
>> > zeromq-dev mailing list
>> > [email protected]
>> > http://lists.zeromq.org/mailman/listinfo/zeromq-dev
> 
> 
