On Dec 20, 2011, at 8:12 AM, Massimiliano della Rovere wrote:

> On Mon, Dec 19, 2011 at 18:42, Chuck Remes <[email protected]> wrote:
>> This is a FAQ. Please read it here:
>> 
>> http://www.zeromq.org/area:faq
>> 
>> inproc works "exactly" because there are no kernel buffers in use.
>> 
> 
> Nothing changed: tcp:// is still "not working exactly", whilst "inproc://" is.

You are seeing the normal behavior. Let me paste the FAQ entry here in case you 
couldn't find it. The important part is about TCP backpressure.

> The I/O thread reads messages from the pipe and pushes them to the network. 
> If the network cannot accept more data (e.g. TCP backpressure is applied), 
> it stops reading messages from the pipe and waits until the network is ready 
> to accept more data.
> 
> In the application thread, messages are simply pushed to the pipe when 
> zmq_send() is called. If the pipe is full (HWM is reached) the message is 
> dropped.
> 
> The problem with the above approach is that when you send a lot of messages 
> in a quick sequence (e.g. sending small messages in a tight loop) the 
> messages are stored in the pipe until it is full, and subsequent messages 
> are simply dropped. The sender is not even notified that messages are 
> disappearing.
> 
> The main core developer is hopeful that some community members will volunteer 
> to assist in replacing this mechanism with a rate flow control mechanism.
> 
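You can see the backpressure the FAQ describes without any 0MQ code at all. Here is a small self-contained sketch (plain Python sockets, my own illustration, not from the FAQ): a sender writes into a TCP connection whose peer never reads, and a non-blocking send eventually fails with EAGAIN once the kernel send and receive buffers are full -- the same condition that makes the I/O thread stop draining its pipe.

```python
import socket

# Loopback TCP pair; the accepted side never reads, so backpressure builds.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
snd = socket.create_connection(srv.getsockname())
rcv, _ = srv.accept()

snd.setblocking(False)
sent = 0
chunk = b"x" * 65536
try:
    while True:
        sent += snd.send(chunk)       # fills sender's SNDBUF, then peer's RCVBUF
except BlockingIOError:               # EAGAIN: kernel buffers on both sides are full
    pass

print(sent)                           # some positive number of bytes got buffered
snd.close(); rcv.close(); srv.close()
```

How many bytes fit before the error depends on the kernel's buffer sizes, which is exactly why a userspace HWM cannot account for them precisely.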

Setting SNDBUF and RCVBUF to 1 byte will not be honored by any kernel that I 
know of, so your test is invalid. Your test code will be able to keep sending 
messages until the kernel buffers *for both send and receive* are filled and 
the message queue reaches SNDHWM messages. Only then will it block.
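You can verify the 1-byte buffer point directly: ask the kernel for 1-byte buffers, then read back what it actually granted. A quick sketch (plain sockets, not 0MQ; on Linux, for instance, the kernel doubles the requested value and clamps it to a built-in minimum of a few kilobytes):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1)   # request a 1-byte send buffer
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1)   # request a 1-byte receive buffer

# Read back what the kernel actually allocated (values vary by OS).
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(sndbuf, rcvbuf)                                  # far larger than 1 byte
s.close()
```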

If you increase the number of messages you are using in your test, you *will* 
see it block. There is *no way* for HWM to enforce the limit exactly when using 
TCP because of the involvement of kernel buffers. The inproc transport does not 
use kernel buffers (it swaps pointers in memory) so it is able to enforce HWM 
exactly; there is never any message "in transit" with inproc whereas with TCP 
and IPC the message goes to a send buffer and a receive buffer first.
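As a rough analogy for why inproc can be exact (my illustration, not how libzmq is actually implemented): with a bounded in-process queue there is no intermediate buffer between sender and receiver, so the producer hits the limit at exactly the configured capacity.

```python
import queue

hwm = 4
pipe = queue.Queue(maxsize=hwm)     # stand-in for an inproc pipe with HWM=4

accepted = 0
try:
    while True:
        pipe.put_nowait(b"msg")     # raises queue.Full at exactly maxsize items
        accepted += 1
except queue.Full:
    pass

print(accepted)                     # exactly 4: the limit is enforced precisely
```

With TCP or IPC there is no equivalent of this exactness, because messages the kernel has already buffered are invisible to the userspace count.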

If you do not understand this, please explain which parts of the answer are 
confusing.

cr


_______________________________________________
zeromq-dev mailing list
[email protected]
http://lists.zeromq.org/mailman/listinfo/zeromq-dev
