Hi Martin,

Talking about HWM, I would like some clarification on how ZMQ 
(release 3) actually works.
The zmq_setsockopt man page, in the section about ZMQ_SNDHWM, 
says:

********
The high water mark is a hard limit on the maximum number of outstanding 
messages 0MQ shall queue in memory
for any single peer that the specified socket is communicating with.
********

Maybe it is due to my level in Shakespeare's language, but I do not find 
this sentence 100% clear.
If I have one publisher with n subscribers, do I have only one queue 
(pipe) or n queues?
If my subscribers are split across several hosts, do I have one queue 
per host?

The tests I have already done suggest that there is only one queue, but 
I would like confirmation from the expert.

In your proposal, will the rate be common to all subscribers (whatever 
their number and whichever hosts they run on)?

Thanks for your answers.

Emmanuel

On 10/12/2011 12:08, Martin Sustrik wrote:
> Hi all,
>
> For a long time I have felt that the model of dealing with congestion
> in the PUB/SUB pattern is flawed. Issue LIBZMQ-291 reminded me of the
> problem today, so...
>
> It works the following way right now:
>
> The I/O thread reads messages from the pipe and pushes them to the
> network. If the network is not able to accept more data (e.g. TCP
> backpressure is applied), it stops reading messages from the pipe and
> waits until the network is ready to accept more data.
>
> In the application thread, messages are simply pushed to the pipe when
> zmq_send() is called. If the pipe is full (HWM is reached) the message
> is dropped.
>
> The problem with the above approach is that when you send a lot of
> messages in a quick sequence (e.g. sending small messages in a tight
> loop), the messages are stored in the pipe until it is full and
> subsequent messages are simply dropped. The sender is not even notified
> that messages are disappearing.
>
> That does not seem to be desirable behaviour.
>
> My proposal is thus to use rate control to govern the behaviour. For
> example, the user can set ZMQ_RATE to 10Mb/s and is expected to ensure
> that the network is capable of that load.
>
> 1. The I/O thread will extract messages from the pipe at a rate of
> 10Mb/s. The network should be able to handle the load, so pushing these
> messages into the network shouldn't be a problem. If it happens to be a
> problem, the I/O thread will start dropping messages so that draining
> of the pipe continues at 10Mb/s.
>
> 2. The application thread will push messages to the pipe and if the
> pipe is full it will *block*.
>
> Now consider the example of a tight loop sending small messages (see
> LIBZMQ-291 for an example). First, the loop will fill the pipe until
> the HWM is reached. Trying to send the next message will cause
> zmq_send() to block. The I/O thread will meanwhile push data to the
> network at 10Mb/s, reading messages from the pipe as it does so. The
> number of messages in the pipe will thus drop below the HWM and the
> sending loop will be unblocked and allowed to send more messages.
>
> Thoughts? Does anyone see any problem with this approach?
>
> Also, I am extremely busy these days doing full-time consulting, so any
> help would be appreciated. If you know how to implement nice
> leaky-bucket (http://en.wikipedia.org/wiki/Leaky_bucket) data throttling
> in C/C++ and have a few hours to spare, let me know.
>
> Martin
> _______________________________________________
> zeromq-dev mailing list
> [email protected]
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev

