Alan Conway wrote:
On the other hand if there are consumers available and capable of
pulling messages as fast as the producer writes them, we'd expect that
broker never holds very many in memory because it can pass them off
right away.
I think that's not happening now: the broker is stacking up a large
number of messages in a "read burst" which pile up in the deque before
it can write them out again. It looks like we need some form of flow
control within the broker.

Prior to the serializer, the publishing thread dispatched messages directly where consumers were available. This had the effect of slowing down publication by tying it to consumption.

The serializer as currently implemented has a dedicated thread per queue that does the dispatch; the publisher simply pushes its messages onto the deque.
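To make the hand-off concrete, here is a minimal sketch of that pattern: the publisher's only work is a push onto the deque, while a dedicated per-queue thread pops messages and dispatches them to the consumer. The class and member names are illustrative, not the actual Qpid broker API.

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <string>
#include <thread>

// Illustrative sketch (not Qpid code): one dispatch thread per queue.
class SerializedQueue {
public:
    using Consumer = std::function<void(const std::string&)>;

    explicit SerializedQueue(Consumer consumer)
        : consumer_(std::move(consumer)),
          dispatcher_([this] { run(); }) {}

    ~SerializedQueue() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        cv_.notify_one();
        dispatcher_.join();  // drains any remaining messages before exit
    }

    // Publisher side: a cheap push; no dispatch work on this thread.
    void publish(std::string msg) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            messages_.push_back(std::move(msg));
        }
        cv_.notify_one();
    }

private:
    // Dispatcher side: pop and deliver, doing the delivery outside the
    // lock so the publisher is never blocked by a slow consumer.
    void run() {
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            cv_.wait(lock, [this] { return stopping_ || !messages_.empty(); });
            if (messages_.empty() && stopping_) return;
            std::string msg = std::move(messages_.front());
            messages_.pop_front();
            lock.unlock();
            consumer_(msg);
            lock.lock();
        }
    }

    Consumer consumer_;
    std::deque<std::string> messages_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool stopping_ = false;
    std::thread dispatcher_;
};
```

Note that nothing here pushes back on the publisher: if the consumer callback is slow, the deque simply grows, which is exactly the "read burst" pile-up described above.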

If the serializer's thread can't push out messages as fast as they come in (when consumers are available), then I think we should focus on making that part faster/more efficient (as previously mentioned, I think having this work done on the consumer IO thread would allow us to remove a lot of locking and could get us some gains here).

We do want flow control in the broker to allow the size of queues to be managed (where the rate of consumption is limited by the application rather than by the broker code), but in this particular case slowing down the publisher because dispatch to the consumers can't keep up seems like a retrograde step.
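The simplest form of the flow control being discussed is a bound on queue depth: once the deque holds some limit of messages, publish blocks until the dispatcher has drained some. Here is a minimal sketch of that idea under that assumption; it is an illustration of the general technique, not Qpid's actual mechanism or API.

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <string>

// Illustrative bounded queue: publish() blocks once `limit` messages
// are pending, so a slow consumer throttles the publisher.
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t limit) : limit_(limit) {}

    void publish(std::string msg) {
        std::unique_lock<std::mutex> lock(mutex_);
        // This wait is the flow control: the publisher stalls here
        // whenever the queue is at its depth limit.
        notFull_.wait(lock, [this] { return messages_.size() < limit_; });
        messages_.push_back(std::move(msg));
        notEmpty_.notify_one();
    }

    std::string consume() {
        std::unique_lock<std::mutex> lock(mutex_);
        notEmpty_.wait(lock, [this] { return !messages_.empty(); });
        std::string msg = std::move(messages_.front());
        messages_.pop_front();
        notFull_.notify_one();  // wake a publisher blocked on the limit
        return msg;
    }

    std::size_t depth() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return messages_.size();
    }

private:
    std::size_t limit_;
    std::deque<std::string> messages_;
    mutable std::mutex mutex_;
    std::condition_variable notFull_, notEmpty_;
};
```

This is also why it can be a retrograde step here: the blocking in publish() ties the publisher's rate back to the dispatcher's, which is the coupling the serializer was introduced to remove.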
