Thanks a lot for the changes and the explanation! Will try this workaround for now. When the new queue threading model is available please let me know.
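For anyone following along, a sketch of how the workaround from Rob's mail below might be applied. This assumes the standard Java broker startup script (`bin/qpid-server`) and that JVM options can be passed via the `QPID_OPTS` environment variable, as in the stock broker distribution — adjust for your own deployment; Rob also notes the value can instead be set per queue as a context variable, which limits the performance impact to the queues that need it.

```shell
# Broker-wide default: deliver at most 1 message per time-slice before
# a queue yields the thread, so competing queues interleave fairly.
# (Script name and QPID_OPTS variable assumed from the standard Qpid
# Java broker distribution; requires the QPID-6204 change, r1635768.)
QPID_OPTS="-Dqueue.maxAsynchronousDeliveries=1" ./bin/qpid-server
```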
On Fri, Oct 31, 2014 at 6:38 AM, Rob Godfrey <[email protected]> wrote:

> Hi Helen,
>
> so the fundamental issue here is that currently inside the broker queues
> "push" messages to consumers rather than consumers pulling from queues.
> When a queue is informed that a consumer has room to accept more messages,
> it immediately tries to start pumping messages to that consumer. In this
> case there are two queues trying to pump messages to one consumer, and the
> first one to get notified will pump in up to 80 messages before yielding
> the thread. At this point the second queue might be able to jump in and
> pump 80 messages, or the first queue may actually get the lock again.
>
> Keith and I are planning to rework the underlying queue threading model
> soon so that the consumers/connections pull from the queues, at which
> point real fairness will be easier to implement. In the meantime I've made
> a couple of changes (in QPID-6204, revision
> https://svn.apache.org/r1635768) which help by a) allowing the number of
> messages delivered in one "time-slice" to be configured on a per-queue
> basis (i.e. removing the hardcoded limit of 80) and b) alternating which
> of the queues is notified first when the consumer has available credit.
>
> After this change, by setting the time slice to one delivery
> (-Dqueue.maxAsynchronousDeliveries=1) I saw reasonably fair behaviour
> (runs of no more than 3 messages for the same queue). Note that reducing
> the timeslice probably has some negative performance impact, but you can
> configure this value on a per-queue basis (rather than setting it as a
> system property you can set it on each individual queue as a context
> variable).
>
> When Keith gets done with what he's currently working on we'll try to
> update you on our work on changing the queue threading model.
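The push-model behaviour Rob describes can be illustrated with a toy simulation: each queue pumps up to one time-slice of messages to the shared consumer before yielding the thread. This is an illustrative sketch only, not broker code — the queue/message names and the `deliver` helper are hypothetical, with only the slice sizes (the hardcoded 80 vs. `queue.maxAsynchronousDeliveries=1`) taken from the mail.

```python
# Toy model of queues "pushing" messages to one consumer, each pumping
# up to a fixed time-slice of messages per turn before yielding.
from collections import deque

def deliver(queues, max_async_deliveries):
    """Drain all queues into a single delivery order; each queue pumps up
    to max_async_deliveries messages per turn before yielding the thread."""
    order = []
    pending = deque(q for q in queues if q)
    while pending:
        q = pending.popleft()
        # Pump one time-slice of messages from this queue.
        for _ in range(min(max_async_deliveries, len(q))):
            order.append(q.popleft())
        if q:  # still has messages: re-queue it for another turn
            pending.append(q)
    return order

def make_queues():
    # Two queues of 100 messages each, mirroring Helen's test.
    return [deque(f"M_A_{i}" for i in range(1, 101)),
            deque(f"M_B_{i}" for i in range(1, 101))]

# Old hardcoded slice of 80: queue A pumps 80 messages before B gets a turn.
unfair = deliver(make_queues(), 80)
# With the slice set to 1 delivery, the queues alternate strictly.
fair = deliver(make_queues(), 1)
print(unfair[:4])  # ['M_A_1', 'M_A_2', 'M_A_3', 'M_A_4']
print(fair[:4])    # ['M_A_1', 'M_B_1', 'M_A_2', 'M_B_2']
```

In the real broker the interleaving is less deterministic (the "first queue may actually get the lock again"), which is why Rob saw runs of up to 3 messages rather than perfect alternation even with the slice set to 1.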
>
> Cheers,
> Rob
>
>
> On 31 October 2014 01:19, Helen Kwong <[email protected]> wrote:
>
> > Hi Rob,
> >
> > I got around to doing some testing on the multi-queue consumer feature
> > you added. So far things have mostly looked good, but there is one issue
> > I've run into and would like your help with.
> >
> > When we had single-queue consumers, we had fair allocation behavior
> > across queues, in the sense that if I have 2 queues A and B, each with
> > 100 messages, and one JMS session with a listening consumer on queue A
> > and a listener on queue B, the message processing order will be round
> > robin -- i.e., M_A_1 (representing the first message on queue A), M_B_1,
> > M_A_2, M_B_2, M_A_3, M_B_3, and so on. But now, if I run the same test
> > with the session having a single multi-queue consumer on A and B
> > instead, the order is, roughly, first the 100 messages on A, followed by
> > the messages on B (only a few B messages are processed before all A
> > messages are done). I enqueue the messages in round-robin order. I've
> > also tried this with synchronous receives from both queues instead of
> > asynchronous listening, and I see similar behavior.
> >
> > Is there any way we can mimic the "fair" behavior of single-queue
> > consumers with multi-queue consumers?
> >
> > Thanks,
> > Helen
> >
