Hi Rob, not sure if you saw my email earlier; would appreciate your help on
this. Thank you!

On Mon, Jan 5, 2015 at 5:12 PM, Helen Kwong <[email protected]> wrote:

> Hi Rob,
>
> I finally got back to testing multiple queues on a consumer again, using
> the changes you added to help with fairness. My broker JVM is running with
> -Dqueue.maxAsynchronousDeliveries=1 for the number of messages delivered
> per time slice by a queue. It has made things more fair than before, though
> still significantly less fair than with single-queue consumers. Here are
> two fairness tests that we ran:
>
> A) In the test I described before, where a single session listens to 2
> queues starting with 100 messages each, it is now indeed more fair.
> Sometimes there are 3 messages in a row from 1 queue, but the order is
> mostly alternating. When the first queue is drained, on average the other /
> slower queue still has 6-7 messages left. However, this effect can
> accumulate and the number at the end can vary a lot -- I've seen up to 26,
> which seems quite high compared to 100. We also ran this starting with
> 1000 messages per queue: on average the slower queue had about 22 messages
> left at the end, and I've seen up to 65 left.
>
> B) We also ran tests with multiple connections / consumers listening to
> the same set of queues. More specifically, we have 10 connections, each
> with 1 consumer listening to 100 queues, each queue starting with 500
> messages. We look at the order in which messages are processed, and track
> the largest difference in remaining message counts between any 2 queues at
> any given point. Before, in the analogous test where each connection had
> 100 single-queue consumers for the 100 queues, the largest difference at
> any point was around 20. Now with multi-queue consumers, it's much higher,
> around 140-170.
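
[For reference, the fairness metric described above can be sketched as a
small script. This is a hypothetical illustration of the bookkeeping, not
our actual test harness; the function name and inputs are made up for the
example.]

```python
def max_remaining_gap(processing_order, initial_counts):
    """Replay a processing order and return the largest difference in
    remaining-message counts between any two queues at any point.

    processing_order: iterable of queue names, one per processed message
    initial_counts: dict mapping queue name -> starting message count
    """
    remaining = dict(initial_counts)
    worst = max(remaining.values()) - min(remaining.values())
    for queue in processing_order:
        remaining[queue] -= 1
        gap = max(remaining.values()) - min(remaining.values())
        worst = max(worst, gap)
    return worst

# A perfectly alternating order between two equal queues never opens
# a gap larger than 1.
order = ["A", "B"] * 100
print(max_remaining_gap(order, {"A": 100, "B": 100}))  # 1

# Draining one queue first opens a gap equal to the full queue depth.
order = ["A"] * 100 + ["B"] * 100
print(max_remaining_gap(order, {"A": 100, "B": 100}))  # 100
```
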
>
> Our questions / concerns:
>
> 1. One thing I'm not sure about is whether it's possible that 1 queue can
> be favored over another queue. With this 1 consumer / 2 queues test, I use
> the same 2 queues for every test run, one called BAREBONEQ0 and the other
> BAREBONEQ1. What I saw initially when running the test was that BAREBONEQ0
> was almost always the slower queue, over maybe 20 or so test runs. But in
> later test runs I no longer see a pattern in which queue finishes last. It
> could be just by chance that I saw this in the beginning, but
> just to be sure I'm not missing anything, can one queue end up being
> favored over another at all?
>
> 2. With the second test, I'd often see a chunk of messages from the same
> queue given to different consumers around the same time. E.g., we might
> see 10 messages from one queue processed consecutively, one by each of the
> 10 consumers. 5 to 7 messages in a row from the same queue is common. Why
> does
> this happen even though I have maxAsynchronousDeliveries configured to 1?
> You mentioned a queue pushes messages to a consumer when it's notified that
> the consumer has room for messages -- what does this mean in terms of
> multiple consumers/connections, and does this mean whenever a queue gets
> notified, it may push 1 message to each consumer?
>
> 3. Overall does the behavior from these tests seem expected to you? Is
> this as fair as we can be with multi-queue consumers, before the new
> threading model allows consumers to pull from queues?
>
> Thanks a lot!
> Helen
>
> On Fri, Oct 31, 2014 at 11:18 AM, Helen Kwong <[email protected]>
> wrote:
>
>> Thanks a lot for the changes and the explanation! Will try this
>> workaround for now. When the new queue threading model is available please
>> let me know.
>>
>> On Fri, Oct 31, 2014 at 6:38 AM, Rob Godfrey <[email protected]>
>> wrote:
>>
>>> Hi Helen,
>>>
>>> so the fundamental issue here is that currently inside the broker queues
>>> "push" messages to consumers rather than consumers pulling from queues.
>>> When a queue is informed that a consumer has room to accept more
>>> messages, it immediately tries to pump messages to that consumer.  In
>>> this case there are two queues trying to pump messages to one consumer,
>>> and the first one to get notified will pump in up to 80 messages before
>>> yielding the thread.  At this point the second queue might be able to
>>> jump in and pump 80 messages, or the first queue may actually get the
>>> lock again.
>>>
>>> Keith and I are planning on reworking the underlying queue threading
>>> model
>>> soon to change this around so that the consumers/connections pull from
>>> the
>>> queues, at which point real fairness will be easier to implement.  In the
>>> meantime I've made a couple of changes (in QPID-6204, revision
>>> https://svn.apache.org/r1635768) which help by a) allowing the number of
>>> messages delivered in one "time-slice" to be configured on a per queue
>>> basis (i.e. removing the hardcoding of 80) and b) alternating which of
>>> the
>>> queues is notified first when the consumer has available credit.
>>>
>>> After this change, by setting the time slice to one delivery
>>> (-Dqueue.maxAsynchronousDeliveries=1) I saw reasonably fair behaviour
>>> (runs of no more than 3 messages for the same queue).  Note that
>>> reducing the timeslice probably has some negative performance impact,
>>> but you can configure this value on a per-queue basis (rather than
>>> setting it as a system property, you can set it on each individual
>>> queue as a context variable).
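
[The push-and-yield behaviour Rob describes can be sketched as a toy
simulation. This is only an illustration of the scheduling idea, not the
broker's actual code: it assumes strictly alternating notification and
ignores lock contention, so the real broker is less deterministic.]

```python
def simulate_push(queue_depths, max_deliveries):
    """Toy model of queues pushing to a single consumer.

    Each time a queue gets the thread it pumps up to max_deliveries
    messages, then yields; notification alternates between queues.
    Returns the delivery order as a list of queue indices.
    """
    remaining = list(queue_depths)
    order = []
    turn = 0
    while any(remaining):
        if remaining[turn]:
            batch = min(max_deliveries, remaining[turn])
            order.extend([turn] * batch)
            remaining[turn] -= batch
        turn = (turn + 1) % len(remaining)
    return order

# With the hardcoded time slice of 80, one queue delivers a long run
# before the other queue gets a chance.
print(simulate_push([100, 100], 80)[:5])  # [0, 0, 0, 0, 0]

# With a time slice of 1, deliveries alternate strictly in this toy model.
print(simulate_push([4, 4], 1))           # [0, 1, 0, 1, 0, 1, 0, 1]
```
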
>>>
>>> When Keith gets done with what he's currently working on we'll try to
>>> update you on our work on changing the queue threading model around.
>>>
>>> Cheers,
>>> Rob
>>>
>>>
>>> On 31 October 2014 01:19, Helen Kwong <[email protected]> wrote:
>>>
>>> > Hi Rob,
>>> >
>>> > I got around to doing some testing on the multi-queue consumer feature
>>> you
>>> > added. So far things have looked good mostly, but there is one issue
>>> I've
>>> > run into and would like your help on.
>>> >
>>> > When we had single-queue consumers, we had fair allocation behavior
>>> across
>>> > queues, in the sense that if I have 2 queues A and B, each with 100
>>> > messages, and one JMS session having a listening consumer on queue A
>>> and a
>>> > listener on queue B, the message processing order will be round robin
>>> --
>>> > i.e., M_A_1 (representing the first message on queue A), M_B_1, M_A_2,
>>> > M_B_2, M_A_3, M_B_3, and so on. But now, if I run the same test with
>>> the
>>> > session having a single multi-queue consumer on A and B instead, the
>>> order
>>> > is, roughly, first the 100 messages on A, followed by the messages on B
>>> > (only a few B messages are processed before all A messages are done). I
>>> > enqueue the messages in the round robin order. I've also tried this
>>> with
>>> > synchronous receives from both queues instead of asynchronous
>>> listening,
>>> > and I see similar behavior.
>>> >
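
[The interleaving check behind this comparison can be sketched as follows.
The helper below is a hypothetical illustration, not the actual test code:
the longest run of consecutive messages from the same queue distinguishes
round-robin delivery from one queue being drained first.]

```python
def longest_run(processing_order):
    """Return the longest run of consecutive messages from one queue."""
    longest = current = 0
    previous = None
    for queue in processing_order:
        current = current + 1 if queue == previous else 1
        longest = max(longest, current)
        previous = queue
    return longest

# Round-robin delivery (M_A_1, M_B_1, M_A_2, ...): runs of length 1.
print(longest_run(["A", "B"] * 100))           # 1

# Queue A drained before queue B: a run of 100.
print(longest_run(["A"] * 100 + ["B"] * 100))  # 100
```
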
>>> > Is there any way we can mimic the "fair" behavior of single-queue
>>> consumers
>>> > with multi-queue consumers?
>>> >
>>> > Thanks,
>>> > Helen
>>> >
>>>
>>
>>
>
