It doesn't really affect that case at all, I think. It only affects
transacted sessions, and although it slightly moves the location of a
call that is also made for synchronous consumers, the change only
really makes a difference to asynchronous consumers.

Robbie

On 31 October 2011 15:19, Rajith Attapattu <[email protected]> wrote:
> On Mon, Oct 31, 2011 at 9:18 AM, Robbie Gemmell
> <[email protected]> wrote:
>> Hi all,
>>
>> Over the weekend I made a change to the 0-10 Java client so that using
>> prefetch=1 with transacted sessions and an onMessage() listener would
>> result in the client only getting 1 message at a time, by moving the
>> sending of command completions (if necessary) which would prompt the
>> next message into postDeliver() instead of doing it before delivery.
>> This was in response to a user on the mailing lists trying to
>> configure the client such that when an extra long period of processing
>> occurred in his listener, other clients would be able to pick up the
>> rest of the messages on the queue. However, whilst this behaviour is
>> what the 0-8 client does in this case, after making the change I
>> decided it is really prefetch=0 behaviour for the 0-10 client
>> (prefetch=0 being something the 0-8 client doesn't support).
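For illustration, the ordering change described above can be sketched roughly like this (the class and method names below are hypothetical stand-ins, not the actual Qpid client code; only postDeliver() is named in the mail):

```java
import java.util.ArrayList;
import java.util.List;

// Editor's sketch, not the real client: events records the order in
// which the completion and the listener callback happen.
class DeliveryOrderSketch {
    final List<String> events = new ArrayList<>();

    private void sendCompletion() { events.add("completion"); }
    private void fireListener(String m) { events.add("onMessage:" + m); }

    // Previous ordering: the command completion (which prompts the
    // broker to send the next message) went out *before* the listener
    // ran, so a second message could arrive while onMessage() was
    // still busy.
    void deliverOld(String m) {
        sendCompletion();
        fireListener(m);
    }

    // Changed ordering: the completion moves into postDeliver(), after
    // the listener returns, so with prefetch=1 the client only holds
    // one message at a time.
    void deliverNew(String m) {
        fireListener(m);
        postDeliver();
    }

    void postDeliver() { sendCompletion(); }
}
```

The point being that the same sendCompletion() call is made in both cases; only its position relative to the listener callback changes.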
>>
>> The same user, it turns out, had also tried prefetch=0 and found it
>> to behave rather oddly, almost like prefetch=2. Deciding that I should
>> perhaps instead revert the change I made and look at fixing
>> prefetch=0, I had a quick play with it to see what it did and found it
>> to behave similarly to his observations. Looking at the code, prefetch
>> of 0 appears to be horribly broken for an asynchronous message
>> listener in its current form, and will indeed act more like
>> prefetch=2. It appears that message credit is sent when the connection
>> is started, and when a message listener is set (seemingly regardless
>> of whether the connection is started, which would itself be a bug if
>> true), meaning more than 1 credit can be issued for something that
>> should have at most 1 message at a time. Regardless of this,
>> prefetch=0 won't work for a message listener because the client also
>> then sends a credit to provoke the next delivery just *before* passing
>> the message to the application, instead of afterwards as it should.
>>
>> The change I made previously was very small (same code, called in a
>> different place) and doesn't really make any difference to synchronous
>> consumers because the completions are still sent before the
>> application gets the message. It only really affects asynchronous
>> transacted consumers with a prefetch of 1 configured, and gives them
>> the ability to achieve the desired behaviour, but at the expense of
>> effectively being out-by-1 on the count. The changes to get prefetch=0
>> working for such cases could be larger, and I probably won't get time
>> to look at doing that for this release either way.
>>
>> So the question is: keep the current change in, to give users a means
>> of getting the client to do what they want right now, or revert it
>> and fix prefetch=0 later? I'd say keep it; what do the rest of you
>> think?
>
> This is in fact an issue that we had identified (in the context of the
> JCA client) and were aiming to fix soon after the release.
> I'd argue that the logic around credits is broken in general - QPID-2604.
> I haven't really looked at your change closely, so I'm not sure how it
> affects the above case.
>
>> Robbie
>>
>> ---------------------------------------------------------------------
>> Apache Qpid - AMQP Messaging Implementation
>> Project:      http://qpid.apache.org
>> Use/Interact: mailto:[email protected]
>>
>>
>
