On Wed, May 19, 2010 at 10:32 AM, Martin Ritchie <[email protected]> wrote:
> On 19 May 2010 14:36, Rajith Attapattu <[email protected]> wrote:
>> On Wed, May 19, 2010 at 9:16 AM, Martin Ritchie <[email protected]> wrote:
>>> On 18 May 2010 23:08, Rajith Attapattu <[email protected]> wrote:
>>>> In the JMS client we use the prefetch value in determining two
>>>> important heuristics in addition to using it for flow control.
>>>>
>>>> 1. We use the prefetch value to determine the batch size for message acks.
>>>>    Ex. if the ack mode is auto-ack or dups-ok-ack and unackedCount >=
>>>> prefetch/2, then we flush the acks.
>>>>
>>>>    The above is done to ensure that the credits don't dry up while
>>>> still not incurring a penalty for frequent acking.
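
To make that heuristic concrete, it has roughly the following shape (a sketch
only; the class and method names are made up and this is not the actual client
code):

import java.util.ArrayList;
import java.util.List;

// Sketch of the batching idea: acks are accumulated and flushed once half the
// prefetch window is unacked, so credit is replenished well before it dries up.
// Note that with a prefetch of 1 this degenerates to acking every message.
class AckBatcher {
    private final int prefetch;
    private final List<Long> unackedTags = new ArrayList<Long>();

    AckBatcher(int prefetch) { this.prefetch = prefetch; }

    void onMessageDelivered(long deliveryTag) {
        unackedTags.add(deliveryTag);
        if (unackedTags.size() >= prefetch / 2) {
            flushAcks();
        }
    }

    private void flushAcks() {
        // the real client would send the accumulated acks/completions to the
        // broker here; the sketch just clears the list
        unackedTags.clear();
    }
}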
>>>
>>> This seems wrong to me. I'm guessing this is only done on the 0-10
>>> code path, as AUTO_ACK should ack for every message. A well-behaved
>>> client in AUTO_ACK mode should not expect to receive redelivered
>>> messages if it crashes before it gets to prefetch/2. That sounds more
>>> like correct behaviour for DUPS_OK.
>>
>> Folks who want the correct behaviour for AUTO_ACK can use
>> -Dqpid.sync_ack=true or sync_ack=true in the connection URL.
>> That will ensure we ack after every message.
>
> Really? The correct behaviour for AUTO_ACK is not the default!
> That is surely wrong.

Setting sync_ack=true degrades performance by an order of magnitude :)
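
For reference, it can be set either as a JVM system property or as an option in
the connection URL; something along these lines (the URL is illustrative, with
made-up credentials and broker address):

  -Dqpid.sync_ack=true
  amqp://guest:guest@clientid/test?sync_ack='true'&brokerlist='tcp://localhost:5672'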

>>> The number of acks we send back is dependent on the protocol support
>>> and the ack mode in use.
>>> AUTO_ACK needs to ack every message.
>>> DUPS_OK doesn't need to ack every one, so it can use batching if supported.
>>> This isn't always possible if another client on the same session has
>>> not consumed its messages, at least on 0-8/9/91, as ack ranges are not
>>> available, only "ack everything up to this point".
>>> CLIENT_ACK and transacted sessions are very similar. They need only send the
>>> acks when acknowledge() or commit() is called. Transacted has the
>>> added ability to send the acks in the DUPS_OK style to reduce the
>>> acking burden at the commit() barrier. Either way, credit should not be
>>> given until the acknowledge() or commit() is performed.
>>> Sure, this can lead to starvation issues, but that is the point of prefetch
>>> limits.
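
For anyone following along, in plain JMS API terms those modes are:

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Session;

// Plain JMS, nothing Qpid-specific; the comments summarise when acks go back.
class AckModes {
    static void createSessions(Connection connection) throws JMSException {
        // acked per message, so credit should come back per message
        Session autoAck = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // acks may be batched; duplicates are possible after a crash
        Session dupsOk = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
        // nothing acked until the application calls Message.acknowledge()
        Session clientAck = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        // nothing acked until commit(); the ack mode argument is ignored when transacted
        Session transacted = connection.createSession(true, Session.SESSION_TRANSACTED);
    }
}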
>>>
>>>> 2. In transacted mode we use the prefetch value to determine if we
>>>> need to send completions to ensure that credits don't dry up.
>>>>   Or else applications could potentially end up in a deadlock if
>>>> prefetch was incorrectly configured.
>>>>   Ex. prefetch is set at 10, but the application logic may be waiting
>>>> for some control message to commit, and that message may not be in the
>>>> first 10 messages.
>>>
>>> Perhaps my knowledge of the 0-10 spec is rusty, but are control
>>> messages also flow controlled using prefetch?
>>> This doesn't seem right. Prefetch should just be the number of client
>>> messages, excluding any AMQP control.
>> Sorry for not being clear. I wasn't referring to any AMQP control message.
>> I was talking about an "application level" control message.
>> Ex. when sending a large order, the application may send 4 JMS
>> messages that contain the order, and a 5th JMS message may contain
>> something in the body that says "end-of-order".
>
> That is an application error though and not something we should be
> trying to solve. They may have forgotten to send the "end-of-order"
> message. Does that mean we have to keep expanding our prefetch window
> to allow their app to work?

Good point.
But I think my original example was bad, and hence sent you in the
wrong direction.
I apologize for that.

Here is a better use case.
Consider a prefetch of 1 (which is the same as no prefetch).
In that case we **really** need to send acks/completions mid-transaction
unless we are doing transactions with size = 1.
Otherwise we will just wait indefinitely for messages to arrive.

So I think this heuristic is really there for when your transaction
size > prefetch.
And that is a very valid use case, the most prominent being the case
of no prefetch.
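
Concretely, with a credit window of 1, a naive transacted consumer reading a
batch of more than one message would hang on the second receive() unless
completions go back mid-transaction. A sketch (plain JMS; the consumer is
assumed to have a prefetch/capacity of 1):

import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

// Sketch of the no-prefetch + transacted case: with a credit window of 1 the
// broker will not send message 2 until message 1 is completed, so unless the
// client sends completions mid-transaction the second receive() blocks forever.
class TxBatchConsumer {
    static void consumeBatch(Session txSession, MessageConsumer consumer, int batchSize)
            throws Exception {
        for (int i = 0; i < batchSize; i++) {
            Message m = consumer.receive(); // blocks at i == 1 if no completion was sent
            // ... process m ...
        }
        txSession.commit(); // the acks (and the transaction) are finalised here
    }
}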

Hope this explains it better.

>>> If the client has misconfigured their app and dries up sitting on a
>>> receive() then that is correct behaviour. The broker needs to protect
>>> itself from a client asking it to hold on to a large transaction set.
>>
>> Currently neither broker is able to protect itself from large
>> tx sets, as they allow us to set any value for flow control. So even if
>> we don't send completions in the middle of the transaction it could
>> still happen, if the prefetch is sufficiently large.
>> So if the client racks up a large transaction set, the broker is
>> unable to do anything, unless we implement some broker-side limit, just
>> like we have queue limits etc.
>
> Sure, you can always configure your application to break the broker. We
> just don't make the broker say "no, you can't have that prefetch
> size", which really we should. However, sending completions in the
> middle of the transaction means you can consume MORE than the
> prefetch. Does the completion just restore credit? I assume that the
> ack isn't actually processed and still remains part of the
> transaction, i.e. if the app falls over after the completion those
> messages will come back.
>
>>>> However the above situation becomes a bit complicated when we
>>>> introduce per-destination flow control (QPID-2515), as the above are
>>>> determined at the session level.
>>>> (Perhaps the proper solution here is per-session flow control instead
>>>> of per-destination?)
>>>
>>> Isn't per session what we have now? The 0-8/9/91 Java broker operates
>>> prefetch on a per JMS Session basis.
>>
>> Well, not in 0-10.
>> Flow control is set on a per-subscription basis (isn't it per destination?).
>> So if a given session has 2 consumers and max-prefetch=10,
>> then the session can have 20 messages outstanding.
>
> That makes sense and would be very easy to add to the 0-8/9/91 code
> path. It is rather unfortunate that we used the same property, as one
> pertains to the session and one to a subscriber. Perhaps we can make
> two new ones,
> max-session-prefetch and max-subscription-prefetch, so users know
> exactly what they are setting. That way migration from 0-8/9/91 to 0-10
> becomes easier. At least the default wouldn't mean the app gets
> fewer messages. :)
>
> Martin
>
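
For concreteness, that might end up looking something like this in the
connection URL (option names purely illustrative; nothing like this exists
today):

  amqp://guest:guest@clientid/test?max_session_prefetch='500'&max_subscription_prefetch='100'&brokerlist='tcp://localhost:5672'
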
>>>> Therefore, when capacity is specified at the destination level (which
>>>> overrides the connection default), the above calculations may cause
>>>> undesired behaviour.
>>>> Possible solutions that come to mind:
>>>>
>>>> 1. We allow these two heuristics to be explicitly configured, e.g.
>>>> qpid.ack-batch-size and qpid.tx-ack-batch-size.
>>>>    Pros: This may allow the application developer/admin to tune the
>>>> client based on the application's behaviour.
>>>>    Cons: If the application deviates from the predicted behaviour,
>>>> this could affect performance or worse.
>>>>
>>>> 2. We use some sort of mathematical formula that uses the
>>>> capacity specified on the destinations of the consumers attached to
>>>> that session to determine the batch sizes.
>>>>     Pros: Puts less burden on the end users.
>>>>     Cons: Creating such a formula may not be straightforward.
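
To give option 2 above some shape, it could be as simple as the following
(purely a strawman, not a worked-out proposal):

import java.util.Collection;
import java.util.Collections;

// Strawman only: derive the ack batch size from the smallest per-consumer
// capacity on the session, so acks are flushed before the most constrained
// consumer can run out of credit.
class BatchSizeFormula {
    static int ackBatchSize(Collection<Integer> consumerCapacities) {
        int smallest = Collections.min(consumerCapacities);
        return Math.max(1, smallest / 2);
    }
}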
>>>>
>>>> Your comments and suggestions are most appreciated !
>>>
>>> Limiting the consumer on a per-destination basis would work, but I
>>> don't think that this would impact flow control. Speaking from a 0-8/9/91
>>> Java broker pov, a consuming session is only flow controlled when using
>>> NO_ACK mode, and it is the client that tells the broker to stop
>>> sending. Producer Flow Control also uses flow control to stop
>>> publication from the client, but this is on the producing side of the
>>> session.
>>>
>>> I would have thought that a prefetch limit per destination per
>>> consumer would be controlled by the broker and the configuration it
>>> was given when the consumer started. This is how the Java broker
>>> currently performs prefetch: recording the number of messages sent per
>>> session and stopping when credit is exhausted. There are no flow
>>> control frames sent to indicate this; the broker just stops.
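
For comparison, the per-session accounting described above has roughly this
shape (a sketch, not the actual broker code):

// The broker counts deliveries against the session's prefetch and simply
// stops sending when the window is full; no flow control frames are involved.
class SessionCredit {
    private final int prefetch;
    private int outstanding;

    SessionCredit(int prefetch) { this.prefetch = prefetch; }

    synchronized boolean tryDeliver() {
        if (outstanding >= prefetch) {
            return false;       // window full: just stop delivering
        }
        outstanding++;
        return true;
    }

    synchronized void onAcknowledged(int count) {
        outstanding -= count;   // credit restored as acks arrive
    }
}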
>>
>> In 0-10 the credits are per subscription, not per session.
>>
>>> I don't think we need complex heuristics. I'd just go with our
>>> current prefetch applying to the session and let developers configure
>>> a per-destination, per-consumer limit in addition. Arguably we should
>>> throw an exception if we try to create consumers whose destination
>>> prefetch totals more than their underlying session's.
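
A check like that might look roughly like this (illustrative only):

import java.util.Collection;

// Illustrative only: reject consumer creation when the combined per-destination
// prefetch would exceed the session-level prefetch.
class PrefetchValidator {
    static void check(int sessionPrefetch, Collection<Integer> consumerCapacities) {
        int total = 0;
        for (int capacity : consumerCapacities) {
            total += capacity;
        }
        if (total > sessionPrefetch) {
            throw new IllegalArgumentException("Combined consumer capacity " + total
                    + " exceeds the session prefetch of " + sessionPrefetch);
        }
    }
}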
>>>
>>> Thoughts?
>>>
>>> Martin
>>>
>>>> Regards,
>>>>
>>>> Rajith Attapattu
>>>> Red Hat
>>>> http://rajith.2rlabs.com/
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Martin Ritchie
>>>
>>>
>>>
>>
>>
>>
>> --
>> Regards,
>>
>> Rajith Attapattu
>> Red Hat
>> http://rajith.2rlabs.com/
>>
>>
>>
>
>
>
> --
> Martin Ritchie
>
>
>



-- 
Regards,

Rajith Attapattu
Red Hat
http://rajith.2rlabs.com/

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:[email protected]
