Hi Robbie,

I was testing against trunk, and yes, I was calling commit() after my
simulated processing delay.
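
Concretely, the onMessage() handler in my test does roughly this (a
simplified sketch; the fixed 50s sleep stands in for real work):

    public void onMessage(Message message) {
        try {
            Thread.sleep(50 * 1000); // simulated processing delay
            session.commit();        // commit only after the delay
        } catch (Exception e) {
            e.printStackTrace();
        }
    }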

Thanks,
Praveen

On Thu, Oct 27, 2011 at 5:11 PM, Robbie Gemmell <[email protected]> wrote:

> Just to be clear for when I look at it... were you using trunk or 0.12
> for those tests, and presumably you were calling commit after your
> simulated processing delay?
>
> Robbie
>
> On 28 October 2011 00:28, Praveen M <[email protected]> wrote:
> > Hi Robbie,
> >
> > I was using asynchronous onMessage delivery with a transacted session
> > for my tests.
> >
> > So from your email, I'm afraid it might be an issue. It would be great
> > if you could investigate this a little and keep us updated.
> >
> > Thanks a lot,
> > Praveen
> >
> > On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
> > <[email protected]> wrote:
> >
> >> From the below, would I be right in thinking you were using receive()
> >> calls with an AutoAck session? If so then you would see the behaviour
> >> you observed, as the message gets acked just before receive() returns,
> >> which makes the broker send the next one to the client. That shouldn't
> >> happen if you were using asynchronous onMessage delivery (since the
> >> ack gets sent when the onMessage() handler returns), or if you used a
> >> ClientAck or Transacted session in which you only acknowledged the
> >> message / committed the session after the processing is complete.
> >>
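> >> For example, with a ClientAck session the receive side would look
> >> roughly like this (an untested sketch; the queue name is illustrative
> >> and process() stands in for your long-running work):
> >>
> >>     Session session =
> >>         connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
> >>     MessageConsumer consumer =
> >>         session.createConsumer(session.createQueue("test.queue"));
> >>
> >>     Message message = consumer.receive();
> >>     process(message);       // long-running processing
> >>     message.acknowledge();  // ack only once processing is done
> >>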
> >> I must admit to having never used the client with prefetch set to 0,
> >> which should in theory give you what you are looking for even with
> >> AutoAck, but based on your comments appears not to have. I will try to
> >> take a look into that at the weekend to see if there are any obvious
> >> issues we can raise JIRAs for.
> >>
> >> Robbie
> >>
> >> On 26 October 2011 23:48, Praveen M <[email protected]> wrote:
> >> > Hi Jakub,
> >> >
> >> > Thanks for your reply. Yes, I did find the prefetch model, reran my
> >> > test, and ran into another issue.
> >> >
> >> > I set the prefetch to 1 and ran the same test described in my
> >> > earlier mail.
> >> >
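> >> > I set it with the maxprefetch option on the connection URL, which I
> >> > am assuming is the right way to do it (host and credentials are just
> >> > my local defaults):
> >> >
> >> >     amqp://guest:guest@clientid/test
> >> >         ?brokerlist='tcp://localhost:5672'&maxprefetch='1'
> >> >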
> >> > In this case the behavior I see is: the 1st consumer gets the 1st
> >> > message and works on it for a while; the 2nd consumer consumes 8
> >> > messages and then does nothing (even though there was 1 more
> >> > unconsumed message). When the first consumer completed its
> >> > long-running message, it got around to consuming the remaining
> >> > message. However, I was expecting the 2nd consumer to dequeue all 9
> >> > remaining messages while the 1st consumer was busy working on the
> >> > long message.
> >> >
> >> > Then I thought that perhaps the prefetch count meant that, while a
> >> > consumer is working on a message, another message from the queue is
> >> > prefetched to the consumer from the persistent store, since my
> >> > prefetch count is 1. That could explain why I saw the behavior
> >> > above.
> >> >
> >> > What I wanted to achieve was to actually turn off any kind of
> >> > prefetching (yes, I'm OK with taking the throughput hit).
> >> >
> >> > So I re-ran my test with prefetch = 0, and saw a really weird
> >> > result.
> >> >
> >> > With prefetch 0, the 1st consumer gets the 1st message and works on
> >> > it for a while, while the 2nd consumer consumes 7 messages (why 7?)
> >> > and then does nothing (even though there were 2 more unconsumed
> >> > messages). When the 1st consumer completed processing its message,
> >> > it got to consume the remaining two messages too. (Did it kind of
> >> > prefetch 2?)
> >> >
> >> > Can someone please tell me whether this is a bug or I'm doing
> >> > something completely wrong? I'm using the latest Java Broker &
> >> > client (from trunk) with DerbyMessageStore for my tests.
> >> >
> >> > Also, can someone please tell me what'd be the best way to turn off
> >> > prefetching?
> >> >
> >> > Thanks a lot,
> >> > Praveen
> >> >
> >> >
> >> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz <[email protected]>
> >> > wrote:
> >> >
> >> >> Hi Praveen,
> >> >>
> >> >> Have you set the capacity / prefetch for the receivers to one
> >> >> message? I believe the capacity defines how many messages can be
> >> >> "buffered" by the client API in the background while you are still
> >> >> processing the first message. That may explain why both your
> >> >> clients receive 5 messages, even when the processing in the first
> >> >> client takes a longer time.
> >> >>
> >> >> Regards
> >> >> Jakub
> >> >>
> >> >> On Wed, Oct 26, 2011 at 03:02, Praveen M <[email protected]>
> >> >> wrote:
> >> >> > Hi,
> >> >> >
> >> >> > I ran the following test
> >> >> >
> >> >> > 1) I created 1 queue.
> >> >> > 2) Registered 2 consumers to the queue.
> >> >> > 3) Enqueued 10 messages to the queue. [The first enqueued message
> >> >> > is long-running: I simulated it such that the first message takes
> >> >> > about 50 seconds to be processed on consumption.]
> >> >> > 4) Once the enqueue is committed, the 2 consumers each pick a
> >> >> > message.
> >> >> > 5) The 1st consumer that got the long-running message works on it
> >> >> > for a long time, while the second consumer that got the second
> >> >> > message keeps processing and moving on to the next message, but
> >> >> > only goes as far as processing 5 of the 10 messages enqueued.
> >> >> > Then the 2nd consumer gives up processing.
> >> >> > 6) When the 1st consumer with the long-running message completes,
> >> >> > it then ends up processing the remaining messages and my test
> >> >> > completes.
> >> >> >
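> >> >> > The consumer side of the test looks roughly like this (a
> >> >> > trimmed-down sketch; the queue name and the "index" message
> >> >> > property are just how my harness marks the messages):
> >> >> >
> >> >> >     // Each consumer gets its own transacted session so the two
> >> >> >     // can process messages in parallel.
> >> >> >     for (int i = 1; i <= 2; i++) {
> >> >> >         final Session session =
> >> >> >             connection.createSession(true, Session.SESSION_TRANSACTED);
> >> >> >         MessageConsumer consumer =
> >> >> >             session.createConsumer(session.createQueue("test.queue"));
> >> >> >         consumer.setMessageListener(new MessageListener() {
> >> >> >             public void onMessage(Message message) {
> >> >> >                 try {
> >> >> >                     // The 1st enqueued message simulates ~50s of work.
> >> >> >                     if (message.getIntProperty("index") == 1) {
> >> >> >                         Thread.sleep(50 * 1000);
> >> >> >                     }
> >> >> >                     session.commit();
> >> >> >                 } catch (Exception e) {
> >> >> >                     e.printStackTrace();
> >> >> >                 }
> >> >> >             }
> >> >> >         });
> >> >> >     }
> >> >> >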
> >> >> > So it seems like the two consumers were trying to take a fair
> >> >> > share of the messages, regardless of the time it takes to process
> >> >> > individual messages: 10 messages enqueued, consumer 1 processed
> >> >> > its share of 5 messages, and consumer 2 processed its share of 5
> >> >> > messages.
> >> >> >
> >> >> >
> >> >> > This is kind of against the behavior that I'd like to see. The
> >> >> > desired behavior in my case is that each consumer keeps going as
> >> >> > long as it is done with its current message and there are other
> >> >> > messages to process.
> >> >> >
> >> >> > In the above test, I'd expect that while consumer 1 is working
> >> >> > on the long message, the second consumer works its way through
> >> >> > all the remaining messages.
> >> >> >
> >> >> > Is there some config that I'm missing that could cause this
> >> >> > effect? Any advice on tackling this would be great.
> >> >> >
> >> >> > Also, can someone please explain in what order messages are
> >> >> > delivered to the consumers in the following cases?
> >> >> >
> >> >> > Case 1)
> >> >> > There is a single queue with more than 1 message in it and
> >> >> > multiple consumers registered to it.
> >> >> >
> >> >> > Case 2)
> >> >> > There are multiple queues, each with more than 1 message in it
> >> >> > and multiple consumers registered to it.
> >> >> >
> >> >> >
> >> >> >
> >> >> > Thank you,
> >> >> > --
> >> >> > -Praveen
> >> >> >
> >> >>
> >> >
> >> >
> >> > --
> >> > -Praveen
> >> >
> >>
> >
> >
> > --
> > -Praveen
> >
>
> ---------------------------------------------------------------------
> Apache Qpid - AMQP Messaging Implementation
> Project:      http://qpid.apache.org
> Use/Interact: mailto:[email protected]
>
>


-- 
-Praveen
