Re: Qpid Java Broker performance lower than expected

2011-10-27 Thread vipun
The parameters we used with the original QpidBench program are as follows:

-c 1 -i 1000 -s 1024 -m both --timestamp false --message-id false
--message-cache true --persistent true --jms true
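
For anyone wanting to reproduce this outside QpidBench, a rough JMS sketch
approximating those settings is below. The broker URL and queue name are
placeholders, and we are assuming -i is the message count and -s the payload
size in bytes; it uses the 0.x Java client's AMQConnectionFactory.

import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.qpid.client.AMQConnectionFactory;

public class PersistentPublishSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection URL and queue name -- not taken from the original run.
        ConnectionFactory factory = new AMQConnectionFactory(
                "amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'");
        Connection connection = factory.createConnection();
        connection.start();

        // --jms true / --persistent true: JMS API with persistent delivery.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("perfQueue");
        MessageProducer producer = session.createProducer(queue);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);

        byte[] payload = new byte[1024];            // -s 1024
        for (int i = 0; i < 1000; i++) {            // -i 1000 (assumed message count)
            BytesMessage message = session.createBytesMessage();
            message.writeBytes(payload);
            producer.send(message);
        }
        connection.close();
    }
}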

--
View this message in context: 
http://apache-qpid-users.2158936.n2.nabble.com/Qpid-Java-Broker-performance-lower-than-expected-tp6925405p6939008.html
Sent from the Apache Qpid users mailing list archive at Nabble.com.




Re: Qpid Java Broker performance lower than expected

2011-10-27 Thread vipun
Hi Robbie,
  Thanks for the reply. We also didn't see much performance difference
between broker instances based on the Derby store and the Berkeley DB store.
However, the readings we obtained now are quite reasonable. We also tinkered a
little with the QpidBench program to allow for multiple producers and
consumers; here is a summary of the readings we have with Derby persistence.

producer threads   consumer threads   msg size   producer rate   consumer rate
1                  1                  1k         921.74          877.73
10                 10                 1k         4357.87         7132.64
50                 50                 1k         5511.71         12670.5
40                 70                 1k         5753.82         24654.24
50                 60                 1k         6319.43         21525.96

Regards
Vinay

--
View this message in context: 
http://apache-qpid-users.2158936.n2.nabble.com/Qpid-Java-Broker-performance-lower-than-expected-tp6925405p6938973.html
Sent from the Apache Qpid users mailing list archive at Nabble.com.




Re: 1 Queue with 2 Consumers - turn off pre-fetching?

2011-10-27 Thread Robbie Gemmell
Ok, I haven't actually tried this yet, but after sneaking a look at the
code I am pretty sure I see a problem in the client, specific to
transacted AMQP 0-10 sessions with prefetch=1, that would cause the
behaviour you are seeing. I'll look into it at the weekend. Time for
sleep, before 3am comes along ;)
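
For reference, a minimal sketch of how prefetch is usually limited on the Java
client, assuming the maxprefetch connection URL option behaves as described in
the client documentation of this era (host, credentials and queue handling are
placeholders; the max_prefetch JVM system property is the other commonly
mentioned knob):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;

import org.apache.qpid.client.AMQConnectionFactory;

public class PrefetchOneSketch {
    public static void main(String[] args) throws Exception {
        // maxprefetch='1' asks the client to hold at most one unacknowledged
        // message per consumer; adjust host/credentials for your environment.
        String url = "amqp://guest:guest@clientid/test"
                + "?brokerlist='tcp://localhost:5672'&maxprefetch='1'";
        ConnectionFactory factory = new AMQConnectionFactory(url);
        Connection connection = factory.createConnection();
        connection.start();
        // ... create a transacted session and consumers as usual ...
        connection.close();
    }
}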

Robbie

On 28 October 2011 01:18, Praveen M  wrote:
> Hi Robbie,
>
> I was testing against trunk, and also, I was calling commit after my
> simulated processing delay, yes.
>
> Thanks,
> Praveen
>
> On Thu, Oct 27, 2011 at 5:11 PM, Robbie Gemmell 
> wrote:
>
>> Just to be clear for when I look at it...were you using trunk or 0.12
>> for those tests, and presumably you were calling commit after your
>> simulated processing delay?
>>
>> Robbie
>>
>> On 28 October 2011 00:28, Praveen M  wrote:
>> > Hi Robbie,
>> >
>> > I was using asynchronous onMessage delivery with transacted session for
>> my
>> > tests.
>> >
>> > So from your email, I'm afraid it might be an issue. It will be great if
>> you
>> > could investigate a little on this and keep us update.
>> >
>> > Thanks a lot,
>> > Praveen
>> >
>> > On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
>> > wrote:
>> >
>> >> From the below, would I be right in thinking you were using receive()
>> >> calls with an AutoAck session? If so then you would see the behaviour
>> >> you observed as the message gets acked just before receive() returns,
>> >> which makes the broker send the next one to the client. That shouldnt
>> >> happen if you were using asynchronous onMessage delivery (since the
>> >> ack gets since when the onMessage() handler returns), or if you you
>> >> used a ClientAck or Transacted session in which you only acknowledged
>> >> the message / commited the session after the processing is complete.
>> >>
>> >> I must admit to having never used the client with prefetch set to 0,
>> >> which should in theory give you what you are looking for even with
>> >> AutoAck but based on your comments appears not to have. I will try and
>> >> take a look into that at the weekend to see if there are any obvious
>> >> issues we can JIRA for fixing.
>> >>
>> >> Robbie
>> >>
>> >> On 26 October 2011 23:48, Praveen M  wrote:
>> >> > Hi Jakub,
>> >> >
>> >> > Thanks for your reply. Yes I did find the prefetch model and reran my
>> >> test
>> >> > and now ran into another issue.
>> >> >
>> >> > I set the prefetch to 1 and ran the same test described in my earlier
>> >> mail.
>> >> >
>> >> > In this case the behavior I see is,
>> >> > The 1st consumer gets the 1st message and works on it for a while, the
>> >> 2nd
>> >> > consumer consumes 8 messages and then does nothing(even though there
>> was
>> >> 1
>> >> > more unconsumed message). When the first consumer completed its long
>> >> running
>> >> > message it got around and consumed the remaining 1 message. However,
>>  I
>> >> was
>> >> > expecting the 2nd consumer to dequeue all 9 messages(the number of
>> >> remaining
>> >> > messages) while the 1st consumer was busy working on the long message.
>> >> >
>> >> > Then, I thought, perhaps the prefetch count meant that, when a
>> consumer
>> >> is
>> >> > working on a message, another message in the queue is prefetched to
>> the
>> >> > consumer from the persistant store as my prefetch count is 1. That
>> could
>> >> > explain why I saw the behavior as above.
>> >> >
>> >> > What i wanted to achieve was to actually turn of any kinda prefetching
>> >> > (Yeah, I'm ok with taking the throughput hit)
>> >> >
>> >> > So I re ran my test now with prefetch = 0, and saw a really weird
>> result.
>> >> >
>> >> > With prefetch 0, the 1st consumer gets the 1st message and works on it
>> >> for a
>> >> > while, which the 2nd consumer consumes 7 messages(why 7?) and then
>> does
>> >> > nothing(even though there were 2 more unconsumed messages). When the
>> 1st
>> >> > consumer completed processing it's message it got to consume the
>> >> remaining
>> >> > two messages too. (Did it kinda prefetch 2?)
>> >> >
>> >> > Can someone please tell me if Is this a bug or am I doing something
>> >> > completely wrong? I'm using the latest Java Broker & client (from
>> trunk)
>> >> > with DerbyMessageStore for my tests.
>> >> >
>> >> > Also, can someone please tell me what'd be the best way to turn off
>> >> > prefetching?
>> >> >
>> >> > Thanks a lot,
>> >> > Praveen
>> >> >
>> >> >
>> >> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz 
>> wrote:
>> >> >
>> >> >> Hi Praveen,
>> >> >>
>> >> >> Have you set the capacity / prefetch for the receivers to one
>> message?
>> >> >> I believe the capacity defines how many messages can be "buffered" by
>> >> >> the client API in background while you are still processing the first
>> >> >> message. That may cause that both your clients receive 5 messages,
>> >> >> even when the processing in the first client takes a longer time.
>> >> >>
>> >> >> Regards
>> >> >> Jakub
>> >> >>
>> >> >> On Wed, Oct 26, 2011 at 03:02, Praveen M 
>> >> wrote:
>> >> >> > 

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

2011-10-27 Thread Praveen M
Hi Robbie,

I was testing against trunk, and yes, I was calling commit after my
simulated processing delay.

Thanks,
Praveen

On Thu, Oct 27, 2011 at 5:11 PM, Robbie Gemmell wrote:

> Just to be clear for when I look at it...were you using trunk or 0.12
> for those tests, and presumably you were calling commit after your
> simulated processing delay?
>
> Robbie
>
> On 28 October 2011 00:28, Praveen M  wrote:
> > Hi Robbie,
> >
> > I was using asynchronous onMessage delivery with transacted session for
> my
> > tests.
> >
> > So from your email, I'm afraid it might be an issue. It will be great if
> you
> > could investigate a little on this and keep us update.
> >
> > Thanks a lot,
> > Praveen
> >
> > On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
> > wrote:
> >
> >> From the below, would I be right in thinking you were using receive()
> >> calls with an AutoAck session? If so then you would see the behaviour
> >> you observed as the message gets acked just before receive() returns,
> >> which makes the broker send the next one to the client. That shouldnt
> >> happen if you were using asynchronous onMessage delivery (since the
> >> ack gets since when the onMessage() handler returns), or if you you
> >> used a ClientAck or Transacted session in which you only acknowledged
> >> the message / commited the session after the processing is complete.
> >>
> >> I must admit to having never used the client with prefetch set to 0,
> >> which should in theory give you what you are looking for even with
> >> AutoAck but based on your comments appears not to have. I will try and
> >> take a look into that at the weekend to see if there are any obvious
> >> issues we can JIRA for fixing.
> >>
> >> Robbie
> >>
> >> On 26 October 2011 23:48, Praveen M  wrote:
> >> > Hi Jakub,
> >> >
> >> > Thanks for your reply. Yes I did find the prefetch model and reran my
> >> test
> >> > and now ran into another issue.
> >> >
> >> > I set the prefetch to 1 and ran the same test described in my earlier
> >> mail.
> >> >
> >> > In this case the behavior I see is,
> >> > The 1st consumer gets the 1st message and works on it for a while, the
> >> 2nd
> >> > consumer consumes 8 messages and then does nothing(even though there
> was
> >> 1
> >> > more unconsumed message). When the first consumer completed its long
> >> running
> >> > message it got around and consumed the remaining 1 message. However,
>  I
> >> was
> >> > expecting the 2nd consumer to dequeue all 9 messages(the number of
> >> remaining
> >> > messages) while the 1st consumer was busy working on the long message.
> >> >
> >> > Then, I thought, perhaps the prefetch count meant that, when a
> consumer
> >> is
> >> > working on a message, another message in the queue is prefetched to
> the
> >> > consumer from the persistant store as my prefetch count is 1. That
> could
> >> > explain why I saw the behavior as above.
> >> >
> >> > What i wanted to achieve was to actually turn of any kinda prefetching
> >> > (Yeah, I'm ok with taking the throughput hit)
> >> >
> >> > So I re ran my test now with prefetch = 0, and saw a really weird
> result.
> >> >
> >> > With prefetch 0, the 1st consumer gets the 1st message and works on it
> >> for a
> >> > while, which the 2nd consumer consumes 7 messages(why 7?) and then
> does
> >> > nothing(even though there were 2 more unconsumed messages). When the
> 1st
> >> > consumer completed processing it's message it got to consume the
> >> remaining
> >> > two messages too. (Did it kinda prefetch 2?)
> >> >
> >> > Can someone please tell me if Is this a bug or am I doing something
> >> > completely wrong? I'm using the latest Java Broker & client (from
> trunk)
> >> > with DerbyMessageStore for my tests.
> >> >
> >> > Also, can someone please tell me what'd be the best way to turn off
> >> > prefetching?
> >> >
> >> > Thanks a lot,
> >> > Praveen
> >> >
> >> >
> >> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz 
> wrote:
> >> >
> >> >> Hi Praveen,
> >> >>
> >> >> Have you set the capacity / prefetch for the receivers to one
> message?
> >> >> I believe the capacity defines how many messages can be "buffered" by
> >> >> the client API in background while you are still processing the first
> >> >> message. That may cause that both your clients receive 5 messages,
> >> >> even when the processing in the first client takes a longer time.
> >> >>
> >> >> Regards
> >> >> Jakub
> >> >>
> >> >> On Wed, Oct 26, 2011 at 03:02, Praveen M 
> >> wrote:
> >> >> > Hi,
> >> >> >
> >> >> > I ran the following test
> >> >> >
> >> >> > 1) I created 1 Queue
> >> >> > 2) Registered 2 consumers to the queue
> >> >> > 3) Enqueued 10 messages to the Queue. [ The first enqueued message
> is
> >> >> long
> >> >> > running. I simulated such that the first message on consumption
> takes
> >> >> about
> >> >> > 50 seconds to be processed]
> >> >> > 4) Once the enqueue is committed, the 2 consumers each pick a
> message.
> >> >> > 5) The 1st consumer that got the lo

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

2011-10-27 Thread Robbie Gemmell
Just to be clear for when I look at it...were you using trunk or 0.12
for those tests, and presumably you were calling commit after your
simulated processing delay?

Robbie

On 28 October 2011 00:28, Praveen M  wrote:
> Hi Robbie,
>
> I was using asynchronous onMessage delivery with transacted session for my
> tests.
>
> So from your email, I'm afraid it might be an issue. It will be great if you
> could investigate a little on this and keep us update.
>
> Thanks a lot,
> Praveen
>
> On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
> wrote:
>
>> From the below, would I be right in thinking you were using receive()
>> calls with an AutoAck session? If so then you would see the behaviour
>> you observed as the message gets acked just before receive() returns,
>> which makes the broker send the next one to the client. That shouldnt
>> happen if you were using asynchronous onMessage delivery (since the
>> ack gets since when the onMessage() handler returns), or if you you
>> used a ClientAck or Transacted session in which you only acknowledged
>> the message / commited the session after the processing is complete.
>>
>> I must admit to having never used the client with prefetch set to 0,
>> which should in theory give you what you are looking for even with
>> AutoAck but based on your comments appears not to have. I will try and
>> take a look into that at the weekend to see if there are any obvious
>> issues we can JIRA for fixing.
>>
>> Robbie
>>
>> On 26 October 2011 23:48, Praveen M  wrote:
>> > Hi Jakub,
>> >
>> > Thanks for your reply. Yes I did find the prefetch model and reran my
>> test
>> > and now ran into another issue.
>> >
>> > I set the prefetch to 1 and ran the same test described in my earlier
>> mail.
>> >
>> > In this case the behavior I see is,
>> > The 1st consumer gets the 1st message and works on it for a while, the
>> 2nd
>> > consumer consumes 8 messages and then does nothing(even though there was
>> 1
>> > more unconsumed message). When the first consumer completed its long
>> running
>> > message it got around and consumed the remaining 1 message. However,  I
>> was
>> > expecting the 2nd consumer to dequeue all 9 messages(the number of
>> remaining
>> > messages) while the 1st consumer was busy working on the long message.
>> >
>> > Then, I thought, perhaps the prefetch count meant that, when a consumer
>> is
>> > working on a message, another message in the queue is prefetched to the
>> > consumer from the persistant store as my prefetch count is 1. That could
>> > explain why I saw the behavior as above.
>> >
>> > What i wanted to achieve was to actually turn of any kinda prefetching
>> > (Yeah, I'm ok with taking the throughput hit)
>> >
>> > So I re ran my test now with prefetch = 0, and saw a really weird result.
>> >
>> > With prefetch 0, the 1st consumer gets the 1st message and works on it
>> for a
>> > while, which the 2nd consumer consumes 7 messages(why 7?) and then does
>> > nothing(even though there were 2 more unconsumed messages). When the 1st
>> > consumer completed processing it's message it got to consume the
>> remaining
>> > two messages too. (Did it kinda prefetch 2?)
>> >
>> > Can someone please tell me if Is this a bug or am I doing something
>> > completely wrong? I'm using the latest Java Broker & client (from trunk)
>> > with DerbyMessageStore for my tests.
>> >
>> > Also, can someone please tell me what'd be the best way to turn off
>> > prefetching?
>> >
>> > Thanks a lot,
>> > Praveen
>> >
>> >
>> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz  wrote:
>> >
>> >> Hi Praveen,
>> >>
>> >> Have you set the capacity / prefetch for the receivers to one message?
>> >> I believe the capacity defines how many messages can be "buffered" by
>> >> the client API in background while you are still processing the first
>> >> message. That may cause that both your clients receive 5 messages,
>> >> even when the processing in the first client takes a longer time.
>> >>
>> >> Regards
>> >> Jakub
>> >>
>> >> On Wed, Oct 26, 2011 at 03:02, Praveen M 
>> wrote:
>> >> > Hi,
>> >> >
>> >> > I ran the following test
>> >> >
>> >> > 1) I created 1 Queue
>> >> > 2) Registered 2 consumers to the queue
>> >> > 3) Enqueued 10 messages to the Queue. [ The first enqueued message is
>> >> long
>> >> > running. I simulated such that the first message on consumption takes
>> >> about
>> >> > 50 seconds to be processed]
>> >> > 4) Once the enqueue is committed, the 2 consumers each pick a message.
>> >> > 5) The 1st consumer that got the long running message works on it for
>> a
>> >> long
>> >> > time while the second consumer that got the second message keeps
>> >> processing
>> >> > and going to the next message, but  only goes as far until it
>> processes 5
>> >> of
>> >> > the 10 messages enqueued. Then the 2nd consumer gives up processing.
>> >> > 6) When the 1st consumer with the  long running message completes, it
>> >> then
>> >> > ends up processing the remaining messages and m

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

2011-10-27 Thread Praveen M
Hi Robbie,

I was using asynchronous onMessage delivery with a transacted session for my
tests.

So from your email, I'm afraid it might be an issue. It would be great if you
could investigate this a little and keep us updated.

Thanks a lot,
Praveen

On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
wrote:

> From the below, would I be right in thinking you were using receive()
> calls with an AutoAck session? If so then you would see the behaviour
> you observed as the message gets acked just before receive() returns,
> which makes the broker send the next one to the client. That shouldnt
> happen if you were using asynchronous onMessage delivery (since the
> ack gets since when the onMessage() handler returns), or if you you
> used a ClientAck or Transacted session in which you only acknowledged
> the message / commited the session after the processing is complete.
>
> I must admit to having never used the client with prefetch set to 0,
> which should in theory give you what you are looking for even with
> AutoAck but based on your comments appears not to have. I will try and
> take a look into that at the weekend to see if there are any obvious
> issues we can JIRA for fixing.
>
> Robbie
>
> On 26 October 2011 23:48, Praveen M  wrote:
> > Hi Jakub,
> >
> > Thanks for your reply. Yes I did find the prefetch model and reran my
> test
> > and now ran into another issue.
> >
> > I set the prefetch to 1 and ran the same test described in my earlier
> mail.
> >
> > In this case the behavior I see is,
> > The 1st consumer gets the 1st message and works on it for a while, the
> 2nd
> > consumer consumes 8 messages and then does nothing(even though there was
> 1
> > more unconsumed message). When the first consumer completed its long
> running
> > message it got around and consumed the remaining 1 message. However,  I
> was
> > expecting the 2nd consumer to dequeue all 9 messages(the number of
> remaining
> > messages) while the 1st consumer was busy working on the long message.
> >
> > Then, I thought, perhaps the prefetch count meant that, when a consumer
> is
> > working on a message, another message in the queue is prefetched to the
> > consumer from the persistant store as my prefetch count is 1. That could
> > explain why I saw the behavior as above.
> >
> > What i wanted to achieve was to actually turn of any kinda prefetching
> > (Yeah, I'm ok with taking the throughput hit)
> >
> > So I re ran my test now with prefetch = 0, and saw a really weird result.
> >
> > With prefetch 0, the 1st consumer gets the 1st message and works on it
> for a
> > while, which the 2nd consumer consumes 7 messages(why 7?) and then does
> > nothing(even though there were 2 more unconsumed messages). When the 1st
> > consumer completed processing it's message it got to consume the
> remaining
> > two messages too. (Did it kinda prefetch 2?)
> >
> > Can someone please tell me if Is this a bug or am I doing something
> > completely wrong? I'm using the latest Java Broker & client (from trunk)
> > with DerbyMessageStore for my tests.
> >
> > Also, can someone please tell me what'd be the best way to turn off
> > prefetching?
> >
> > Thanks a lot,
> > Praveen
> >
> >
> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz  wrote:
> >
> >> Hi Praveen,
> >>
> >> Have you set the capacity / prefetch for the receivers to one message?
> >> I believe the capacity defines how many messages can be "buffered" by
> >> the client API in background while you are still processing the first
> >> message. That may cause that both your clients receive 5 messages,
> >> even when the processing in the first client takes a longer time.
> >>
> >> Regards
> >> Jakub
> >>
> >> On Wed, Oct 26, 2011 at 03:02, Praveen M 
> wrote:
> >> > Hi,
> >> >
> >> > I ran the following test
> >> >
> >> > 1) I created 1 Queue
> >> > 2) Registered 2 consumers to the queue
> >> > 3) Enqueued 10 messages to the Queue. [ The first enqueued message is
> >> long
> >> > running. I simulated such that the first message on consumption takes
> >> about
> >> > 50 seconds to be processed]
> >> > 4) Once the enqueue is committed, the 2 consumers each pick a message.
> >> > 5) The 1st consumer that got the long running message works on it for
> a
> >> long
> >> > time while the second consumer that got the second message keeps
> >> processing
> >> > and going to the next message, but  only goes as far until it
> processes 5
> >> of
> >> > the 10 messages enqueued. Then the 2nd consumer gives up processing.
> >> > 6) When the 1st consumer with the  long running message completes, it
> >> then
> >> > ends up processing the remaining messages and my test completes.
> >> >
> >> > So it seems like the two consumers were trying to take a fair share of
> >> > messages that they were processing immaterial of the time it takes to
> >> > process individual messages. Enqueued message = 10, Consumer 1 share
> of 5
> >> > messages were processed by it, and Consumer 2's share of 5 messages
> were
> >> 

RE: Downloading the Built version of qpidc-0.12

2011-10-27 Thread Steve Huston
As far as I know, you need to build it yourself.

If you want a build that's supported, you could contact Red Hat to talk
about MRG.

-Steve

> -Original Message-
> From: Daniel Mounessa [mailto:dmoune...@tradeware.com]
> Sent: Wednesday, October 26, 2011 4:55 PM
> To: users@qpid.apache.org
> Subject: Downloading the Built version of qpidc-0.12
> 
> Is there a built version for Linux that I could download and use or do I
> need to download the source code and rebuild the qpidd?
> 
> 
> 
> Thanks for your help.





Re: 1 Queue with 2 Consumers - turn off pre-fetching?

2011-10-27 Thread Robbie Gemmell
From the below, would I be right in thinking you were using receive()
calls with an AutoAck session? If so then you would see the behaviour
you observed, as the message gets acked just before receive() returns,
which makes the broker send the next one to the client. That shouldn't
happen if you were using asynchronous onMessage delivery (since the
ack gets sent when the onMessage() handler returns), or if you
used a ClientAck or Transacted session in which you only acknowledged
the message / committed the session after the processing is complete.
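
To make that concrete, here is a minimal sketch of the transacted-listener
pattern (URL and queue name are placeholders): the session is only committed
once the work is done, so the broker should not treat the message as finished
mid-processing.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.qpid.client.AMQConnectionFactory;

public class TransactedListenerSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and queue name.
        ConnectionFactory factory = new AMQConnectionFactory(
                "amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'");
        Connection connection = factory.createConnection();
        final Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        Queue queue = session.createQueue("workQueue");
        MessageConsumer consumer = session.createConsumer(queue);

        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                try {
                    Thread.sleep(5000);   // simulated processing delay
                    session.commit();     // only now is the message acknowledged
                } catch (Exception e) {
                    try { session.rollback(); } catch (Exception ignored) { }
                }
            }
        });
        connection.start();

        Thread.sleep(60000);  // keep the sketch alive long enough to consume
        connection.close();
    }
}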

I must admit to having never used the client with prefetch set to 0,
which should in theory give you what you are looking for even with
AutoAck but based on your comments appears not to have. I will try and
take a look into that at the weekend to see if there are any obvious
issues we can JIRA for fixing.

Robbie

On 26 October 2011 23:48, Praveen M  wrote:
> Hi Jakub,
>
> Thanks for your reply. Yes I did find the prefetch model and reran my test
> and now ran into another issue.
>
> I set the prefetch to 1 and ran the same test described in my earlier mail.
>
> In this case the behavior I see is,
> The 1st consumer gets the 1st message and works on it for a while, the 2nd
> consumer consumes 8 messages and then does nothing (even though there was 1
> more unconsumed message). When the first consumer completed its long running
> message it got around and consumed the remaining 1 message. However, I was
> expecting the 2nd consumer to dequeue all 9 messages (the number of remaining
> messages) while the 1st consumer was busy working on the long message.
>
> Then, I thought, perhaps the prefetch count meant that, when a consumer is
> working on a message, another message in the queue is prefetched to the
> consumer from the persistent store as my prefetch count is 1. That could
> explain why I saw the behavior above.
>
> What I wanted to achieve was to actually turn off any kind of prefetching
> (yeah, I'm ok with taking the throughput hit).
>
> So I re-ran my test now with prefetch = 0, and saw a really weird result.
>
> With prefetch 0, the 1st consumer gets the 1st message and works on it for a
> while, during which the 2nd consumer consumes 7 messages (why 7?) and then does
> nothing (even though there were 2 more unconsumed messages). When the 1st
> consumer completed processing its message it got to consume the remaining
> two messages too. (Did it kinda prefetch 2?)
>
> Can someone please tell me if this is a bug or am I doing something
> completely wrong? I'm using the latest Java Broker & client (from trunk)
> with DerbyMessageStore for my tests.
>
> Also, can someone please tell me what'd be the best way to turn off
> prefetching?
>
> Thanks a lot,
> Praveen
>
>
> On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz  wrote:
>
>> Hi Praveen,
>>
>> Have you set the capacity / prefetch for the receivers to one message?
>> I believe the capacity defines how many messages can be "buffered" by
>> the client API in the background while you are still processing the first
>> message. That may cause both your clients to receive 5 messages,
>> even when the processing in the first client takes a longer time.
>>
>> Regards
>> Jakub
>>
>> On Wed, Oct 26, 2011 at 03:02, Praveen M  wrote:
>> > Hi,
>> >
>> > I ran the following test
>> >
>> > 1) I created 1 Queue
>> > 2) Registered 2 consumers to the queue
>> > 3) Enqueued 10 messages to the Queue. [ The first enqueued message is
>> long
>> > running. I simulated such that the first message on consumption takes
>> about
>> > 50 seconds to be processed]
>> > 4) Once the enqueue is committed, the 2 consumers each pick a message.
>> > 5) The 1st consumer that got the long running message works on it for a
>> long
>> > time while the second consumer that got the second message keeps
>> processing
>> > and going to the next message, but  only goes as far until it processes 5
>> of
>> > the 10 messages enqueued. Then the 2nd consumer gives up processing.
>> > 6) When the 1st consumer with the  long running message completes, it
>> then
>> > ends up processing the remaining messages and my test completes.
>> >
>> > So it seems like the two consumers were trying to take a fair share of
>> > messages that they were processing immaterial of the time it takes to
>> > process individual messages. Enqueued message = 10, Consumer 1 share of 5
>> > messages were processed by it, and Consumer 2's share of 5 messages were
>> > processed by it.
>> >
>> >
>> > This is kinda against the behavior that I'd like to see. The desired
>> > behavior in my case is that of each consumer keeps going on if it's done
>> and
>> > has other messages to process.
>> >
>> > In the above test, I'd expect as consumer 1 is working on the long
>> message,
>> > the second consumer should work its way through all the remaining
>> messages.
>> >
>> > Is there some config that I'm missing that could cause this effect?? Any
>> > advice on tackling this will be great.
>> >
>> > Also

Re: 1 Queue with 2 Consumers - message delivery order?

2011-10-27 Thread Robbie Gemmell
As Jakub mentioned, this '5 messages each' behaviour is the result of
prefetch (and your consumers being present before you started
publishing). Because the consumers had space in their prefetch buffer,
the messages were basically round-robin delivered to the clients as
they were enqueued.

The delivery mechanism is quite complicated as there are 3 ways
messages get dispatched to the client, depending on the state of each
client when the message is enqueued and the prefetch configuration.
The queue makes a basic attempt immediately to round-robin deliver
incoming messages to the consumers if there is more than 1 present on
a queue. If the message can't be delivered to a consumer immediately
upon enqueue then an asynchronous delivery process is started to
attempt message delivery, which again makes a basic attempt to
round-robin the messages on the queue to the available consumers. A
third mechanism kicks in when a client newly connects or goes from not
having room in its prefetch buffer to having room, in which case a
different asynchronous delivery mechanism attempts to deliver messages
to that specific consumer. Different queues operate entirely
independently in delivery terms, other than use of a shared thread
pool for the asynchronous delivery processing.

Robbie

On 26 October 2011 02:02, Praveen M  wrote:
> Hi,
>
> I ran the following test
>
> 1) I created 1 Queue
> 2) Registered 2 consumers to the queue
> 3) Enqueued 10 messages to the Queue. [The first enqueued message is long
> running. I simulated it such that the first message on consumption takes about
> 50 seconds to be processed]
> 4) Once the enqueue is committed, the 2 consumers each pick a message.
> 5) The 1st consumer that got the long running message works on it for a long
> time while the second consumer that got the second message keeps processing
> and going to the next message, but only goes as far as processing 5 of
> the 10 messages enqueued. Then the 2nd consumer gives up processing.
> 6) When the 1st consumer with the  long running message completes, it then
> ends up processing the remaining messages and my test completes.
>
> So it seems like the two consumers were trying to take a fair share of
> messages that they were processing, irrespective of the time it takes to
> process individual messages. Enqueued messages = 10; Consumer 1's share of 5
> messages were processed by it, and Consumer 2's share of 5 messages were
> processed by it.
>
>
> This is kinda against the behavior that I'd like to see. The desired
> behavior in my case is that each consumer keeps going if it's done and
> has other messages to process.
>
> In the above test, I'd expect that as consumer 1 is working on the long message,
> the second consumer would work its way through all the remaining messages.
>
> Is there some config that I'm missing that could cause this effect?? Any
> advice on tackling this will be great.
>
> Also, Can someone please explain in what order are messages delivered to the
> consumers in the following cases?
>
> Case 1)
>  There is a single Queue with more than 1 message in it and multiple
> consumers registered to it.
>
> Case 2)
> There are multiple queues each with more than 1 message in it, and has
> multiple consumers registered to it.
>
>
>
> Thank you,
> --
> -Praveen
>




Re: Qpid Java Broker performance lower than expected

2011-10-27 Thread Robbie Gemmell
The reason I said that was that it takes a *lot* longer to run
some(/all) of the system tests when using the DerbyStore, but doing
some very noddy tests today with a single consumer and producer showed
there wasn't any great difference between them. Both were noticeably
slower than historically, so it is something we will be looking into
improving. One particular system test I previously noticed a large
difference in uses multiple consumers and producers though, and so
that could actually be where the difference lies because the BDB store
is implemented somewhat differently to the Derby one and so possibly
has an artificial edge in that regard.

Robbie

On 26 October 2011 03:14, Danushka Menikkumbura
 wrote:
> Hi Robbie,
>
> I did not notice that the BDB store was faster than the Derby store when I
> checked some time back.
>
> Thanks,
> Danushka
>
> On Wed, Oct 26, 2011 at 5:07 AM, Robbie Gemmell 
> wrote:
>
>> Hi Vinay,
>>
>> I haven't done any performance benchmarking of the Derby store to know
>> what a representative number would actually be, but I will try to take
>> a look at some point. I haven't actually used QpidBench, so can I ask
>> if there were any specific command(s) you ran so I can try the same
>> scenarios?
>>
>> We haven't paid much attention to performance of the Java broker for a
>> while, unfortunately, because we have been working on various other
>> issues such as getting memory usage under control and sorting out
>> correctness issues etc since adding a newer protocol version and doing
>> some significant refactorings and reimplementations, but as we reach
>> the light at the end of the tunnel on those it is something which
>> should move further up the priority list.
>>
>> It is worth noting that there is also a BDB persistent store for the
>> Java broker that you might want to look at, as I would expect it to be
>> faster. It has recently been moved into the main repo, but is still an
>> optional module which you need to explicitly ask for to be built
>> (because BDB itself uses the Sleepycat Licence, which invokes
>> restrictions upon distribution that mean it is not Apache Licence
>> compatible). You can build the store module and include it (but not
>> BDB itself) in the broker binary release bundle by using the following
>> build command:
>>
>> ant build release-bin -Dmodules.opt=bdbstore -Ddownload-bdb=true
>>
>> You will find that downloads the bdb je jar into
>> qpid/java/lib/bdbstore, and then creates a broker binary release in
>> qpid/java/broker/release which includes the additional store module.
>> You can make the BDB je jar available to the broker by creating a
>> lib\opt subdir and copying the je jar into it, where it will get
>> picked up automatically assuming you are using Java 6+. You can then
>> use org.apache.qpid.server.store.berkeleydb.BDBMessageStore as the
>> store class config instead of the other stores.
>>
>> Robbie
>>
>> On 24 October 2011 16:25, vipun  wrote:
>> > Hi,
>> >  I'm collecting performance figures for QPID Java based broker. The
>> results
>> > which I got after running the QpidBench program are a little lower than
>> > expected. My machine which is a quad core, 8GB RAM with Windows 7 gives a
>> > message throughput of around 400 messages when both producer and consumer
>> > client instances are active.
>> >
>> > Qpid Java broker is configured to run over Derby and messaging is in
>> > persistent mode. I was expecting somewhere around 1000 at least, going by
>> the
>> > following blog which does comparisons between different messaging
>> providers.
>> >
>> > http://bhavin.directi.com/rabbitmq-vs-apache-activemq-vs-apache-qpid/
>> >
>> > Do you think, the figures from my tests are correct, or what are the
>> > expected performance results, or are there any tweaks which need to be
>> done
>> > for performance gains. I am running out of trunk.
>> >
>> > Thanks & Regards
>> > Vinay
>> >
>> > --
>> > View this message in context:
>> http://apache-qpid-users.2158936.n2.nabble.com/Qpid-Java-Broker-performance-lower-than-expected-tp6925405p6925405.html
>> > Sent from the Apache Qpid users mailing list archive at Nabble.com.
>> >
>> > -
>> > Apache Qpid - AMQP Messaging Implementation
>> > Project:      http://qpid.apache.org
>> > Use/Interact: mailto:users-subscr...@qpid.apache.org
>> >
>> >
>>
>> -
>> Apache Qpid - AMQP Messaging Implementation
>> Project:      http://qpid.apache.org
>> Use/Interact: mailto:users-subscr...@qpid.apache.org
>>
>>
>




Downloading the Built version of qpidc-0.12

2011-10-27 Thread Daniel Mounessa
Is there a built version for Linux that I could download and use, or do I
need to download the source code and build qpidd myself?

 

Thanks for your help.



compiling with mingw on windows AI_ADDRCONFIG was not defined

2011-10-27 Thread joseluis
I've downloaded qpid from the git repository.

Last commit...
QPID-3504: ensure the glue for the optional bdbstore feature is part of the
broker binary package
676b55c23fafb4763a5f89586fd6a357d8783b85


I'm compiling qpid with MinGW, gcc version 4.4.

I get the error:

AI_ADDRCONFIG was not declared


This is a mingw error reported on

http://sourceforge.net/tracker/index.php?func=detail&aid=3156970&group_id=101989&atid=630607


To compile, I've replaced the following line in SocketAddress.cpp (windows directory):

#define _WIN32_WINNT 0x501

with...


#define _WIN32_WINNT 0x600
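/* Assumption: 0x600 targets the Vista-era Winsock headers; if AI_ADDRCONFIG is
   still missing after that, defining it as 0 below just makes the flag a no-op. */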
#ifndef AI_ADDRCONFIG
#define AI_ADDRCONFIG 0
#endif



BTW, with MinGW gcc 4.4 it's also necessary to remove -Werror.




kind regards


--
View this message in context: 
http://apache-qpid-users.2158936.n2.nabble.com/compiling-with-mingw-on-windows-AI-ADDRCONFIG-was-not-defined-tp6935826p6935826.html
Sent from the Apache Qpid users mailing list archive at Nabble.com.
