Hi Ramayan,
On 4 January 2017 at 23:54, Ramayan Tiwari wrote:
> Hi Lorenz,
>
> Happy new year to everyone, hope you guys had fun!
>
> I am doing performance test runs to figure out a reasonable threshold for
> direct memory, considering our use case of small message
Hi Lorenz,
Happy new year to everyone, hope you guys had fun!
I am doing performance test runs to figure out a reasonable threshold for
direct memory, considering our use case of small message payloads. I have a
few more questions:
1. I assume there is no way to disable direct memory (so that
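For context, a minimal stand-alone probe (plain Java, not Qpid code) showing that at the JVM level direct memory can only be capped via -XX:MaxDirectMemorySize, not switched off; allocation simply fails once the cap is reached:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    // Run with e.g. -XX:MaxDirectMemorySize=256m to see the cap take effect.
    public class DirectMemoryProbe {
        public static void main(String[] args) {
            List<ByteBuffer> held = new ArrayList<>();
            long mib = 0;
            try {
                while (true) {
                    held.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MiB each
                    mib++;
                }
            } catch (OutOfMemoryError e) {
                System.out.println("Direct memory exhausted after ~" + mib + " MiB");
            }
        }
    }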
Hi,
Regarding the 0.32 behaviour: it checked whether to flow a message
to disk when putting the message on the queue, the same way Qpid 6 does.
In that sense, 6 is neither more nor less aggressive. However, the
algorithm behind the decision whether or not to flow to disk has
changed. This change was
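As a rough sketch of the shape of such an enqueue-time check (all names here are hypothetical, not the broker's actual classes; the real Qpid 6 algorithm differs in how it derives and applies the limit):

    // Sketch only: a flow-to-disk decision made at enqueue time.
    final class FlowToDiskPolicy {
        private final long thresholdBytes;       // stands in for the configured limit
        private long estimatedDirectMemoryUse;   // updated as messages arrive

        FlowToDiskPolicy(long thresholdBytes) {
            this.thresholdBytes = thresholdBytes;
        }

        // Called when a message is put on a queue, as in both 0.32 and 6.
        boolean shouldFlowToDisk(long incomingMessageSize) {
            estimatedDirectMemoryUse += incomingMessageSize;
            return estimatedDirectMemoryUse > thresholdBytes;
        }
    }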
Hi Lorenz,
Thanks a lot for your response and for explaining the flow-to-disk algorithm
in detail. I described the test setup in detail in the first email of
this thread; to summarize the points again:
a) There is only one virtual host.
b) There are 6000 queues in this virtual host, but messages
Hello Ramayan,
glad to hear that the patch is (mostly) working for you.
To address your points:
1. If indeed in one case flow to disk is kicking in while in
the other one it is not, then I am not surprised that
there is a 5% difference. The question is whether the
flow
Hi Rob,
I did another exhaustive performance test using the MultiQueueConsumer
feature with 6.0.5 (and the patch). The broker CPU issues have been resolved,
and we no longer have the message prefetch problem (caused by long-running
messages).
Fairness among queues is also great (not as perfect as
Hi Ramayan
QPID-7462 is a new (experimental) feature, so we don't consider this
appropriate for inclusion in the 6.0.5 defect release. We follow a
Semantic Versioning [1] strategy.
The underlying issue your testing has uncovered is poor performance
with large numbers of consumers. QPID-7462
Hi Rob,
I have the trunk code which I am testing with; I haven't finished the test
runs yet. I was hoping that once I validate the change, I could simply move
to the 6.0.5 release.
Thanks
Ramayan
On Thu, Oct 27, 2016 at 12:41 PM, Rob Godfrey
wrote:
> Hi Ramayan,
>
> did you verify
Hi Ramayan,
did you verify that the change works for you? You said you were going to
test with the trunk code...
I'll discuss with the other developers tomorrow whether we can put
this change into 6.0.5.
Cheers,
Rob
On 27 October 2016 at 20:30, Ramayan Tiwari
Hi Rob,
I looked at the release notes for 6.0.5 and they don't include the fix for
the large-number-of-consumers issue [1]. The fix is marked for 6.1, which will
not have JMX, and using that version would require major changes in our
monitoring framework. Could you please include the fix in the 6.0.5 release?
Hi Rob,
Again, thank you so much for answering our questions and providing a patch
so quickly :) One more question I have: would it be possible to include
test cases involving many queues and listeners (on the order of thousands
of queues) for future Qpid releases, as part of standard perf
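To make the ask concrete, a hypothetical harness of the shape we mean (the URL, queue names, and counts are made up; API as in the legacy 0-x client):

    import javax.jms.*;
    import org.apache.qpid.client.AMQConnectionFactory;

    // Hypothetical perf-test shape: thousands of queues, one listener each.
    public class ManyListenersHarness {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new AMQConnectionFactory(
                    "amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'");
            Connection connection = factory.createConnection();
            for (int i = 0; i < 6000; i++) {
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                MessageConsumer consumer =
                        session.createConsumer(session.createQueue("perf-queue-" + i));
                consumer.setMessageListener(message -> {
                    // record receive timestamp / latency here
                });
            }
            connection.start();
        }
    }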
Thanks so much Rob, I will test the patch against trunk and will update you
with the outcome.
- Ramayan
On Tue, Oct 18, 2016 at 2:37 AM, Rob Godfrey
wrote:
> On 17 October 2016 at 21:50, Rob Godfrey wrote:
>
> >
> >
> > On 17 October 2016 at
On 17 October 2016 at 21:50, Rob Godfrey wrote:
>
>
> On 17 October 2016 at 21:24, Ramayan Tiwari
> wrote:
>
>> Hi Rob,
>>
>> We are certainly interested in testing the "multi queue consumers"
>> behavior
>> with your patch in the new broker.
Hi Rob,
We are certainly interested in testing the "multi queue consumers" behavior
with your patch in the new broker. We would like to know:
1. What will the scope of changes be: client, broker, or both? We are
currently running the 0.16 client, so we would like to make sure that we will
be able to use
Thanks Rob. Apologies for sending this over the weekend :(
Are there any docs on the new threading model? I found this on Confluence:
https://cwiki.apache.org/confluence/display/qpid/IO+Transport+Refactoring
We are also interested in understanding the threading model a little better
to help us
So I *think* this is an issue because of the extremely large number of
consumers. The threading model in v6 means that whenever a network read
occurs for a connection, it iterates over the consumers on that connection
- obviously where there are a large number of consumers this is
burdensome. I
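In code terms, the pattern is roughly the following (hypothetical types, not the actual broker classes):

    // Not broker code: just the O(consumers)-per-read pattern described above.
    interface QpidConsumer {
        boolean hasWork();
        void processPending();
    }

    final class ConnectionReadLoop {
        private final java.util.List<QpidConsumer> consumers =
                new java.util.ArrayList<>();

        void onNetworkRead(byte[] frame) {
            decode(frame);
            // Every network read scans all consumers on the connection;
            // with thousands of mostly idle consumers this scan dominates CPU.
            for (QpidConsumer c : consumers) {
                if (c.hasWork()) {
                    c.processPending();
                }
            }
        }

        private void decode(byte[] frame) {
            // frame decoding elided
        }
    }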
Hi Rob,
Thanks so much for your response. We use transacted sessions with
non-persistent delivery. Prefetch size is 1 and every message is the same size
(200 bytes).
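Concretely, the client-side setup looks roughly like this (a sketch using the legacy 0-x client; queue name and credentials are placeholders, and the maxprefetch URL option is how prefetch is set to 1 there):

    import javax.jms.*;
    import org.apache.qpid.client.AMQConnectionFactory;

    // Sketch of the settings above: transacted session, non-persistent
    // delivery, prefetch 1, ~200-byte payloads.
    public class TestSetup {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new AMQConnectionFactory(
                    "amqp://guest:guest@clientid/test"
                    + "?brokerlist='tcp://localhost:5672'&maxprefetch='1'");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer =
                    session.createProducer(session.createQueue("test-queue"));
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            producer.send(session.createTextMessage(new String(new char[200])));
            session.commit();
            connection.close();
        }
    }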
Thanks
Ramayan
On Sat, Oct 15, 2016 at 2:59 AM, Rob Godfrey
wrote:
> Hi Ramyan,
>
> this is interesting...
Hi Ramyan,
this is interesting... in our testing (which admittedly didn't cover the
case of this many queues / listeners) we saw the 6.0.x broker using less
CPU on average than the 0.32 broker. I'll have a look this weekend as to
why creating the listeners is slower. On the dequeuing, can you