> [...] multiple connectors with their autolinks.
> * JMS overhead and serialization/de-serialization might also be a
> bottleneck.
>
> Regards,
> Adel
>
> > From: robbie.gemm...@gmail.com
> > Date: Thu, 4 Aug 2016 10:58:13 +0100
> > Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
On Tue, 2016-08-02 at 14:44 -0400, Ted Ross wrote:
>
> On 08/02/2016 02:10 PM, Adel Boutros wrote:
[snip]
> >
> > What you both explained to me about the single connection is indeed
> > a plausible candidate, because in the "broker only" tests the
> > throughput of a single connection is around [...]
> > [...] in the test. I deactivated the logging and, with a dispatcher only, I am
> > at around 47 000 msg/s with asynchronous sending.
> >
> >> From: adelbout...@live.com
> >> To: users@qpid.apache.org
> >> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >> Date: Wed, 3 Aug 2016 18:39:23 +0200
> >>
> >> And how do you measure your throughput?
> >>
And how do you measure your throughput?

From: adelbout...@live.com
To: users@qpid.apache.org
Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
Date: Wed, 3 Aug 2016 18:38:12 +0200

Hello Ulf,

I am sending messages with a byte array of 100 bytes.
Adel
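For context, a sender-side throughput figure like the ones quoted in this thread can be obtained simply by counting sends over elapsed wall-clock time. The sketch below is illustrative only (the thread does not show the actual harness), and the send call is simulated so the example is self-contained:

```java
// Illustrative sender-side throughput measurement (not the actual
// benchmark harness from this thread). The send call is simulated so
// the example runs without a broker.
public class ThroughputMeter {

    // messages per second, given a message count and elapsed nanoseconds
    static double msgPerSec(long count, long elapsedNanos) {
        return count / (elapsedNanos / 1e9);
    }

    public static void main(String[] args) {
        final int messages = 100_000;
        long start = System.nanoTime();
        for (int i = 0; i < messages; i++) {
            // in a real test this would be: messageProducer.send(message);
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("%d msgs in %.3f s -> %.0f msg/s%n",
                messages, elapsed / 1e9, msgPerSec(messages, elapsed));
    }
}
```

Note that for asynchronous senders the elapsed time should only be taken after all outstanding sends have been settled, otherwise the rate is overstated.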
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
> Java Broker 6.0.0
> To: users@qpid.apache.org
> From: l...@redhat.com
> Date: Wed, 3 Aug 2016 16:23:06 +0200
>
> Hi,
>
> Excuse me if this was already mentioned somewhere, but what [...]

[...] we are using synchronous sending. In the
future, we will also benchmark with full SSL/SASL to see what impact it has on
the performance.
Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
To: users@qpid.apache.org
From: g...@redhat.com
Date: Tue, 2 Aug 2016 20:41:54 +0100

[...] producers, 0 consumers, 3 connectors --> 7700 msg/s.
Adel
> From: adelbout...@live.com
> To: users@qpid.apache.org
> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
> Java Broker 6.0.0
> Date: Tue, 2 Aug 2016 22:21:54 +0200
>
> Sorry for the typo. I [...]

[...] benchmark with full SSL/SASL to see what impact it has on
the performance.
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
> Java Broker 6.0.0
> To: users@qpid.apache.org
> From: g...@redhat.com
> Date: Tue, 2 Aug 2016 20:41:54 +0100
>
On 02/08/16 20:25, Adel Boutros wrote:
> How about the tests we did with consumers/producers connected directly to the
> dispatcher without any broker, where we had 16 000 msg/s with 4 producers. Is it
> also a very low value given that there is no persistence or storing here? It
> was also synchronous sending.
If you're benchmarking throughput, you really want to avoid synchronous
sending. I think 16K msg/s synchronous with four senders sounds about
right.
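The 16K figure above is consistent with a simple latency-bound model (my own illustrative sketch, not from the thread): a synchronous sender completes one full round trip per message, so the aggregate rate is roughly the number of producers divided by the per-message round-trip time, regardless of available bandwidth.

```java
// Illustrative model of why synchronous sending caps throughput:
// each producer waits one full round trip (RTT) per message, so the
// aggregate rate is about producers / RTT.
public class SyncSendModel {

    // maximum aggregate msg/s for synchronous senders
    static double maxRate(int producers, double rttSeconds) {
        return producers / rttSeconds;
    }

    public static void main(String[] args) {
        // Assuming a hypothetical 0.25 ms round trip per send,
        // 4 synchronous producers top out around 16 000 msg/s.
        System.out.printf("%.0f msg/s%n", maxRate(4, 0.00025));
    }
}
```

The 0.25 ms round trip is an assumed value chosen to match the observed rate; the point is that adding producers, or switching to asynchronous (pipelined) sending, is what raises the ceiling, not faster links.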
Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
To: users@qpid.apache.org
On 2 August 2016 at 21:21, Gordon Sim wrote:
> On 02/08/16 20:18, Ted Ross wrote:
>
>> Since this is synchronous and durable, I would expect the store to be
>> the bottleneck in these cases and that for rates of ~7.5K, the router
>> shouldn't be a factor.
>>
>
> I don't know anything about the java broker internals, but when going
> through a router the [...]
On 02/08/16 20:18, Ted Ross wrote:
Since this is synchronous and durable, I would expect the store to be
the bottleneck in these cases and that for rates of ~7.5K, the router
shouldn't be a factor.
I don't know anything about the java broker internals, but when going
through a router the [...]

Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Topic topic = session.createTopic("perf.topic");
messageProducer = session.createProducer(topic);
messageProducer.send(message);
Regards,
Adel
Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
I forgot to add we use durable queues and the persistence is set to DEFAULT.
> From: adelbout...@live.com
> To: users@qpid.apache.org
> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
> Java Broker 6.0.0
> Date: Tue, 2 Aug 2016 21:10:35 +0200
>
> > Connection connection = connectionFactory.createConnection();
> > Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
> > Topic topic = session.createTopic("perf.topic");
> > messageProducer = session.createProducer(topic);
On 02/08/16 19:44, Ted Ross wrote:
5.1K messages per second on a connection seems like a really low limit
to me. As I recall, we were able to get closer to 80K to 100K per
connection on qpidd.
If these are persistent messages (which I think is the default for JMS)
and the queue to which they [...]

messageProducer = session.createProducer(topic);
messageProducer.send(message);
Regards,
Adel
Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
To: users@qpid.apache.org
From: tr...@redhat.com
Date: Tue, 2 Aug 2016 13:42:24 -0400
On 07/29/2016 08:40 AM, Adel Boutros wrote:
On 02/08/16 18:29, Adel Boutros wrote:
> Were you able to check the below? Could it be that some other resource is
> congested in the code, such as the mutex mechanism or the I/O?

When going through the router, all the messages will be transferred to
the broker over a single connection. Are the [...]
[...] enabled up
endpoint out 47 95 local temp.2u+DSi+26jT3hvZ 250 0 0 0 enabled up
Regards,
Adel
Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
Java Broker 6.0.0
To: users@qpid.apache.org
From: tr...@redhat.com
Date: Tue [...]
> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
> Java Broker 6.0.0
> Date: Fri, 29 Jul 2016 14:45:48 +0200
>
> Here is an image representation of the badly formatted table:
> http://imgur.com/a/EuWch
> > From: adelbout...@live.com
> > To: users@qpid.apache.org
> > Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
> > Java Broker 6.0.0
> > Date: Fri, 29 Jul 2016 14:40:10 +0200
[...] enabled up
Regards,
Adel
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
> Java Broker 6.0.0
> To: users@qpid.apache.org
> From: tr...@redhat.com
> Date: Tue, 26 Jul 2016 10:32:29 -0400
>
> Adel,
>
> That's a good [...]
Thanks Ted,
I will try to change linkCapacity. However, I was wondering if there is a way
to "calculate an optimal value for linkCapacity". What factors can impact this
field?
Regards,
Adel
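For reference, linkCapacity is set per listener (or connector) in qdrouterd.conf. The fragment below is only a sketch with illustrative values, not a recommendation; one common starting point for sizing it is the bandwidth-delay product, i.e. the target message rate multiplied by the per-message round-trip time:

```conf
# Illustrative qdrouterd.conf fragment (values are examples, not tuning advice).
listener {
    host: 0.0.0.0
    port: amqp
    role: normal
    linkCapacity: 250
}
```

Larger values allow more unsettled deliveries in flight per link at the cost of router memory; values that are too small starve fast producers and consumers of credit, which caps throughput.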
> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid
> Java Broker 6.0.0
Adel,
The number of workers should be related to the number of available
processor cores, not the volume of work or number of connections. 4 is
probably a good number for testing.
I'm not sure what the default link credit is for the Java broker (it's
500 for the c++ broker) or the clients [...]
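In qdrouterd.conf terms, the worker-thread sizing above maps to the router section's workerThreads attribute; a hypothetical fragment for a 4-core test host (the mode and id values are placeholders):

```conf
# Illustrative qdrouterd.conf fragment; mode and id are placeholders.
router {
    mode: standalone
    id: Router.A
    workerThreads: 4
}
```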