Could you also provide the steps you followed to reproduce the issue? It's
important not only to have the code and configuration resources but also the
exact procedure you followed, so we are comparing apples to apples, as it were.
An automated process would be ideal here as that drastically speeds up
Hi Justin,
I have a simple project that can be used to reproduce the redistribution
issue at https://github.com/lnthai2002/SimpleArtemisClient.
Hope you have some time to take a look,
Thai
On Wed, Jun 2, 2021 at 1:07 PM Thai Le wrote:
> Hi Justin,
>
> I am still working on the JMS queue for
Justin,
Thanks for your response!
Just a moment ago our infrastructure engineer told us that the cause is likely
in the network between the nodes.
I am hopeful this is true because it is consistent with most, if not all, of
the symptoms.
(so I'll skip creating the reproducer-project for now)
> With
Hi Justin,
I am still working on the JMS queue for now. I set the
connection-ttl-override = 6 explicitly on the server:
...
<connection-ttl-override>6</connection-ttl-override>
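For context, connection-ttl-override is a child of the <core> element in the Artemis broker.xml and is specified in milliseconds, so a value of 6 means 6 ms. A sketch of where it sits (surrounding values illustrative):

```xml
<configuration xmlns="urn:activemq">
  <core xmlns="urn:activemq:core">
    <!-- Overrides the TTL for all connections; value is in milliseconds.
         -1 (the default) disables the override. -->
    <connection-ttl-override>6</connection-ttl-override>
  </core>
</configuration>
```

Note that 6 ms is an extremely aggressive TTL; connections would be considered dead almost immediately if a ping is even slightly delayed.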
With performance testing the devil is in the details as they say. It would
be ideal to have something to run which would reproduce the (relative)
numbers you're seeing. Could you drop a project on GitHub or something to
this end?
At the very least can you elaborate on your test setup? When you
It looks like all messages in your broker expire after at most 1 second, since
the timeStampingBrokerPlugin will set the TTL to 1 second if it is absent
or greater than 1 second.
Your original question says that you're accessing the messages before they
reach their expiration times, which means within 1s of them being
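For reference, the behaviour described above would come from a TimeStampingBrokerPlugin configured roughly like this in the ActiveMQ broker XML (attribute values illustrative, matching the 1-second behaviour described):

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <plugins>
    <!-- zeroExpirationOverride: TTL (ms) applied to messages that arrive
         with no expiration set.
         ttlCeiling: cap (ms) applied to messages whose TTL exceeds it. -->
    <timeStampingBrokerPlugin zeroExpirationOverride="1000"
                              ttlCeiling="1000"/>
  </plugins>
</broker>
```

With both attributes at 1000 ms, every message ends up with an expiration no more than 1 second after the broker timestamps it.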
Hi JB,
The specific cause of the error has been found: it is an inconsistency
between the broker's clock and the client's clock.
----
Original message:
Hello,
We are performing a performance test by just putting a lot of messages through
one queue.
With producer and consumer on one node we get around 2000 msg/sec.
But when producer and consumer are each on a separate cluster node, it drops to
350 msg/sec.
This is strange because each node
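To make numbers like these comparable, it helps to share the measurement harness itself. This is not the actual test from the thread, just a minimal stand-in sketch using an in-memory BlockingQueue in place of the JMS queue (class and method names are hypothetical); the same timing structure would apply with a real producer and consumer:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ThroughputSketch {

    // Push `total` messages through a bounded queue with one producer
    // thread and one consumer (the caller), returning messages/second.
    static double measure(int total) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < total; i++) {
                    queue.put("msg-" + i); // blocks when the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        long start = System.nanoTime();
        producer.start();
        for (int i = 0; i < total; i++) {
            queue.take(); // consume every message
        }
        producer.join();
        double seconds = (System.nanoTime() - start) / 1e9;
        return total / seconds;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.printf("throughput: %.0f msg/sec%n", measure(100_000));
    }
}
```

Running the same harness on one node and then across two cluster nodes (with the queue swapped for a real JMS session) would isolate whether the drop to 350 msg/sec comes from the network hop or from the test itself.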