If concurrent access can be reproduced in a dev environment, it is possible
- with temporary code changes - to determine which two threads were
accessing the message at the same time.
An effective means to track this down is to record Exception objects (and
hence stack traces) at each point where the message is accessed.
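A minimal sketch of that debugging aid (hypothetical helper, not from ActiveMQ): instrument each access point with enter()/exit(), record an Exception on entry, and if a second thread arrives while the first is still inside, dump both stack traces so the two call sites can be identified.

```java
// Hypothetical debugging aid, the kind of temporary code change described
// above. Wrap each message access in enter()/exit(); overlapping access
// prints both recorded stack traces.
public class AccessTracker {
    private Exception inUseBy; // stack trace of the thread currently accessing

    public synchronized void enter() {
        if (inUseBy != null) {
            // A second thread arrived before the first called exit().
            System.out.println("concurrent access detected");
            inUseBy.printStackTrace();
            new Exception("second accessor: "
                    + Thread.currentThread().getName()).printStackTrace();
        }
        inUseBy = new Exception("first accessor: "
                + Thread.currentThread().getName());
    }

    public synchronized void exit() {
        inUseBy = null;
    }

    public static void main(String[] args) {
        AccessTracker tracker = new AccessTracker();
        tracker.enter();  // first access point, no exit() yet...
        tracker.enter();  // ...so this overlapping access is reported
    }
}
```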
Art,
Concurrent store and dispatch should be off by default for topics (which I
believe the test case is using). Also, it wouldn't hit the store since
it's a topic and not a durable subscription, so it should just be in memory.
That's why this issue is weird, as the messages shouldn't be concurrently
accessed.
Ahh, I bet that is a result of using the VM transport. The broker is
likely doing the "beforeMarshal()" call that clears the text just before
the client is calling "getText()" on the same message - or just before
"copy()" is called to make a copy of the message.
Yeah, looking at the code
Yeah, TCP should be fine. Even if this gets fixed it will take time before
a new release is done anyways so you'll need to use the TCP or NIO for a
bit.
On Thu, May 31, 2018 at 3:55 PM, Christopher Shannon <
christopher.l.shan...@gmail.com> wrote:
Found the culprit: Seems to be related to
https://issues.apache.org/jira/browse/AMQ-5857
Specifically, this commit:
Ok, thank you...my thought also was that the performance impact of using TCP
wouldn't be enough to hurt us in any substantial way, even though it *feels*
really wasteful to be marshaling the message and using TCP when we're in the
same JVM.
In response to your first message, I changed the transport
I did some quick testing and it looks like the first version it breaks is
5.12.0. It seemed to work ok in 5.11.3.
5.12.0 was a big release though, with over 200 issues, so we'll still need to
narrow it down more.
In the meantime I would say if the TCP transport works then go with that; I
doubt you'll notice much of a performance difference.
The current workaround we've been testing is to change to the TCP transport
from the VM transport on the code that runs inside the broker process (where
we want to guarantee that some queues or topics are always being dealt
with as long as ActiveMQ is running). That seems like it would be a
A lot of changes happened between 5.10 and 5.15. Knowing the first version
that it broke would be helpful to narrow down the change that broke it.
Even better would be to use 'git bisect' and find the exact commit that
introduced the issue.
On Thu, May 31, 2018 at 3:14 PM, codingismy11to7
I put some text in the (very long) README in the reproduction project that
kind of addresses this, but I didn't really specify that yes - I did throw
in a debug library of the wrapper that printed out when the getText() call
returned null at the call site, and it absolutely is returning null.
Another thing I didn't dig into earlier - I generally avoid the VM
transport. At some point in the past, I ran into a deadlock with the VM
transport that doesn't exist with the other transports, because the VM
transport performs an operation synchronously that the interface explicitly
defines as asynchronous.
Steven,
I think you might need to do some more debugging to try and pinpoint the
exact point where the body is null unless others have more time to look at
it. As Art said, check different points where the body could be null.
I originally thought I had found the issue when I tested
I forgot that when using the VM transport the message is supposed to be
copied on dispatch inside ActiveMQConnection so this may not be the exact
issue, I need to look at it a bit more.
On Thu, May 31, 2018 at 9:41 AM, Christopher Shannon <
christopher.l.shan...@gmail.com> wrote:
The issue is when using the VM transport the getText() method has to
unmarshal data back into text from the byte sequence. This happens
because you go from the NIO to the VM transport.
The main problem is you have multiple threads (3 consumers) calling
getText() on the text message at the same time.
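The failure mode being described can be sketched generically (an illustration of the pattern, not ActiveMQ's actual source): a message that lazily decodes its body and clears the raw bytes is only safe if that decode is serialized.

```java
// Illustrative sketch, not ActiveMQ code: a message whose body is decoded
// lazily from raw bytes, with the bytes cleared afterwards. Without the
// 'synchronized' below, one thread can null out 'raw' while another is
// between the two checks, and that second thread returns null.
public class LazyDecodeRace {
    private byte[] raw = "hello".getBytes();
    private String text;

    synchronized String getText() {
        if (text == null && raw != null) {
            text = new String(raw);
            raw = null; // body now lives only in 'text'
        }
        return text;
    }

    public static void main(String[] args) throws InterruptedException {
        LazyDecodeRace msg = new LazyDecodeRace();
        Runnable reader = () -> System.out.println(msg.getText());
        Thread a = new Thread(reader);
        Thread b = new Thread(reader);
        a.start(); b.start();
        a.join(); b.join(); // with synchronization, both threads print "hello"
    }
}
```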
Try turning the broker and client logging up to trace and see what the
logging shows for the message content.
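Turning the broker logging up to trace is typically a log4j change (the exact file depends on your setup; a standalone 5.x broker reads conf/log4j.properties, an embedded broker uses whatever log4j config the host app provides):

```
# log4j 1.x syntax, which ActiveMQ 5.x ships with
log4j.logger.org.apache.activemq=TRACE
```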
Also, try the following:
- Same test with a Java producer (using plain ActiveMQ libs) instead of the
Scala producer
- Same test with a Java consumer (using plain ActiveMQ libs) instead of the
Scala consumer