This sounds like application logic to me. It's the broker's job to try to
prevent re-delivery of the same message, but it's not the broker's job to
identify that two messages have similar content. That's the job of the
consumer of your messages, using whatever heuristics are most appropriate
to your application.
For that kind of reliability, I would look outside of ActiveMQ itself. Use a
database and/or idempotent processing (e.g. for a transfer, including the
original balances before the transfer can make it easy to reject duplicate
transfers).
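The balance-check idea above can be sketched as follows. This is a minimal illustration, not an ActiveMQ API: the class and method names are invented, and a real system would keep the balances in a database and apply the check and the update in one transaction. The message carries the source balance recorded when the transfer was initiated; a redelivered duplicate sees the already-updated balance and is rejected.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of idempotent transfer processing: reject a transfer
// whose recorded "balance before transfer" no longer matches reality.
public class IdempotentTransfers {
    private final Map<String, Long> balances = new HashMap<>();

    public IdempotentTransfers(Map<String, Long> initial) {
        balances.putAll(initial);
    }

    /**
     * Applies the transfer only if the source account still holds the
     * balance the message says it held when the transfer was initiated.
     * A redelivered duplicate sees the updated balance and is dropped.
     */
    public boolean apply(String from, String to, long amount, long expectedFromBalance) {
        long current = balances.getOrDefault(from, 0L);
        if (current != expectedFromBalance || current < amount) {
            return false; // duplicate (or stale) message: reject it
        }
        balances.put(from, current - amount);
        balances.put(to, balances.getOrDefault(to, 0L) + amount);
        return true;
    }

    public long balanceOf(String account) {
        return balances.getOrDefault(account, 0L);
    }

    public static void main(String[] args) {
        IdempotentTransfers bank =
            new IdempotentTransfers(Map.of("alice", 100L, "bob", 0L));
        System.out.println(bank.apply("alice", "bob", 40L, 100L)); // first delivery: applied
        System.out.println(bank.apply("alice", "bob", 40L, 100L)); // duplicate: rejected
        System.out.println(bank.balanceOf("alice"));               // 60
    }
}
```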
JMS always makes it possible to send duplicates. It's in the nature of the
at-least-once delivery semantics.
Are there any decent strategies to prevent sending the same message to the
queue twice?
Say for example it’s a money transfer … if you initiate the transfer twice
you’d transfer too much money. If you have an algorithm that retries, you
would want it to NOT enqueue the transfer message again.
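One common pattern for the producer side, sketched below under assumptions (the class name, the in-memory `Set`, and the `enqueue` stand-in are all illustrative): give each logical transfer a stable business key and record it before sending, so a retry loop skips keys it has already sent. In production the set would be a database table updated in the same transaction as the business data, and the consumer still needs its own dedup because JMS can always redeliver.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: suppress re-enqueuing the same logical message on retry
// by recording a stable business key before sending.
public class OnceOnlyProducer {
    private final Set<String> sentKeys = new HashSet<>();

    /**
     * Returns true if the message was enqueued, false if this business key
     * was already sent and the retry was suppressed.
     */
    public boolean sendIfNew(String businessKey, String payload) {
        if (!sentKeys.add(businessKey)) {
            return false; // already sent: do not enqueue the transfer again
        }
        enqueue(payload); // stand-in for the real JMS producer.send(...)
        return true;
    }

    private void enqueue(String payload) {
        // placeholder for producer.send(session.createTextMessage(payload))
    }

    public static void main(String[] args) {
        OnceOnlyProducer p = new OnceOnlyProducer();
        System.out.println(p.sendIfNew("transfer-42", "{\"amount\": 100}")); // sent
        System.out.println(p.sendIfNew("transfer-42", "{\"amount\": 100}")); // retry suppressed
    }
}
```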
Memory percent used represents messages sitting in the queue not being
consumed.
Look for messages that are not consumed for any reason. If messages are
passing through the queue, but some do not, check for the use of selectors.
Looking at the pending message count for the Queue can help detect such a
backlog.
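One way to read that pending count programmatically is over JMX, sketched below. The broker name "localhost", the remote JMX connector on port 1099, and the queue name "TEST" are all assumptions to adjust for your installation, and a running broker with remote JMX enabled is required.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: read a queue's pending-message count (QueueSize) via JMX.
public class PendingCount {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            // Queue MBean name pattern used by ActiveMQ 5.8+.
            ObjectName queue = new ObjectName(
                "org.apache.activemq:type=Broker,brokerName=localhost,"
                + "destinationType=Queue,destinationName=TEST");
            Long pending = (Long) conn.getAttribute(queue, "QueueSize");
            System.out.println("Pending messages on TEST: " + pending);
        } finally {
            jmxc.close();
        }
    }
}
```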
That code is trying to communicate with the broker without using the broker's
protocol (openwire). So, the broker is really complaining that it cannot
properly parse the content.
The correct way to use TCP/IP to connect to the broker is to create an
ActiveMQConnectionFactory() using the tcp transport.
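Completing that thought as a minimal sketch: the URL below uses the conventional default port, and the snippet needs the activemq-client jar on the classpath plus a running broker to actually connect.

```java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch: connect over TCP the supported way, letting the client library
// speak OpenWire, instead of writing raw bytes to the socket.
public class TcpConnect {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create a Session, producers, and consumers here ...
        connection.close();
    }
}
```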
Yeah, it should work fine. Just keep in mind that messages on a broker are
only reactively moved to another broker, and they only ever live on a single
broker at a time (for the most part).
So, shutting down a broker with messages that have not drained will either
lose those messages (for non-persistent ones) or leave them unavailable
until that broker comes back (for persistent ones).
Check out the multi-kahadb adapter which can accomplish the task; it's on the
KahaDB page:
http://activemq.apache.org/kahadb.html
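A sketch of the activemq.xml fragment (the directory path and the default DLQ name ActiveMQ.DLQ are assumptions). Note that mKahaDB splits destinations across separate KahaDB instances; it isolates the DLQ's journal from the main queues, but it does not mix in a JDBC/Postgres store.

```xml
<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/kahadb">
    <filteredPersistenceAdapters>
      <!-- A dedicated KahaDB instance for the DLQ -->
      <filteredKahaDB queue="ActiveMQ.DLQ">
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- Catch-all: everything else shares this instance -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```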
I'm working on comparing AMQP 1.0 clients with different brokers. In this case
I have ActiveMQ (5.10) and Qpidd (trunk, proton at about 0.8) brokers on a
Fedora 19 localhost. For the client I have a qpidmessaging client test that
moves unreliable messages.
The test client program:
For the f
Hi,
We are using ActiveMQ 5.10.0 with the replicated LevelDB store, with leader
election handled by ZooKeeper.
We have encountered an issue where the ZooKeeper session expired on all of
our 3 AMQ instances.
However, no leader election took place to elect a master after a new
session was established.
Take a look at whether the JVM is doing a full garbage collection at the time
when the failover occurs. Our team has observed clients fail over to an
alternate broker at a time that corresponded to a full GC, and it might be
that the same thing is happening here (but the failover isn't happening
gracefully).
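One way to test that theory is to enable GC logging on the client JVM and correlate full-GC timestamps with the failover events in the client's log. For the HotSpot JVMs of the Java 7/8 era, the usual flags are (the log path is an assumption):

```
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/client-gc.log
```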
OK, I wasn't sure that the queue policy for that covered the DLQ, though it
makes sense that the DLQ is treated like any other queue rather than
handled separately. Thanks for clarifying.
On Mon, Oct 20, 2014 at 4:41 AM, Gary Tully wrote:
> yes. periodic expiry processing. controlled by destination
> policy expireMessagesPeriod > 0
If you have a network of brokers, messages on topics will be forwarded to
whichever broker the consumer connects to, without duplicate delivery of
any messages so long as no messages were processed by the consumer without
being ack'ed. If you were using queues, there's the potential for messages
to get stuck on a broker that no longer has a consumer.
I'm looking for information on how to configure the dlq to use a different
persistence store than the main queues from which they are based. Is this
possible in amq? I want to use kahadb for the main queues and use postgres
for the dlq.
My main motivation is that I don't want a catastrophic event in one store to
take out both the main queues and the DLQ.
Ok, looks like the issue is back again.
The network issues have been fixed.
It is *not* a slow network - pings between VMs are less than 1ms.
I have not investigated the different throughput but wanted to focus on the
reliability of the replicated message store.
I made some configuration changes
We have a hub/spoke architecture. We did a rolling upgrade from 5.7.0 to
5.10.0 starting with the hub instance. So long as the client/server & peer
protocols don't become incompatible it's fine.
On 15 October 2014 21:39, djdick wrote:
> We're looking to upgrade from 5.9 to 5.10 and are using a network of brokers.
yes. periodic expiry processing. controlled by destination
policy expireMessagesPeriod > 0
On 17 October 2014 22:12, Tim Bain wrote:
> Those plugin modifications will get the message into the DLQ with an
> expiration date, but is there anything that will cause messages in the DLQ
> to be deleted when they expire?
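The destination policy described above can be sketched in activemq.xml like this (the 30-second period is illustrative, and `queue=">"` matches every queue, DLQs included):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Run the expiry task every 30 seconds on all queues -->
      <policyEntry queue=">" expireMessagesPeriod="30000"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```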