If you have some blocking code in a MessageListener which throws
InterruptedException, it isn't raised when you call consumer.close().
There are legitimate reasons for this, including blocking on a CountDownLatch...
otherwise I have to implement code that polls, checking whether the consumers
have been closed ...
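A minimal pure-JDK sketch of that poll() workaround (the names here are mine, not any ActiveMQ API): instead of blocking forever on latch.await() inside onMessage, wait in short slices and check a "closed" flag that our own shutdown code flips before calling consumer.close().

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class PollingAwait {

    /** Returns true if the latch opened, false if we noticed shutdown first. */
    static boolean awaitOrClosed(CountDownLatch latch, AtomicBoolean closed)
            throws InterruptedException {
        while (!closed.get()) {
            // Wait in 100 ms slices so a close() is noticed promptly.
            if (latch.await(100, TimeUnit.MILLISECONDS)) {
                return true;            // latch released normally
            }
        }
        return false;                   // shutting down; stop waiting
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        AtomicBoolean closed = new AtomicBoolean(false);
        closed.set(true);               // simulate our shutdown path running
        System.out.println(awaitOrClosed(latch, closed));   // prints false
    }
}
```

The same shape works for any blocking call that has a timed variant.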
I have a cluster of ActiveMQ boxes and a network of consumers.
The problem I'm having now is that the prefetch on the consumers fetches
too many messages and then other machines in the cluster get starved out of
work.
What I really want is a pattern where I can have say 50 ActiveMQ brokers
and
Looks like a workaround is to set jms.nonBlockingRedelivery=true
... then the normal dispatch path is used, which then calls the message
available listener.
On Thu, Jun 9, 2016 at 6:36 PM, Kevin Burton <bur...@spinn3r.com> wrote:
> OK.. I think this is a bug.. looks like rollback on nonBlockingRedelivery...
On Thu, Jun 9, 2016 at 6:22 PM, Kevin Burton <bur...@spinn3r.com> wrote:
> I'm trying to use a message available listener to notify me when I have a
> message after a rollback.
>
> https://gist.github.com/burtonator/ebf06a7238bd9a1273853ce5282acf02
>
I'm trying to use a message available listener to notify me when I have a
message after a rollback.
https://gist.github.com/burtonator/ebf06a7238bd9a1273853ce5282acf02
I'm doing this so that I can do non-blocking receive.
However, it's only called once, in the initial receive. Even if I do 5
We have a problem where all work is given to ONE host in our cluster. What
then happens is that this box goes to 100% CPU and other boxes are idle and
need more work.
We have an activemq setup where we create 16 connections to ActiveMQ (one
per core), and then one session per thread with a
s the database
> doesn't have to query to know which batches are complete, only be capable
> of retrieving the messages for a single group on demand.
>
> Tim
>
> On Sun, Jan 24, 2016 at 1:16 PM, Kevin Burton <bur...@spinn3r.com> wrote:
>
> > I have a pattern which I
I have a pattern which I think I need advice on...
I have three tasks... each a type of message consumer.
Let's call them A, B, and C.
A runs once, creates 15 messages, and sends them to B... then B processes these
messages and generates 15 new messages.
However, they need to be combined into a
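A hedged sketch of one way to keep such a batch together so a single consumer sees all of it and can do the combine: tag every message of a batch with the same JMSXGroupID (a standard grouping property ActiveMQ honors). The queue name and the "batchSize" application property are made up for illustration, not part of any spec.

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BatchedSend {
    public static void main(String[] args) throws Exception {
        Connection c = new ActiveMQConnectionFactory(
                "vm://localhost?broker.persistent=false").createConnection();
        c.start();
        Session session = c.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("b-output"));

        String batchId = "batch-42";                 // one id per run of task B
        for (int i = 0; i < 15; i++) {
            TextMessage m = session.createTextMessage("part-" + i);
            // Same group id => the broker routes the whole batch to one consumer.
            m.setStringProperty("JMSXGroupID", batchId);
            m.setIntProperty("batchSize", 15);       // lets C know when it has them all
            producer.send(m);
        }
        c.close();
    }
}
```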
>
>
>
> I think there's a limit to how many redelivery attempts you're willing to
> take before sending the message to the DLQ, which I think would cover most
> scenarios when that would happen in the wild. (You could always construct
> an arbitrarily bad failure case, but the odds of actually
I'm finding the documentation for usePrefetchExtension to be rather lacking.
What exactly does it do?
The documentation says:
> the prefetch extension is used when a message is delivered but not acked,
such that the broker can dispatch another message (e.g., prefetch == 0),
the idea being that
Nevermind! I had a weird bug in my test. Turns out it does exactly what I
expected it to do. New messages aren't dispatched until you call
acknowledge() or commit()... which is just what I want!
Kevin
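For the archives: the knob discussed above can be set per destination in activemq.xml; a hedged sketch (the queue pattern is illustrative):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- As discussed above: with the extension disabled, delivered-but-unacked
           messages still count against the prefetch window, so the broker won't
           dispatch past it until you acknowledge() or commit(). -->
      <policyEntry queue=">" usePrefetchExtension="false"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```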
On Sat, Oct 31, 2015 at 10:49 AM, Kevin Burton <bur...@spinn3r.com> wrote:
> I'
The JMS threading restrictions are here:
https://docs.oracle.com/cd/E19340-01/820-6767/aeqdb/index.html
which basically say that if you're using a MessageListener you have to work with
that message/session within the given onMessage function.
However, I don't think that's true with respect to ActiveMQ, is it?
I
>
>
> You have to remember that the specs are generally written from the
> application developer's standpoint. As a result, application developers
> must assume that for a portable application to work, the below is true.
> Note that it doesn't say that the client must throw an exception, etc.
>
Sorry for the delay in replying. I was dealing with a family issue that I
needed to prioritize...
On Wed, Oct 21, 2015 at 6:52 AM, Tim Bain wrote:
> Right off the top, can't you use INDIVIDUAL_ACK here, rather than
> committing transactions? That seems like the ideal mode to
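A hedged sketch of the INDIVIDUAL_ACK suggestion above (assumes an embedded vm:// broker and the ActiveMQ jars on the classpath; the queue name is made up): INDIVIDUAL_ACKNOWLEDGE is an ActiveMQ-specific mode that acks one message at a time, with no transaction to commit.

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQSession;

public class IndividualAckExample {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory(
                "vm://localhost?broker.persistent=false").createConnection();
        connection.start();

        // ActiveMQ accepts its own ack-mode constant here.
        Session session = connection.createSession(
                false, ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("work"));
        producer.send(session.createTextMessage("task-1"));

        MessageConsumer consumer =
                session.createConsumer(session.createQueue("work"));
        Message message = consumer.receive(2000);
        if (message != null) {
            // In this mode, acknowledge() acks only this one message.
            message.acknowledge();
        }
        connection.close();
    }
}
```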
ch extension for
> specific destinations.
>
> - Martin
>
>
>
> On 20.10.2015 04:15, Kevin Burton wrote:
>
>> We have a problem whereby we have a LARGE number of workers. Right now
>> about 50k worker threads on about 45 bare metal boxes.
>>
>> We have
the JVM
>
> Any recommendations on the increase?
>
>
>
> Regards,
>
> Barry Barnett
> Enterprise Queuing Services | (QS4U) Open Queuing Services Wells Fargo
> Cell: 803-207-7452
>
>
> -Original Message-
> From: burtonator2...@gmail.com [mailto:b
This is memory.
Increase ActiveMQ memory; if you still have the problem, try upgrading
to Java 8, as it's better with GC...
On Tue, Oct 20, 2015 at 5:48 AM, wrote:
> We are receiving the following errors: Any idea where I might look to
> figure this one out? I
Looks like neither of these properties is changed on redelivery,
which kind of makes them less valuable. In my situation I think I can only
really use them when the delivery count is 1. Better than nothing
though...
--
We’re hiring if you know of any awesome Java Devops or Linux Operations
I think we're having an issue with prefetch and some of our customers
stealing work and then sitting on it while other threads starve.
Is there a way to trace/listen to the prefetch system on the clients?
Alternatively, a way to see the timestamp that a JMS message was
prefetched. This way I can
Maybe repost the question? I can answer questions about compactions,
log-structured merge trees, and theoretical issues.
I haven't pushed into the LevelDB code much myself though.
On Fri, Oct 16, 2015 at 6:50 AM, Tim Bain wrote:
> Is there anyone on the list who's enough
mers (so your first consumer gets two of them initially), do you get
> both messages redelivered to the first consumer or only one?
>
> Tim
> On Sep 12, 2015 11:21 AM, "Kevin Burton" <bur...@spinn3r.com> wrote:
>
> > AH ! Good point about the prefetch policy. T
90f9b57269ae45
>
> On Fri, Sep 11, 2015 at 7:14 PM, Kevin Burton <bur...@spinn3r.com> wrote:
>
> > OK.
> >
> > For the life of me I can’t get this to work.
> >
> > https://gist.github.com/burtonator/eb7a70e1750080ca621e
> >
> > Basically I
OK.
For the life of me I can’t get this to work.
https://gist.github.com/burtonator/eb7a70e1750080ca621e
Basically I want to call rollback() so that a message is retried later.
This way, if there's a transient bug like a database connection failing, it
gets retried (but of course uses the retry
in parallel and that would need to be addressed.
On Sun, Aug 9, 2015 at 3:43 PM, Kevin Burton bur...@spinn3r.com wrote:
Hey guys.
Right now the ActiveMQ integration takes a long time. Last time we
discussed this (not sure if it was on the list) it was about 24 hours.
I’ve been playing
Hey guys.
Right now the ActiveMQ integration takes a long time. Last time we
discussed this (not sure if it was on the list) it was about 24 hours.
I’ve been playing with our internal builds and using CircleCI’s parallel
integration and I reduced our builds from 50 minutes down to 15.
I think
I’m wondering if anyone has seen this yet.
We’re migrating to Java 8 and our integration tests now give this exception
intermittently. Not sure why..
java.io.IOException: Failed to retrieve RMIServer stub:
javax.naming.NameNotFoundException: jmxrmi
at
I really wish ActiveMQ had integrated support for jolokia :)
On Thu, Jul 23, 2015 at 9:24 PM, Tim Bain tb...@alumni.duke.edu wrote:
That frequency will be much better.
And I think the RESTful API is supposed to be faster if you're doing
queries in bulk, because JMX isn't bad within the JVM,
See if you can enable Java mission control and then use a threaded profile
of it in production. I’m REALLY happy with JMC. The license says you can
use it for development which I think is ok if you have a production box to
debug a problem and then disable it afterwards. Either way it’s very easy
If you want to pause message acknowledgement and wait before consuming more
messages then you should probably use a synchronous consumer instead and
just call consumer.receive(), and not try to use a MessageListener, which
is asynchronous.
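A hedged sketch of that synchronous style (assumes an embedded vm:// broker and the ActiveMQ jars; the queue name is made up): receive(timeout) pulls the next message only when this thread is ready for it, so "pausing" is simply not calling receive() again.

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SyncReceiveLoop {
    public static void main(String[] args) throws Exception {
        Connection c = new ActiveMQConnectionFactory(
                "vm://localhost?broker.persistent=false").createConnection();
        c.start();
        Session session = c.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        session.createProducer(session.createQueue("work"))
               .send(session.createTextMessage("hello"));

        MessageConsumer consumer =
                session.createConsumer(session.createQueue("work"));
        boolean running = true;
        while (running) {
            Message m = consumer.receive(1000);        // null on timeout
            if (m == null) continue;
            System.out.println(((TextMessage) m).getText());
            m.acknowledge();   // CLIENT_ACKNOWLEDGE acks everything received so far
            running = false;
        }
        c.close();
    }
}
```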
I have to move away from synchronous because we
If you’re using a MessageListener, what’s the best way to use that Message
in other threads? I was thinking of reading the message as a string, then
forwarding the *string* to other threads, with just a reference to the
message. Then stick it back in a queue so that the original thread can
commit
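A pure-JDK sketch of the handoff described above: the JMS thread extracts the body as a String, workers only ever touch the String, and a completion queue hands an id back so the original thread can acknowledge/commit the Message (which never leaves that thread). All names here are illustrative.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class Handoff {
    static final class Work {
        final long id; final String payload;
        Work(long id, String payload) { this.id = id; this.payload = payload; }
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<Work> toWorkers = new LinkedBlockingQueue<>();
        BlockingQueue<Long> completed = new LinkedBlockingQueue<>();

        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            workers.submit(() -> {
                try {
                    while (true) {
                        Work w = toWorkers.take();
                        // Process w.payload only; the Message itself stays
                        // with the original session thread.
                        completed.put(w.id);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // "JMS thread" side: hand off the body, wait for completion, and then
        // this same thread would call message.acknowledge()/session.commit().
        toWorkers.put(new Work(1, "body-1"));
        long done = completed.take();
        System.out.println("completed " + done);
        workers.shutdownNow();
    }
}
```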
I have a threaded app .. so let’s say I have 100 threads waiting to do work.
I want an async message listener to read these messages, UP TO 100
messages, until I can process and commit() them.
But I don’t think there’s a way to do this.
I had ASSUMED that setting a prefetch of say 10, and a
at
that wrong.
Tim
On Mon, Apr 6, 2015 at 1:58 PM, Kevin Burton bur...@spinn3r.com wrote:
Pretty sure getMessage() in MemoryMessageStore has a bug.
All access to messageTable is synchronized; this method is not. This
means that there’s a race where a message can go into the queue
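A minimal pure-JDK sketch of the bug pattern described (illustrative code, not the ActiveMQ source): every mutator takes the map's lock, but an unsynchronized reader can observe the map mid-update. The fix is for the reader to take the same lock.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Store {
    private final Map<String, String> messageTable = new LinkedHashMap<>();

    public void addMessage(String id, String body) {
        synchronized (messageTable) {
            messageTable.put(id, body);
        }
    }

    public String getMessage(String id) {
        synchronized (messageTable) {   // without this block, the read races
            return messageTable.get(id);
        }
    }
}
```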
Eeek. Are you sure you're not wasting more resources switching contexts
than you're gaining by nominally keeping a thread on all cores at all
times? (Some of that CPU time is being spent moving threads around rather
than running them.)
The threads are used to hide IO for the most part..
You can use JMX to work with a queue and delete messages manually. It’s not
amazingly fast in bulk though.
On Mon, Jun 8, 2015 at 10:01 PM, Tim Bain tb...@alumni.duke.edu wrote:
I thought the whole point of QueueBrowser was to allow browsing (not
consuming) messages. Am I wrong to think
at 8:58 AM, Kevin Burton bur...@spinn3r.com wrote:
I can see two potential problems that your description didn't draw a line
between:
1. With a large prefetch buffer, it's possible to have one thread have
a
large number of prefetched tasks and another have none, even if all
tasks
I think I’m seeing a situation where the broker isn’t sending messages on
queue A when there are a lot of messages on queue B..
I have consumers listening but just not receiving messages.
I think at my volume, we fix one bug, only to encounter another one :-P
--
Founder/CEO Spinn3r.com
Advisories break when using the memory store. A warning that a null pointer
exception was caught goes to the log, but the advisories aren’t raised.
OK, thanks for sharing. Have you created a bug report for it? If not, can
you do that so it doesn't get lost?
I think I did.. I will
I can see two potential problems that your description didn't draw a line
between:
1. With a large prefetch buffer, it's possible to have one thread have a
large number of prefetched tasks and another have none, even if all
tasks
take an average amount of time to complete. No
I think I’m in a weird edge situation caused by a potential bug / design
flaw.
I have a java daemon that needs to process tasks as much as possible. It’s
a thread per task model with each box having a thread per session and
consumer.
This is required per activemq/jms:
at 00:06, Kevin Burton bur...@spinn3r.com wrote:
I’m trying to track down performance issues for our broker.
I wrote a quick stopwatch around sending messages and it’s taking 2-6
seconds to send requests to ActiveMQ.
I have NO idea why this could be because I have reasonable CPU
Thanks for creating the issues!
The problem with my patch set, is that I’m still
stuck on 5.10.2. There’s a bug introduced sometime around 5.11 that only
impacts the memory store. I haven’t been able to track it down yet so I
can’t retarget my patches to head.
Can you provide more
might not be aware
of?
I submitted https://issues.apache.org/jira/browse/AMQ-5823 to capture
this.
Tim
On Tue, Jun 2, 2015 at 5:15 PM, Kevin Burton bur...@spinn3r.com wrote:
Here’s another issue I found..
http://i.imgur.com/EeBNiJK.png
I'm trying to figure out why this is needed
Btw.. it looks like you can set useConsumerPriority=false on the
destination policy entry and get a free 5% performance boost.
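In activemq.xml terms, that would look something like this hedged sketch (the queue pattern is illustrative):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Dispatch round-robin instead of honoring consumer priority. -->
      <policyEntry queue=">" useConsumerPriority="false"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```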
On Tue, Jun 2, 2015 at 8:16 AM, Kevin Burton bur...@spinn3r.com wrote:
Both are decent... Just not really for production use. One of the cool
things about java
Here’s another issue I found..
http://i.imgur.com/EeBNiJK.png
I'm trying to figure out why this is needed. Shouldn’t each consumer add
themselves as a subscription?
At least in our situation, it seems like ActiveMQ could be 60% faster.
On Tue, Jun 2, 2015 at 6:16 AM, Tim Bain tb...@alumni.duke.edu wrote:
Kevin,
Great finds. What tool were you using?
Java Mission Control.. free for development.. but I think the community
needs a real open source tool. Pseudo-free isn’t a good idea.
Is it safe to assume you'll submit
/ and JProfiler
http://www.ej-technologies.com/products/jprofiler/overview.html are great
commercial tools, and they have licenses for open source projects.
On Tue, Jun 2, 2015 at 11:19 AM Kevin Burton bur...@spinn3r.com wrote:
On Tue, Jun 2, 2015 at 6:16 AM, Tim Bain tb...@alumni.duke.edu
We deployed a continuous profiler at work today and so far the results look
really interesting.
Definitely worth the investment to setup!
Look like we’re spending 50% of our time here:
org.apache.activemq.broker.jmx.ManagedRegionBroker
@Override
public void
OK.. so funny story.
Our internal code base is named artemis.
We usually have submodules named artemis-foo or artemis-bar.
We have one now named artemis-activemq… which is our embedded ActiveMQ.
So if we use this new version of artemis, our submodule would be
artemis-artemis
:-P
On Mon, Jun
I’m trying to track down performance issues for our broker.
I wrote a quick stopwatch around sending messages and it’s taking 2-6
seconds to send requests to ActiveMQ.
I have NO idea why this could be because I have reasonable CPU on this box
and what could be happening. Our broker runs out of
setTimeToLive works. Thanks for your help. You saved my application.
Ha! That rocks!
- the fact that the performance degradation was for all ActiveMQ and not
only for the scheduled queue
The *basic* ActiveMQ setup, with no features enabled, is somewhat fast and
non-complicated.
When you start enabling features, like advisories, scheduling, etc. it
becomes more complicated.
I
One warning about using the activemq scheduler is that it requires the use
of KahaDB. It doesn’t yet support LevelDB (or replication).
So if you go down the scheduler path, you’re stuck with KahaDB…
At least for the foreseeable future.
On Thu, May 21, 2015 at 1:14 AM, contezero74
The current version of ActiveMQ doesn’t really scale well over say
1000-2000 queues (if you GC them).
If you create one queue per client then you will have lots of queues.
I haven’t submitted my patches yet (trying to port our code to 5.11) but it
should resolve that situation.
On Thu, May 21,
If I have a queue with 100 messages, and a consumer with prefetch=100,
and the consumer prefetches the entire queue, and the messages aren’t ack’d
yet, what’s the queue size?
I would think it would remain at 100..
Kevin
I hadn't expected purge() to call heapify(), so I was just expecting O(N)
runtime for the actual removal. The comments claim the runtime of the
purge operation is O(N + ClogN) where C is the number of deleted elements
(which would make it O(N) when C is 1, but looking at the code I think it's
I wasn't at all clear why changing the frequency (to never) of message
expiration checks would affect the performance of destination GC operations
when the number of queues is large. Is that due to synchronization between
those two operations, or thread contention because one or both takes
May 2015 at 23:08, Kevin Burton bur...@spinn3r.com wrote:
I’ve found 3 places where CopyOnWriteArrayList was being used and causing
significant performance impact O(N^2) when using large numbers of queues.
Could I get feedback on these 3 changes?
https://github.com/spinn3r/activemq/commit
On Thu, May 7, 2015 at 6:05 AM, Tim Bain tb...@alumni.duke.edu wrote:
The other reason a List sometimes gets used is when you want to be able to
arbitrarily order the elements. If you're using the natural ordering or
can write a comparator, you can use a SortedSet, but if you need them to be
:58, Kevin Burton bur...@spinn3r.com wrote:
On Thu, May 7, 2015 at 6:30 AM, Tim Bain tb...@alumni.duke.edu
wrote:
I agree with your approach with the WeakRunnable; I think that will
achieve
the goal without the performance hit of calling purge() after each
cancellation.
I
to fix it to use this.
On Fri, May 8, 2015 at 8:04 PM, Tim Bain tb...@alumni.duke.edu wrote:
On May 8, 2015 11:33 AM, Kevin Burton bur...@spinn3r.com wrote:
On Thu, May 7, 2015 at 6:05 AM, Tim Bain tb...@alumni.duke.edu wrote:
The other reason a List sometimes gets used is when you want
http://i.imgur.com/JyLrIZQ.png
here’s a screenshot of the total number of queues per server. It’s pretty
clear where I did the upgrade ;)
On Thu, May 7, 2015 at 11:58 AM, Kevin Burton bur...@spinn3r.com wrote:
On Thu, May 7, 2015 at 6:30 AM, Tim Bain tb...@alumni.duke.edu wrote:
I agree
On Thu, May 7, 2015 at 6:30 AM, Tim Bain tb...@alumni.duke.edu wrote:
I agree with your approach with the WeakRunnable; I think that will achieve
the goal without the performance hit of calling purge() after each
cancellation.
I went ahead with this solution and it seems to be working well
...@gmail.com wrote:
Nice! I like what you've done. I originally used ConcurrentHashMap, but
found it a bit of a hog; would be interested if you find different?
On 7 May 2015, at 19:58, Kevin Burton bur...@spinn3r.com wrote:
On Thu, May 7, 2015 at 6:30 AM, Tim Bain tb...@alumni.duke.edu
:14, Kevin Burton bur...@spinn3r.com wrote:
Let’s say you have a queue with 1M items.. they are all low priority.
Then
you add a high priority entry.
I believe, due to message cursors, that it won’t be executed until it’s
read into the “maxPageSize window”.
Is this correct or does
heard anyone say that the behavior is
different for any of the data store types), but I can't claim any firsthand
knowledge so if that's not right hopefully someone else will say so.
Tim
On Mon, May 4, 2015 at 12:14 PM, Kevin Burton bur...@spinn3r.com wrote:
Let’s say you have a queue
Let’s say you have a queue with 1M items.. they are all low priority. Then
you add a high priority entry.
I believe, due to message cursors, that it won’t be executed until it’s
read into the “maxPageSize window”.
Is this correct or does it depend on the underlying store?
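For context, the paging and priority behavior are tunable per destination; a hedged activemq.xml sketch using attributes from the standard policyEntry (values are illustrative, not recommendations):

```xml
<!-- maxPageSize controls how many messages the cursor pages into memory at
     once; prioritizedMessages asks the store to order by priority, so a
     high-priority message is seen once it lands in the paged-in window. -->
<policyEntry queue=">" prioritizedMessages="true" maxPageSize="2000"/>
```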
KahaDB and LevelDB
applies here, but someone who knows the architecture of ActiveMQ
plugins should probably confirm that.
Tim
On May 2, 2015 3:05 PM, Kevin Burton bur...@spinn3r.com wrote:
OK. I’ve fixed 2-3 significant bugs in ActiveMQ with large numbers of
queues and degraded performance. Most of these are O(N
I’m doing a bunch of performance analysis of ActiveMQ this weekend to see
if I can improve queue creation and destruction time. The good news is
that there are a lot of areas of optimization.
It looks like one is that advisory topics are created with the default
expireMessagesPeriod (which is
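For anyone wanting to try this, a hedged sketch of turning that scan off for advisory topics in activemq.xml:

```xml
<!-- Disable the periodic message-expiry scan on advisory topics; with many
     queues these scans add up, per the findings above. -->
<policyEntry topic="ActiveMQ.Advisory.>" expireMessagesPeriod="0"/>
```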
I’m sorry.. this is N^2 because the Timer just keeps growing for every new
queue; cancelled tasks need to be purged, and purge has to evaluate each one.
On Sat, May 2, 2015 at 1:40 PM, Kevin Burton bur...@spinn3r.com wrote:
Also, while this is a small performance boost in my example, this should
have
during this operation so no new queues can be created during a queue GC.
On Sat, May 2, 2015 at 12:23 PM, Kevin Burton bur...@spinn3r.com wrote:
I’m doing a bunch of performance analysis of ActiveMQ this weekend to see
if I can improve queue creation and destruction time. The good news
OK. I’ve fixed 2-3 significant bugs in ActiveMQ with large numbers of
queues and degraded performance. Most of these are O(N^2) bugs so the more
queues you have the more this becomes VERY painful.
I don’t have an easy fix of this one though.
Queue creation right now is about 3x slower than it
And it looks like these changes, along with setting expireMessagesPeriod=0
on advisory topics, dramatically improves performance. On small numbers of
queues (5k) it’s 100x. In large numbers it will be even higher.
On Sat, May 2, 2015 at 3:08 PM, Kevin Burton bur...@spinn3r.com wrote:
I’ve
I’m confused by something. Why don’t messages pile up in advisory topics?
Topics only deliver messages to consumers who are actually listening I
assume?
On Sat, May 2, 2015 at 2:07 PM, Kevin Burton bur...@spinn3r.com wrote:
I’m sorry.. this is N^2 because the Timer just keeps growing for every
Wanted some feedback on this.
https://gist.github.com/burtonator/34a67c24ca9ce0574c04
I think I want to refactor the cancel method…
it calls purge() which is VERY expensive on large numbers of queues. N^2
expensive.
once the cancel() is called, the timer task won’t get executed, HOWEVER
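A pure-JDK sketch of the WeakRunnable idea being discussed (names are mine): the TimerTask holds only a weak reference to the real work, so once the caller drops the work, the bulky payload can be collected without an O(N) Timer.purge() on every cancel; the empty shell cancels itself the next time it fires.

```java
import java.lang.ref.WeakReference;
import java.util.TimerTask;

public class WeakTask extends TimerTask {
    private final WeakReference<Runnable> ref;

    public WeakTask(Runnable work) {
        this.ref = new WeakReference<>(work);
    }

    @Override
    public void run() {
        Runnable work = ref.get();
        if (work == null) {
            cancel();      // payload was collected; let the Timer drop this shell
        } else {
            work.run();
        }
    }

    public static void main(String[] args) {
        new WeakTask(() -> System.out.println("ran")).run();
    }
}
```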
I’ve found 3 places where CopyOnWriteArrayList was being used and causing
significant performance impact O(N^2) when using large numbers of queues.
Could I get feedback on these 3 changes?
https://github.com/spinn3r/activemq/commit/06ebfbf2a4d9201b57069644bdb7eb8274da0714
I don't think it's the network stack where that code works; I'm pretty sure
the message itself does decompression when the body is accessed via the
getter. But when you read the message body to serialize it to Chronicle,
you're likely to invoke that decompression code and end up undoing the
, but fixes are better. :) But it might be a way to
at least get production working again so you can solve this at a more
reasonable pace rather than spending 15-hour days on it.
Tim
On Fri, Apr 24, 2015 at 5:04 PM, Kevin Burton bur...@spinn3r.com wrote:
Sounds like a good idea. I just pushed
On Fri, Apr 24, 2015 at 2:54 PM, Tim Bain tb...@alumni.duke.edu wrote:
If you start from a zero-state (broker and all clients stopped) and attach
only one consumer with your artemis_priority = 9 selector, do you get
any messages to it?
If I restart a new broker, and start consuming messages,
to see why messages aren't
getting handed off to your consumer with the selector.
Tim
On Fri, Apr 24, 2015 at 4:27 PM, Kevin Burton bur...@spinn3r.com wrote:
http://imgur.com/a/2myja
What are the two screenshots; with and without the selector? If that's
right, then clearly zero
I’ve been working 15 hour days for the last 2-3 weeks trying to resolve
this so if this is somewhat incoherent it’s probably due to lack of sleep
:-P
I think we’re experiencing a bug in ActiveMQ which is VERY hard to
reproduce but happens regularly in our production setup.
I can’t reproduce it
Here are two screenshots of the JMX consumer stats, one without a selector
and one with a selector.. you can see the one with the selector just not
working.
[image: Inline image 1]
[image: Inline image 2]
On Fri, Apr 24, 2015 at 1:50 PM, Kevin Burton bur...@spinn3r.com wrote:
I’ve been
GC should definitely work.. do the topics have consumers? If they have ANY
consumers this will block GC and they won’t be able to be reclaimed.
Also, the time is reset for each new consumer.
Another note. If you have a LARGE number of topics that need to be GCd,
there’s a bug whereby ActiveMQ
over.
I think ActiveMQ should probably log an error when this happens.
On Fri, Apr 24, 2015 at 2:03 PM, Timothy Bish tabish...@gmail.com wrote:
On 04/24/2015 04:50 PM, Kevin Burton wrote:
I’ve been working 15 hour days for the last 2-3 weeks trying to resolve
this so if this is somewhat
On Fri, Apr 24, 2015 at 2:27 PM, Tim Bain tb...@alumni.duke.edu wrote:
If every message has at least one consumer for which the consumer's
selector matches the message, you'll eventually process every message.
That’s what I thought too, but that doesn’t work.
Consumers that have no
, though that will certainly
produce the behavior too.
Tim
On Fri, Apr 24, 2015 at 3:21 PM, Kevin Burton bur...@spinn3r.com wrote:
Literally JUST found this issue!
Is this documented anywhere? My issue is that there *is* no sparse
message
distribution. Every message has a value from
yes. it’s not very buggy/reliable.
What we did was to use activemq in embedded mode and used our own/internal
daemon infrastructure.
I guess the point is that activemq is WAY easier to embed than say
something like Cassandra or Elasticsearch.
So if you can easily make your own Java daemons I
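A hedged sketch of embedding (assumes the activemq-broker jar; port and settings are illustrative): BrokerService is the programmatic broker, so your own daemon owns its lifecycle.

```java
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);          // or configure a store here
        broker.setUseJmx(false);
        broker.addConnector("tcp://localhost:61616");  // for external clients
        broker.start();
        // ... the application shares this JVM with the broker; in-JVM
        // clients can connect over the vm:// transport with no socket ...
        broker.stop();
    }
}
```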
On Mon, Apr 20, 2015 at 6:24 AM, Tim Bain tb...@alumni.duke.edu wrote:
I'm confused about what would drive the need for this.
Is it the ability to hold more messages than your JVM size allows? If so,
we already have both KahaDB and LevelDB; what does Chronicle offer that
those other two
I’ve been thinking about how messages are stored in the broker and ways to
improve the storage in memory.
First, right now, messages are stored in the same heap, and if you’re using
the memory store, that’s going to add up. This will increase GC
latency, and you actually need 2x more
)?
On Sunday, April 19, 2015, Kevin Burton bur...@spinn3r.com wrote:
Interesting. It’s already 1 in the connection configuration. I assume
you
mean queuePrefetch as it’s named differently in the destination policy.
On Sun, Apr 19, 2015 at 5:42 PM, Justin Reock
justin.re...@roguewave.com
Also, I’ve run with and without producer flow control and that also doesn’t
impact the situation.
On Sun, Apr 19, 2015 at 8:01 PM, Kevin Burton bur...@spinn3r.com wrote:
Here’s the public gist of our XML config. (It needs some comment cleanup,
but that’s what we’re running with.)
https
?
-Justin
On Apr 19, 2015 8:15 PM, Kevin Burton bur...@spinn3r.com wrote:
I’m totally stumped on this bug ….
Essentially, I have a queue that locks up and consumers in my main daemon
no longer consume messages from it.
It’s basically dead. If I restart my daemon, no more messages are
consumed
Here’s the public gist of our XML config. (It needs some comment cleanup,
but that’s what we’re running with.)
https://gist.github.com/burtonator/b5f4228b0f0acbf05b4e
We’re running 5.10.2 . I’ve reviewed the bugs fixed since then and nothing
seems to apply to our situation. I would upgrade but
with the larger queues.
On Sun, Apr 19, 2015 at 8:03 PM, Kevin Burton bur...@spinn3r.com wrote:
Also, I’ve run with and without producer flow control and that also
doesn’t impact the situation.
On Sun, Apr 19, 2015 at 8:01 PM, Kevin Burton bur...@spinn3r.com wrote:
Here’s the public gist of our
We just deployed the NIO connector as a test and it looks like it’s using
1/3rd the memory in our configuration vs the TCP connector.
I would definitely be happy with that outcome.. but we were using thread
pooling with the TCP connector and I didn’t see many threads being used.
What was the
On Thu, Apr 16, 2015 at 8:34 PM, Tim Bain tb...@alumni.duke.edu wrote:
If that was happening you'd see the DestinationViewMBean's ExpiredCount
increasing in the JMX counters, but only if the expiration was happening on
the broker; as far as I could tell, there's no stat that captures when
When we initially deployed ActiveMQ we took the normal route of using it
with an init.d script, running the daemon like we do Apache, Cassandra, etc.
However, I found that we lean pretty hard on the actual implementation of
ActiveMQ and need to go above and beyond what ActiveMQ provides.
in the broker whenever PFC kicks in; we watched the
logs for that line and fire off an email to get someone to investigate.
Would that meet your needs?
On Apr 16, 2015 10:10 PM, Kevin Burton bur...@spinn3r.com wrote:
I’m looking at implementing producer flow control so that I don’t fill
On Fri, Apr 17, 2015 at 11:42 AM, Tim Bain tb...@alumni.duke.edu wrote:
Hmm, too bad pulling the obvious threads didn't yield anything. If you
start seeing this happen regularly, maybe you could run a few of the
clients that are likely to hit the problem with the debugging port open so
you
On Fri, Apr 17, 2015 at 11:53 AM, Tim Bain tb...@alumni.duke.edu wrote:
If you embed it in your app, then you lose the ability to cycle your app
without taking down your broker (which is a bad thing if you use
non-persistent messaging as we do).
Oh. to clarify. I built our own foo-activemq
It looks like redelivery variables work if you call session.rollback() …
but not if you just never send an acknowledgment when running with
CLIENT_ACKNOWLEDGE mode.
http://activemq.apache.org/redelivery-policy.html
It seems like it’s exactly 2000ms no matter if I set initialRedeliveryDelay
or
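For reference, a hedged sketch of configuring the redelivery policy on the client factory (assumes the activemq-client jar; per the thread above, it only takes effect when session.rollback() is actually called):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {
    public static void main(String[] args) {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setInitialRedeliveryDelay(500);      // first retry after 500 ms
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);           // 500, 1000, 2000, ...
        policy.setMaximumRedeliveries(6);           // then the DLQ takes over
        System.out.println(policy.getMaximumRedeliveries());
    }
}
```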