reproduced - will fix shortly
On 23 Jan 2007, at 07:26, James Strachan wrote:
I thought that the new spool-to-disk feature should kick in
automatically - obviously not :). Anyone know if that has to be
explicitly enabled?
BTW are you explicitly disabling persistence on ActiveMQ? I think you
might need to keep it enabled for spooling to disk to work.
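If you're running an embedded broker, I'd expect something roughly like
this to keep persistence on (untested sketch off the top of my head; the
data directory and connector URI are just examples):

import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerWithStore {
    public static void main(String[] args) throws Exception {
        // Leave broker persistence enabled so there is a store to spool
        // non-persistent messages into when memory fills up.
        BrokerService broker = new BrokerService();
        broker.setPersistent(true);
        broker.setDataDirectory("activemq-data"); // example store location
        broker.addConnector("tcp://localhost:61616");
        broker.start();
    }
}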
On 1/22/07, Albert Strasheim <[EMAIL PROTECTED]> wrote:
Hello all
On Mon, 22 Jan 2007, James Strachan wrote:
> FWIW I'd definitely recommend you use non-persistent sending on your
> producers (otherwise the spool to disk won't kick in). Also are you
> actually ack'ing your messages?
First off, we're using ActiveMQ 4.2 from trunk.
I've taken a closer look at our application and I think I'm beginning
to understand what's going on.
We have a non-persistent producer sending to a non-durable consumer
through a topic. Sessions are set to auto-acknowledge. We're seeing
issues when running our test code, which uses an embedded broker, but
I think these issues will also crop up with a separate broker.
The producer is sending relatively large messages (1-2 kB) as fast as
the data they contain can be read from disk. The consumer subscribed
to the topic does a lot of processing on these messages (i.e. the
consumer is slow) before sending a few smaller messages to another
topic. What usually happens is that this send blocks, due to
MemoryUsagePercent being 100. Since the consumer blocks while trying
to send the small message, it no longer consumes any of the large
messages and our application stops.
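For reference, this is roughly what the two ends look like, stripped
right down (the topic names, payload size and vm:// URI are made up;
the real code obviously does a lot more):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FastProducerSlowConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory =
            new ActiveMQConnectionFactory("vm://localhost");
        Connection connection = factory.createConnection();
        connection.start();

        // Consumer side: slow processing, then a small send to another topic.
        final Session consumerSession =
            connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = consumerSession.createConsumer(
            consumerSession.createTopic("big.data"));
        final MessageProducer resultProducer = consumerSession.createProducer(
            consumerSession.createTopic("small.results"));
        resultProducer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                // ... lots of processing on the big message ...
                try {
                    // This send is the one that blocks once the broker's
                    // MemoryUsagePercent hits 100, which in turn stops
                    // consumption of "big.data".
                    resultProducer.send(
                        consumerSession.createTextMessage("result"));
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            }
        });

        // Producer side: non-persistent sends of large messages, as fast
        // as the data can be read from disk.
        Session producerSession =
            connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = producerSession.createProducer(
            producerSession.createTopic("big.data"));
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

        while (true) {
            BytesMessage msg = producerSession.createBytesMessage();
            msg.writeBytes(new byte[2 * 1024]); // roughly 2 kB payload
            producer.send(msg);
        }
    }
}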
The same thing happens even when the consumer simply consumes the
large message and puts it into a Java queue serviced by another
thread. The rapidly produced big messages fill up the broker's memory,
causing the thread to block when trying to send a small message after
taking a big message from the queue.
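The hand-off variant looks roughly like this (again heavily simplified
and the names are invented; the worker thread uses its own session and
producer):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.jms.*;

public class QueueHandoff {
    // Bounded in-process hand-off between the JMS listener thread and
    // the worker thread that does the real processing.
    private final BlockingQueue<Message> pending =
        new LinkedBlockingQueue<Message>(100);

    // Installed on the consumer of the big-message topic.
    public MessageListener listener() {
        return new MessageListener() {
            public void onMessage(Message message) {
                try {
                    pending.put(message);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        };
    }

    // The worker gets its own session and producer for the small messages.
    public void startWorker(final Session workerSession,
                            final MessageProducer smallProducer) {
        new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        Message big = pending.take();
                        // ... process the big message ...
                        // This small send still blocks once the broker's
                        // memory is full of big messages.
                        smallProducer.send(
                            workerSession.createTextMessage("result"));
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }
}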
In our case, we definitely don't want to drop messages if the
consumer can't keep up. Instead, we want the fast producer to block,
without using up all the broker's resources and thereby preventing the
slow consumers from actually handling the messages they receive (which
involves sending another message).
I seem to recall reading somewhere in the mailing list archives, or
in one of the JIRA issues, about a per-destination buffer. It seems
like this would solve our problem: as long as there's a bit of space
for the slow consumers to send out their messages, they can keep
going, and the producer can send new big messages as space becomes
available.
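I haven't tried anything yet, but I was imagining a broker-side knob
along these lines (a pure guess at what the configuration might look
like; the PolicyEntry/memoryLimit names and the 32 MB figure are my
assumptions, not something I've seen documented for 4.2):

import java.util.Arrays;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class PerDestinationLimit {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(true);

        // Guess: cap the memory the big-message topic may use, so the
        // small result topic always has headroom and the slow consumers
        // can keep sending.
        PolicyEntry bigTopicPolicy = new PolicyEntry();
        bigTopicPolicy.setTopic("big.data");
        bigTopicPolicy.setMemoryLimit(32 * 1024 * 1024); // 32 MB, arbitrary

        PolicyMap policyMap = new PolicyMap();
        policyMap.setPolicyEntries(Arrays.asList(bigTopicPolicy));
        broker.setDestinationPolicy(policyMap);

        broker.start();
    }
}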
Should this be happening already? Are we doing something wrong?
Any comments appreciated.
Cheers,
Albert
--
James
-------
http://radio.weblogs.com/0112098/