Definitely the best approach would be to use message spooling, which is available in 4.2 (though I'm hoping that the 4.2 release will in fact be renamed the 5.0 release).

There should be some milestone releases for 4.2 available in the next couple of weeks.
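Roughly, the spooling is driven by a pending-message cursor policy per destination. A sketch, assuming the trunk/5.0-style schema (element names may still shift before release, so treat this as illustrative):

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- spool pending messages for all queues through the store cursor
           instead of holding them all in broker memory -->
      <policyEntry queue=">">
        <pendingQueuePolicy>
          <storeCursor/>
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>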
cheers,
Rob
On 29 Dec 2006, at 11:45, drvillo wrote:
Hi all
I have been trying to get a stable configuration for over a month now, but I keep encountering a variety of issues [1] that reduce everything to an unusable system.
I have now built a simpler system, which perhaps exposes the same behavior; it is just a pipeline of two queues:

seeds ---> q1 ---> L1 ---> q2 ---> L2

What happens is that when L1 receives a message, it sends a number of messages to q2.
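In code, the L1 stage is essentially the following (a minimal sketch in plain JMS; the class name and message payloads are illustrative, only the queue names and the toForward knob come from the setup above):

import javax.jms.*;

// L1: for every seed received from q1, fan out `toForward` messages to q2
public class ForwardingStage implements MessageListener {
    private final Session session;
    private final MessageProducer toQ2;
    private final int toForward;

    public ForwardingStage(Session session, int toForward) throws JMSException {
        this.session = session;
        this.toForward = toForward;
        this.toQ2 = session.createProducer(session.createQueue("q2"));
    }

    public void onMessage(Message seed) {
        try {
            for (int i = 0; i < toForward; i++) {
                toQ2.send(session.createTextMessage("derived-" + i));
            }
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}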
I run tests varying the number of messages sent to q1 and the number of messages forwarded by L1 to q2 (toSend and toForward respectively). A first scenario, representing a real use case, is having toSend small (1-10) and toForward big (10k-100k).
A first observation is that with Kaha persistence everything eventually stops once the broker gets too full [2] of messages, producing the attached dump. Thus I'm using the default JDBC persistence.
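For the record, switching between the two is just a matter of the persistenceAdapter element in activemq.xml; the attribute names below are from memory and may differ slightly between releases, so double-check against your version:

<!-- Kaha persistence (the setup that eventually locks up for me): -->
<persistenceAdapter>
  <kahaPersistenceAdapter directory="activemq-data"/>
</persistenceAdapter>

<!-- default journaled JDBC persistence (what I'm using now): -->
<persistenceAdapter>
  <journaledJDBC journalLogFiles="5" dataDirectory="activemq-data"/>
</persistenceAdapter>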
A general behavior that I've seen is that if toSend is big (10k) then no flow control seems to take place: L2 eventually stops receiving messages when the broker runs out of memory (when not using a UsageManager), or the heap space is exhausted (when setting it to something like 1GB). In this case I'd expect sends to L1 to be slowed down, to allow L2 to consume the (more numerous) messages that are arriving, but this doesn't happen: seeds keep being sent at the same rate.
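What I'm hoping for is something like the per-destination producer flow control that, as far as I understand, the upcoming 5.x schema exposes. A sketch, assuming that schema (the limit value is just a placeholder):

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- slow/block producers once a queue holds ~5mb of pending messages -->
      <policyEntry queue=">" producerFlowControl="true" memoryLimit="5mb"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>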
On the other hand, having toSend << toForward means all the messages get delivered, but after some time everything slows down awfully (maybe 1 msg/second), mainly because the JVM is garbage collecting all the time (the heap space is all used up).
Basically I need some way to slow down the first producer so that the whole system is not flooded with messages that I can then barely consume.
My question then is: how should I configure the destination and broker memory limits, and possibly the prefetch values, so that I neither run out of memory nor end up with a frozen system?
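For concreteness, the broker-wide limit I mean is the UsageManager in activemq.xml, roughly as in the stock 4.x config (element names from memory; double-check them against your release):

<broker xmlns="http://activemq.org/config/1.0">
  <!-- broker-wide memory ceiling used for flow control decisions -->
  <memoryManager>
    <usageManager id="memory-manager" limit="20 MB"/>
  </memoryManager>
  <!-- ... transports, persistence, etc. ... -->
</broker>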
Setting higher memory limits causes the JVM to need a bigger heap space (which makes it less stable in uncontrolled environments), but keeping them low seems to prevent the broker from sending more messages at some point (again [1]). On the other hand, since everything runs in the same VM, I don't know whether it's better to set prefetch limits higher or lower, as pending messages have to be stored somewhere, either on the broker or on the consumer...
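For reference, the prefetch can be tuned either on the connection factory or per destination; a minimal sketch (the values are just placeholders, not recommendations):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.command.ActiveMQQueue;

ActiveMQConnectionFactory factory =
    new ActiveMQConnectionFactory("vm://localhost");
// a smaller queue prefetch keeps fewer pending messages buffered per consumer
factory.getPrefetchPolicy().setQueuePrefetch(10);

// or per destination, via a destination option:
ActiveMQQueue q2 = new ActiveMQQueue("q2?consumer.prefetchSize=10");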
Reading about slow consumer handling [3] doesn't point me to any option: I can't discard messages.
So I end up considering implementing some sort of timing between sends, or waiting for the message spooling, which I read should come with 4.2...
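The timing-between-sends idea would look something like this on the seed producer (a crude sketch; messagesPerSecond is my own knob, assumed > 0, not an ActiveMQ setting):

import javax.jms.*;

public class ThrottledSeeder {
    // send `count` seeds to q1, pausing between sends to cap the rate
    public static void sendSeeds(Session session, Destination q1,
                                 int count, int messagesPerSecond)
            throws JMSException, InterruptedException {
        MessageProducer producer = session.createProducer(q1);
        long pauseMillis = 1000L / messagesPerSecond;
        for (int i = 0; i < count; i++) {
            producer.send(session.createTextMessage("seed-" + i));
            Thread.sleep(pauseMillis); // simple fixed pacing between sends
        }
    }
}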
Sorry for the long message, but it's hard to express all of the above; there are probably too many options to be considered.
Any help, guideline or hint would be most appreciated; it seems this project is never going to be released :(
For those interested, it's http://www.ripe.net/info/stats/hostcount/hostcount++/
Thanks everybody,
Francesco
[1]: http://www.nabble.com/Consumer-Deadlock-tf2014492.html#a5536571
[2]: I haven't got any exact measure for "too full", but I'd say something like 800k messages, with a heap space of 128MB
[3]: http://activemq.org/site/slow-consumer-handling.html
http://www.nabble.com/file/5167/kahadump.log kahadump.log