On 7/4/06, nlif <[EMAIL PROTECTED]> wrote:

Thanks James. This is very valuable input.

Consumer prefetch - this is a great tip. I was unaware of this ActiveMQ
feature. However, at this point, consumer memory is NOT my problem, since
each consumer will receive one copy of the large message, and they should be
able to handle it.

Chances are a consumer could still receive many messages at once (unless
you only send a message to one consumer and then wait for a response
before ever sending again).

My problem is at the producer side, because it has to
duplicate the messages.

FWIW, using composite destinations you can send a single message to
multiple destinations.

http://incubator.apache.org/activemq/composite-destinations.html

This avoids the duplication on the producer but just pushes the
problem back onto the broker.
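For the record, an ActiveMQ composite destination is just a comma-separated list of destination names passed to the usual create call. A minimal sketch of building such a name (the helper `compositeName` and the queue names are mine for illustration; the resulting string would be handed to `session.createQueue(...)` so one `send()` fans out to every listed queue):

```java
import java.util.List;

public class CompositeDest {
    // Join individual queue names into ActiveMQ's composite form "a,b,c".
    // Passing this string to session.createQueue(...) makes the broker
    // deliver each sent message to every listed queue.
    static String compositeName(List<String> queues) {
        return String.join(",", queues);
    }

    public static void main(String[] args) {
        // prints "orders.billing,orders.audit"
        System.out.println(compositeName(List.of("orders.billing", "orders.audit")));
    }
}
```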


Queue vs Topic - From your reply I understand that while using multiple
queues will indeed result in message duplication, using a topic will not.
And if a durable topic will also take care of the scenario in which not
all of the consumers are reachable at the time of sending, without costing
me memory - then this may be a viable solution.  (You say that durable
topics use the disk, right?)

Yes - they use a RAM cache which can be evicted and reloaded from disk.


The reason I think I have to use selectors when using a topic, is because I
sometimes want to send something to some, but not all, of my consumers, and
the recipients are determined dynamically. For example, I may need to send
message A to consumers 1,3,5 and then message B to consumers 1,3,4. The only
way I can think of is using a single Topic, adding a recipient list to the
message header, and adding selectors to the consumers. Is there any other
way? I would be happy to know.

That sounds fine to me. You'd get the benefits of durable topics but,
in effect, a logical queue per consumer.
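The recipient-list-plus-selector idea above can be sketched in plain Java. The property name `recipients`, the delimiter convention, and the numeric consumer ids are my assumptions, not anything ActiveMQ mandates; the producer would set the string property on each message, and each consumer would subscribe with the `LIKE` selector shown:

```java
import java.util.List;

public class RecipientList {
    // Encode the recipient ids with surrounding commas, e.g. ",1,3,5,",
    // so a LIKE pattern for id 5 cannot accidentally match id 15.
    static String header(List<Integer> ids) {
        StringBuilder sb = new StringBuilder(",");
        for (int id : ids) sb.append(id).append(",");
        return sb.toString();
    }

    // Selector string each consumer would register with,
    // e.g. "recipients LIKE '%,3,%'".
    static String selector(int consumerId) {
        return "recipients LIKE '%," + consumerId + ",%'";
    }

    // Local check mirroring what the broker's LIKE evaluation would do.
    static boolean matches(String headerValue, int consumerId) {
        return headerValue.contains("," + consumerId + ",");
    }

    public static void main(String[] args) {
        String h = header(List.of(1, 3, 5)); // ",1,3,5,"
        System.out.println(selector(3) + " -> matches: " + matches(h, 3));
        System.out.println(selector(4) + " -> matches: " + matches(h, 4));
    }
}
```

The surrounding commas are the important design detail: without them, substring-style matching on bare ids breaks as soon as one id is a prefix of another.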


ActiveMQ JMS Streams - Thanks for this tip! Does this mean that even when
sending a huge file, at any given moment only a small portion of it is in
memory? If so - this can be very helpful to us. Great feature!

Yes! Basically any massive file is split up into chunks (by default
64K) so that any client (producer/consumer) or broker only has to keep
a few messages in RAM at any point in time, so arbitrarily large files
can be exchanged using small amounts of RAM.
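The chunking described above can be sketched as plain splitting logic. This is my illustration of the idea, not the actual JMS Streams implementation (which wraps this behind InputStream/OutputStream adapters so only one chunk is in memory at a time); the constant and method names are mine:

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    static final int CHUNK_SIZE = 64 * 1024; // the 64K default cited above

    // Split a payload into chunkSize-byte pieces; a streaming version
    // would read and send one chunk at a time instead of holding them all.
    static List<byte[]> split(byte[] payload, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += chunkSize) {
            int len = Math.min(chunkSize, payload.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(payload, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] data = new byte[150 * 1024];            // 150K example payload
        List<byte[]> chunks = split(data, CHUNK_SIZE); // -> 64K + 64K + 22K
        System.out.println(chunks.size() + " chunks"); // prints "3 chunks"
    }
}
```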


In fact, one of the things I considered for handling very large messages,
was splitting them to smaller ones, sending them separately, and then using
an Aggregator on the consumer side (I read about this in Gregor Hohpe's
excellent "Enterprise Integration Patterns").

That's pretty much how JMS Streams work :)

However, it seems to me that
when running in a cluster - an aggregator cannot work, since parts of the
message can end up on different machines in the cluster. Is there any way to
split a message to smaller chunks and then ensure they all get to the same
place, even in a cluster?

Yes - JMS Streams :)
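For what it's worth, the Aggregator pattern mentioned above can be sketched as sequence-keyed reassembly. This is my illustration of the EIP pattern, not ActiveMQ's internals; the correlation-id/sequence-number scheme mirrors typical JMS headers but the method names are mine:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class Aggregator {
    // Pending chunks per correlation id, ordered by sequence number so
    // out-of-order arrival is handled.
    private final Map<String, TreeMap<Integer, byte[]>> pending = new HashMap<>();

    // Add one chunk; when expectedCount chunks have arrived for this id,
    // return the reassembled payload, otherwise null.
    byte[] add(String correlationId, int seq, byte[] chunk, int expectedCount) {
        TreeMap<Integer, byte[]> parts =
                pending.computeIfAbsent(correlationId, k -> new TreeMap<>());
        parts.put(seq, chunk);
        if (parts.size() < expectedCount) return null;

        int total = parts.values().stream().mapToInt(b -> b.length).sum();
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] p : parts.values()) {
            System.arraycopy(p, 0, out, off, p.length);
            off += p.length;
        }
        pending.remove(correlationId);
        return out;
    }

    public static void main(String[] args) {
        Aggregator agg = new Aggregator();
        agg.add("msg-1", 1, new byte[]{3, 4}, 2);             // out of order is fine
        byte[] whole = agg.add("msg-1", 0, new byte[]{1, 2}, 2);
        System.out.println(whole.length + " bytes reassembled"); // prints "4 bytes reassembled"
    }
}
```

The cluster problem in the question is exactly why this breaks if chunks land on different consumers: the `pending` map is local state, so all chunks for one correlation id must reach the same machine - which is what JMS Streams arrange for you.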

--

James
-------
http://radio.weblogs.com/0112098/
