Hi,

We want to add a bolt to our topology that will consume tuples from an
upstream bolt and then call a service outside our topology to do some
external processing of each tuple.  Our concern is that the latency of that
call will cause us to lose tuples if they aren't queued up somewhere while
the call is in flight.
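For context, here is roughly the kind of bolt we have in mind.  This is
only a sketch; ExternalServiceClient stands in for whatever client library
we end up using, and the field names are made up:

import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class ExternalCallBolt extends BaseRichBolt {
    private OutputCollector collector;
    private ExternalServiceClient client;   // placeholder for our real client

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        this.client = new ExternalServiceClient();   // hypothetical setup
    }

    @Override
    public void execute(Tuple input) {
        // The external call can take a while; tuples that arrive in the
        // meantime sit in this executor's incoming queue until we return.
        String result = client.process(input.getStringByField("payload"));
        collector.emit(input, new Values(result));   // anchored to the input tuple
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("result"));
    }
}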

From reading this article

http://www.michael-noll.com/blog/2013/06/21/understanding-storm-internal-message-buffers/

it sounds like we can specify the queue depth for input tuples to a bolt.
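If I understand the article correctly, that would mean setting the
buffer-related configs when we build the topology, along these lines (the
values are just the examples from the article, not anything we have tuned):

import backtype.storm.Config;

Config conf = new Config();
// Per-executor incoming queue: tuples waiting for execute() to pick them up.
conf.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 16384);
// Per-executor outgoing queue.
conf.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 16384);
// Per-worker queues for messages moving between workers.
conf.put(Config.TOPOLOGY_RECEIVER_BUFFER_SIZE, 8);
conf.put(Config.TOPOLOGY_TRANSFER_BUFFER_SIZE, 32);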

However, this answer on Stack Overflow

http://stackoverflow.com/questions/19510497/display-results-from-bolts-of-a-storm-cluster-on-browser/19512373#19512373

suggests we should consider putting a queue such as ActiveMQ or Kafka between
our Storm bolts.
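If we went that route, I picture the upstream bolt just publishing each
tuple to a Kafka topic and letting a consumer outside the topology (or a
KafkaSpout feeding a later stage) make the slow external call.  Something
like the following, where the topic name and broker address are placeholders:

import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class KafkaHandoffBolt extends BaseRichBolt {
    private OutputCollector collector;
    private KafkaProducer<String, String> producer;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // Producer is created here (not in the constructor) so it is never serialized.
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker:9092");   // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<String, String>(props);
    }

    @Override
    public void execute(Tuple input) {
        // Hand the tuple off to Kafka; the slow external call happens elsewhere.
        producer.send(new ProducerRecord<String, String>("external-calls",
                input.getStringByField("payload")));
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Nothing is emitted downstream in this sketch.
    }
}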

Is tuple queuing something we need to be concerned with?  If so, which
solution is more scalable?

If someone has done this, can you point me to an example?

Regards,

Pete

-- 
Pete Carlson
Software Developer
Tetra Concepts LLC
