Hi Everyone,

I have a topology in which, for every tuple coming in from a KafkaSpout,
the following occurs:

1. processing in Bolt A generates approximately 20 tuples that are anchored
and sent to Bolt B
2. each anchored, processed tuple is cached in Bolt B and eventually
written to a NoSQL database
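
For what it's worth, here is a minimal sketch of how Storm's acker tracks a tuple tree like the one above: each anchored emit XORs a random 64-bit id into the tree's ack value, and each downstream ack XORs the same id back out, so the value collapses to zero exactly when all ~20 children have been acked. (This is a standalone simulation of the bookkeeping, not actual Storm API code; the class name `AckerModel` is made up for illustration.)

```java
import java.util.Random;

// Simplified model of Storm's acker bookkeeping for one spout tuple.
// Real Storm keeps this value in the acker bolt, keyed by the spout
// tuple's root id; here we just run the XOR arithmetic inline.
public class AckerModel {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        long ackVal = 0L;

        // Bolt A emits ~20 anchored tuples; each anchored emit
        // XORs a fresh random id into the tree's ack value.
        long[] childIds = new long[20];
        for (int i = 0; i < 20; i++) {
            childIds[i] = rnd.nextLong();
            ackVal ^= childIds[i];   // anchor registered
        }

        // Bolt B eventually acks each child (in any order, with any
        // delay); each ack XORs the same id back out.
        for (long id : childIds) {
            ackVal ^= id;            // ack received
        }

        // Zero means the whole tuple tree is complete, at which point
        // the acker tells the KafkaSpout to ack the original tuple.
        System.out.println(ackVal == 0 ? "spout tuple acked" : "pending");
    }
}
```

Because the tracking is a single XOR per event, the fan-out to ~20 children adds essentially no acker overhead; the main thing to watch is that the end-to-end delay stays under the topology's message timeout.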

Question--since roughly 20 tuples must be acked before the anchor tuple
from Kafka is acked, and each will incur some delay, is this a scenario
that guaranteed message processing in Storm is designed to handle? Are
there any best practices that would make this process more efficient or
more easily fault tolerant, or is Storm good to go as-is?

Any guidance or thoughts would be great--thanks! :)

--John
