Tuple replay is done from the Spout; it is not done on a point-to-point
basis between stages the way Apache Flume does it.

Either a tuple is completely processed, i.e. acked by every single bolt in
the pipeline, or it is not; if it is not, it will be replayed by the Spout
(provided the Spout implements replay logic in its fail() method).
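
For example, here is a minimal sketch of such a Spout, keeping its pending
tuples keyed by message id so that fail() can re-emit them. The class and
field names are only for illustration, and the imports assume the
org.apache.storm packages of Storm 1.x:

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class ReplayingSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    // Tuples emitted but not yet fully acked, keyed by message id.
    private Map<UUID, Values> pending;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        this.pending = new ConcurrentHashMap<>();
    }

    @Override
    public void nextTuple() {
        Values values = new Values("some-payload");   // whatever your source produces
        UUID msgId = UUID.randomUUID();
        pending.put(msgId, values);
        // Emitting with a message id is what makes the tuple tree tracked.
        collector.emit(values, msgId);
    }

    @Override
    public void ack(Object msgId) {
        pending.remove(msgId);                        // fully processed, forget it
    }

    @Override
    public void fail(Object msgId) {
        Values values = pending.get(msgId);
        if (values != null) {
            collector.emit(values, msgId);            // replay the whole tuple tree
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("payload"));
    }
}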

Acking has to be done at every stage of the pipeline, so yes: Bolt1 must ack
the tuple it receives in execute(), not just Bolt2.
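
A corresponding sketch of Bolt1 from your topology (names assumed, same
Storm 1.x imports): every emit is anchored to the input tuple, and the input
itself is acked once all the child tuples have been emitted.

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

import java.util.Map;

public class Bolt1 extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        // Anchored emits: passing the input as the first argument ties every
        // child tuple to it, so a failure in Bolt2 fails the whole tree and
        // eventually triggers fail() on the Spout.
        for (int i = 0; i < 3; i++) {
            collector.emit(input, new Values(input.getString(0) + "-" + i));
        }
        // Ack the input in Bolt1 as well; every stage acks its own tuple.
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("payload"));
    }
}

Bolt2 would then simply ack each tuple it receives in execute() after
processing it.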

Here's a post that explains the necessary details of the XOR
ledger: https://bryantsai.com/fault-tolerant-message-processing-in-storm/

On Wed, Nov 30, 2016 at 9:10 PM, Navin Ipe <navin....@searchlighthealth.com>
wrote:

> Apart from the previous question, there's also the question of whether we
> should ack the tuple in execute() of Bolt1? Or is it just sufficient to ack
> it in Bolt2?
>
>
> On Wed, Nov 30, 2016 at 12:28 PM, Navin Ipe <navin.ipe@searchlighthealth.
> com> wrote:
>
>> Hi,
>>
>> Just need a confirmation for this topology:
>>
>> *Spout* ---emit---> *Bolt1* ---emit---> *Bolt2*
>>
>> Spout is BaseRichSpout. Bolt1 and Bolt2 are BaseRichBolt.
>> Spout emits just one tuple per nextTuple() call.
>> Bolt1 anchors to the tuple it received from Spout and emits many
>> different tuple objects.
>>
>> If any of the emits of Bolt1 fails, is there no way for Bolt 1 to re-emit
>> the tuple? Do I have to wait for the topology to figure out that one of
>> Bolt1's tuples failed and then do a re-emit from the Spout?
>>
>> --
>> Regards,
>> Navin
>>
>
>
>
> --
> Regards,
> Navin
>
