You can read about Storm's replay mechanism here. fail(Object msgId) will
be invoked on the spout when your tuple times out; the replay
logic should be implemented there:
http://storm.incubator.apache.org/documentation/Concepts.html
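To make the idea concrete, here is a minimal sketch of the bookkeeping a reliable spout needs. This is plain Java, not the Storm API itself (the class and method names besides ack/fail are illustrative): each emitted tuple is remembered under its message id until ack() arrives, and fail() re-queues it so the next nextTuple() call can emit it again.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

// Illustrative replay bookkeeping for a reliable spout. In a real Storm
// spout you would call collector.emit(values, msgId) and Storm would invoke
// ack(msgId)/fail(msgId) back on the spout; here we model only the state.
public class ReplayTracker {
    private final Map<Integer, Integer> pending = new HashMap<>(); // msgId -> value
    private final Queue<Integer> replayQueue = new LinkedList<>();

    // Record a tuple emitted with this message id.
    public void emitted(int msgId, int value) {
        pending.put(msgId, value);
    }

    // Storm calls ack(msgId) once the whole tuple tree is processed:
    // the tuple is done, so forget it.
    public void ack(int msgId) {
        pending.remove(msgId);
    }

    // Storm calls fail(msgId) on timeout or explicit failure:
    // schedule the tuple for replay.
    public void fail(int msgId) {
        if (pending.containsKey(msgId)) {
            replayQueue.add(msgId);
        }
    }

    // In nextTuple(), re-emit failed tuples before emitting new ones.
    // Returns the value to replay, or null if nothing is queued.
    public Integer nextReplay() {
        Integer msgId = replayQueue.poll();
        return msgId == null ? null : pending.get(msgId);
    }
}
```

Note that none of this happens unless the spout emits with a message id in the first place: `collector.emit(new Values(index))` without a msgId is unanchored, so Storm never calls ack() or fail() for it.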



On Thu, Jun 5, 2014 at 9:24 AM, Nhan Nguyen Ba <[email protected]>
wrote:

> Have you enabled the acker and overridden the fail() method in your Spout?
>
>
> On Thu, Jun 5, 2014 at 9:15 AM, 傅駿浩 <[email protected]> wrote:
>
>> Hi all,
>>
>> I run a simple topology in distributed mode as follows:
>> Spout: collector.emit(new Values(index)); // where index is an integer
>> from 1, 2, ... to 100000; each nextTuple() call emits the next number
>> Bolt: LOG.info("I'm Bolt, index is " + tuple.getInteger(0)); // write the
>> number this bolt receives to my log
>>
>> As the topology starts, I'll see...
>> I'm Bolt, index is 1
>> I'm Bolt, index is 2
>> I'm Bolt, index is 3
>> .
>> .
>> .
>> I'm Bolt, index is 45656
>> ========now, I kill the worker that is running this bolt========
>> kill -9 [pid]
>> ========================================================
>> I find that the supervisor reassigns the bolt task to a new worker on a
>> new port within several seconds, but a lot of data is lost. In other words,
>> the new worker's log starts from, for example, "I'm Bolt, index is 55234"
>> instead of continuing from 45657. In this situation, if my node dies, Storm
>> can't replay the tuples, so my bolt misses data? Thanks
>> for your answer!
>>
>> James Fu
>>
>>
>
