Hi

Sorry for the late reply; I only got time to experiment today, and
realized that forceStartOffsetTime is not accepting a timestamp
(milliseconds) value as a parameter.

This doesn't seem to work. I'm using the Kafka spout from storm-contrib,
and it is a normal Storm topology, not a Trident topology.
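For reference, here is how I understand the accepted values. This is just a plain-Java illustration of the semantics, not the actual storm-kafka API (the real call would be spoutConfig.forceStartOffsetTime(value) on the SpoutConfig):

```java
// Illustration only: maps forceStartOffsetTime values to their documented
// meaning. This is not the storm-kafka API itself.
public class StartOffsetTimeDemo {
    static final long EARLIEST = -2L; // read the topic from the beginning
    static final long LATEST = -1L;   // read only messages arriving from now on

    static String describe(long value) {
        if (value == EARLIEST) return "earliest";
        if (value == LATEST) return "latest";
        // Any positive value is treated as a millisecond timestamp.
        return "messages at or after timestamp " + value;
    }

    public static void main(String[] args) {
        System.out.println(describe(-2L)); // earliest
        System.out.println(describe(-1L)); // latest
        System.out.println(describe(1392600000000L));
    }
}
```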

Regards
Chitra


On Mon, Feb 17, 2014 at 1:26 AM, Chitra Raveendran <
[email protected]> wrote:

> Yes, I tried without the forced offset time parameter, but the topology
> stopped consuming messages.
>
> Regards
> Chitra
>  On Feb 17, 2014 1:24 AM, "P. Taylor Goetz" <[email protected]> wrote:
>
>> If you turn off forceStartOffsetTime, it should resume from the last
>> offset stored in ZooKeeper.
>>
>> - Taylor
>>
>> On Feb 16, 2014, at 12:35 PM, Chitra Raveendran <
>> [email protected]> wrote:
>>
>> Hi
>>
>> So, according to this logic, should I set the timestamp parameter to the
>> time when the topology was stopped?
>>
>> But how do we identify the exact instant when the topology went down, so
>> that Storm could start consuming from there? Is it based on approximation?
>> Or is there some concrete way to find the exact instant when the topology
>> went down?
>>
>> Is there any other parameter which is based on the last offset rather
>> than on time?
>>
>> Regards
>> Chitra
>> On Feb 16, 2014 11:00 PM, "Vinoth Kumar Kannan" <[email protected]>
>> wrote:
>>
>>> The forceStartOffsetTime value can be -2, -1, or a timestamp in
>>> milliseconds:
>>>
>>>    - -1 to read from the latest offset of the topic
>>>    - -2 to read from the beginning
>>>    - a timestamp to read from a specific time
>>>
>>>
>>> On Sun, Feb 16, 2014 at 6:15 PM, Chitra Raveendran <
>>> [email protected]> wrote:
>>>
>>>> Anybody? Any answers?
>>>>
>>>> I'm sure someone has a workaround!
>>>>
>>>> Please help :)
>>>> On Feb 15, 2014 12:11 AM, "Andrey Yegorov" <[email protected]>
>>>> wrote:
>>>>
>>>>> I have exactly the same question.
>>>>> I am using kafka spout from
>>>>> https://github.com/wurstmeister/storm-kafka-0.8-plus.git with kafka
>>>>> 0.8 release and ordinary (non-trident) storm topology.
>>>>>
>>>>> How can I guarantee processing of messages sent while topology was
>>>>> down or while e.g. storm cluster was down for maintenance?
>>>>>
>>>>> ----------
>>>>> Andrey Yegorov
>>>>>
>>>>>
>>>>> On Wed, Feb 12, 2014 at 8:05 AM, Danijel Schiavuzzi <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> Hi Chitra,
>>>>>>
>>>>>> Which Kafka spout version are you exactly using, and what spout type
>>>>>> -- Trident or the ordinary Storm spout?
>>>>>>
>>>>>> I ask because, unfortunately, there are multiple Kafka spout versions
>>>>>> around the web. From my research, your best bet is the one in
>>>>>> storm-contrib if you use Kafka 0.7, and storm-kafka-0.8-plus if you
>>>>>> use Kafka 0.8.
>>>>>>
>>>>>> Best regards,
>>>>>>
>>>>>> Danijel Schiavuzzi
>>>>>> www.schiavuzzi.com
>>>>>>
>>>>>>
>>>>>> On Wed, Feb 12, 2014 at 8:42 AM, Chitra Raveendran <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> Hi
>>>>>>>
>>>>>>> I have a topology in production which uses the default kafka spout,
>>>>>>> I have set this parameter
>>>>>>> *spoutConfig.forceStartOffsetTime(-1);*
>>>>>>>
>>>>>>> Setting this parameter to -1 makes the spout consume from the latest
>>>>>>> message instead of reading from the very beginning of the Kafka topic
>>>>>>> (which would be unnecessary and redundant in my use case).
>>>>>>>
>>>>>>> But in production, whenever a new release goes in, I stop and restart
>>>>>>> the topology, which takes a few seconds to minutes, and I have been
>>>>>>> losing some data during the time the topology is down.
>>>>>>>
>>>>>>> How can I avoid this? I have tried running without the
>>>>>>> forceStartOffsetTime parameter, but that did not work. What am I doing
>>>>>>> wrong, and how can I continue reading from the last offset?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Chitra
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Danijel Schiavuzzi
>>>>>>
>>>>>
>>>>>
>>>
>>>
>>> --
>>> With Regards,
>>> Vinoth Kumar K
>>>
>>
>>
