*bump*

On Dec 15, 2017 11:22 PM, "Lukasz Cwik" <lc...@google.com> wrote:

> +dev@beam.apache.org
>
> On Thu, Dec 14, 2017 at 11:27 PM, Sushil Ks <sushil...@gmail.com> wrote:
>
>> Hi Lukasz,
>>            I am not sure I can reproduce this with the DirectRunner, since
>> the issue depends on Flink's retry and checkpointing mechanism. In other
>> words, the problem I am facing is: if an exception occurs in an operation
>> after the GroupByKey and the pipeline restarts, those particular elements
>> are not processed in the next run.
>>
>> On Wed, Dec 13, 2017 at 4:01 AM, Lukasz Cwik <lc...@google.com> wrote:
>>
>>> That seems incorrect. Please file a JIRA and provide an example + data
>>> that shows the error using the DirectRunner.
>>>
>>> On Tue, Dec 12, 2017 at 2:51 AM, Sushil Ks <sushil...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>          I am running a pipeline with a fixed window and GroupByKey on the
>>>> FlinkRunner. I have noticed that if an exception and restart happen before
>>>> the GroupByKey operation, the Kafka consumer replays the data from the
>>>> corresponding offset; however, if an exception occurs after the GroupByKey
>>>> and the pipeline restarts, Kafka consumes from the latest offset. Is this
>>>> expected?
>>>>
>>>> Regards,
>>>> Sushil Ks
>>>>
>>>
>>>
>>
>
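
For reference, a minimal sketch of the pipeline shape described in the thread
(KafkaIO source, fixed windows, GroupByKey, then a step that may throw after
the GroupByKey). This is not the original pipeline; the topic name, bootstrap
servers, and the failing ParDo body are assumptions made purely for
illustration:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.kafka.KafkaIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.GroupByKey;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.transforms.windowing.FixedWindows;
    import org.apache.beam.sdk.transforms.windowing.Window;
    import org.apache.beam.sdk.values.KV;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.joda.time.Duration;

    public class PostGroupByKeyFailureSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(KafkaIO.<String, String>read()
                .withBootstrapServers("localhost:9092")   // assumption
                .withTopic("events")                      // assumption
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class)
                .withoutMetadata())                       // yields KV<String, String>
            // Fixed windows ahead of the GroupByKey, as in the thread.
            .apply(Window.<KV<String, String>>into(
                FixedWindows.of(Duration.standardMinutes(1))))
            .apply(GroupByKey.create())
            // A failure here, i.e. after the GroupByKey, is where the reporter
            // observed elements not being reprocessed after the Flink restart.
            .apply("PostGroupByKeyStep", ParDo.of(
                new DoFn<KV<String, Iterable<String>>, Void>() {
                  @ProcessElement
                  public void processElement(ProcessContext c) {
                    throw new RuntimeException("simulated post-GroupByKey failure");
                  }
                }));

        p.run();
      }
    }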
