Thanks Gordon
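Just to check that I've read your suggestion correctly: skipping a corrupt record at deserialization time would look roughly like the sketch below (untested; the String decoding is only a stand-in for whatever real parsing can actually fail, and the exact DeserializationSchema package depends on the Flink version):

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;

// Sketch: return null for records that cannot be deserialized, so the
// Kafka consumer drops them instead of failing the job.
public class SkipCorruptStringSchema implements DeserializationSchema<String> {

    @Override
    public String deserialize(byte[] message) throws IOException {
        try {
            // Stand-in for the real parsing that can actually fail (e.g. JSON).
            return new String(message, StandardCharsets.UTF_8);
        } catch (Exception e) {
            // Corrupt record: returning null tells the consumer to skip it.
            return null;
        }
    }

    @Override
    public boolean isEndOfStream(String nextElement) {
        return false;
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO;
    }
}

If that's the idea, it covers records that are corrupt at deserialization time nicely.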
But what if there is an uncaught exception while processing a record (during
normal job execution, i.e. after deserialization)?
Once the failure-rate restart strategy hits its limit, the job will fail, and on
re-run it would start at the same offset again, right?
Is there a way to avoid this and 'automatically' start at offset+1?
Or is the recipe to manually retrieve the record at partitionX/offsetY
for the consumer group and then restart?
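For example, on the re-run I'm imagining something roughly like this, with the topic, partition and offset hard-coded purely for illustration (the actual values, and the exact package names, depend on the setup and Flink version):

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class RestartPastBadRecord {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // illustrative
        props.setProperty("group.id", "my-group");                // illustrative

        FlinkKafkaConsumer010<String> consumer =
                new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props);

        // Resume partitionX at offsetY + 1, i.e. one record past the bad one.
        // Partition 0 and offset 43 are placeholders, not real values.
        Map<KafkaTopicPartition, Long> startOffsets = new HashMap<>();
        startOffsets.put(new KafkaTopicPartition("my-topic", 0), 43L);
        consumer.setStartFromSpecificOffsets(startOffsets);

        env.addSource(consumer).print();
        env.execute("restart-past-bad-record");
    }
}

That works as a one-off, but it is a manual step for every bad record, hence the question about doing it 'automatically'.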

Best regards
-Robert

 >-------- Original Message --------
 >From: "Tzu-Li (Gordon) Tai" tzuli...@apache.org
 >Subject: Re: FlinkKafkaConsumer010 does not start from the next record on startup from offsets in Kafka
 >To: user@flink.apache.org
 >Sent: 22.11.2017 14:57

> Hi Robert,
>
> As expected with exactly-once guarantees, a record that caused a Flink job
> to fail will be attempted to be reprocessed on the restart of the job.
>
> For some specific "corrupt" record that causes the job to fall into a
> fail-and-restart loop, there is a way to let the Kafka consumer skip that
> specific "corrupt" record. To do that, return null when attempting to
> deserialize the corrupted record (specifically, that would be the
> `deserialize` method on the provided `DeserializationSchema`).
>
> Cheers,
> Gordon
>
> --
> Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
