Thanks for reporting the issue. I think this warrants a 0.9.2 release after the 
fix is in.

– Ufuk

> On 10 Sep 2015, at 16:52, Gwenhael Pasquiers 
> <gwenhael.pasqui...@ericsson.com> wrote:
> 
> Thanks,
>  
> In the meantime we’ll go back to 0.9.0 :)
>  
> From: Robert Metzger [mailto:rmetz...@apache.org] 
> Sent: Thursday, 10 September 2015 16:49
> To: user@flink.apache.org
> Subject: Re: Flink 0.9.1 Kafka 0.8.1
>  
> Hi Gwen,
>  
> Sorry that you ran into this issue. The implementation of the Kafka consumer 
> was changed completely in 0.9.1 because there were some corner-case issues 
> with the exactly-once guarantees in 0.9.0.
>  
> I'll look into the issue immediately.
>  
>  
> On Thu, Sep 10, 2015 at 4:26 PM, Gwenhael Pasquiers 
> <gwenhael.pasqui...@ericsson.com> wrote:
> Hi everyone,
>  
> We’re trying to consume from a 0.8.1 Kafka on Flink 0.9.1 and we’ve run into 
> the following issue:
>  
> My offset became OutOfRange; now when I start my job it loops on the 
> OutOfRangeException, no matter what value auto.offset.reset is set to 
> (earliest, latest, largest, smallest).
>  
> It looks like the consumer doesn’t reset the invalid offset and immediately 
> goes into error; Flink then restarts the job, which fails again, and so on.
>  
> Do you have an idea of what is wrong, or could it be an issue in Flink?
>  
> B.R.
>  
> Gwenhaël PASQUIERS
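
For reference, the setup being described would typically look roughly like the 
sketch below. This is a minimal illustration, not the actual job: the class and 
property names are recalled from the 0.9.x Kafka connector (FlinkKafkaConsumer081, 
SimpleStringSchema) and may differ, and the hosts, group id and topic are placeholders.

import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer081;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaResetSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Standard Kafka 0.8 consumer properties. "auto.offset.reset" is the
        // setting reported as being ignored once the stored offset is out of
        // range; "smallest" / "largest" are the values Kafka 0.8 understands.
        Properties props = new Properties();
        props.setProperty("zookeeper.connect", "zookeeper-host:2181"); // placeholder host
        props.setProperty("bootstrap.servers", "kafka-host:9092");     // placeholder host
        props.setProperty("group.id", "flink-consumer-group");         // placeholder group id
        props.setProperty("auto.offset.reset", "smallest");

        env.addSource(new FlinkKafkaConsumer081<String>(
                "my-topic",               // placeholder topic name
                new SimpleStringSchema(), // read each record as a String
                props))
           .print();

        env.execute("Kafka 0.8.1 consumer sketch");
    }
}

The expectation in the thread is that, when the stored offset is out of range, the 
consumer falls back to whatever auto.offset.reset specifies instead of failing the 
job repeatedly.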
