Thanks for the updates and testing efforts on this!

I’m sorry that I haven’t yet found the chance to look closely into the
testing scenarios you’ve listed.
But please keep us updated on this thread once you’ve also tested with the
Cloudera build.

One other suggestion for your test, to make sure that a failed record is
actually retried: you can add a dummy verifying operator right before the Kafka
sink.
That way you should at least be able to rule out the possibility that the
Kafka sink is incorrectly ignoring failed records when checkpointing. From
another look at the Kafka sink code, I’m pretty sure this shouldn’t be the case.

Many thanks,
Gordon

On 4 June 2017 at 2:14:40 PM, ninad (nni...@gmail.com) wrote:

I tested this with the standalone cluster, and I don't see this problem. So
the problem could be that we haven't built Flink against Cloudera Hadoop? I
will test it out.



--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Fink-KafkaProducer-Data-Loss-tp11413p13477.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.
