Github user pnowojski commented on a diff in the pull request:

    https://github.com/apache/flink/pull/4910#discussion_r147655000
  
    --- Diff: flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011.java ---
    @@ -922,6 +945,22 @@ private void readObject(java.io.ObjectInputStream in) throws IOException, ClassN
                producersPool = new ProducersPool();
        }
     
    +   /**
    +    * Disables the propagation of exceptions thrown when committing presumably timed out Kafka
    +    * transactions during recovery of the job. If a Kafka transaction is timed out, a commit will
    +    * never be successful. Hence, use this feature to avoid recovery loops of the Job. Exceptions
    +    * will still be logged to inform the user that data loss might have occurred.
    +    *
    +    * <p>Note that we use {@link System#currentTimeMillis()} to track the age of a transaction.
    +    * Moreover, only exceptions thrown during the recovery are caught, i.e., the producer will
    +    * attempt at least one commit of the transaction before giving up.</p>
    +    */
    +   @Override
    +   public FlinkKafkaProducer011<IN> disableFailurePropagationAfterTransactionTimeout() {
    --- End diff ---
    
    nit: Please move this method to the top, somewhere around `setLogFailuresOnly`
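
    For context, a minimal usage sketch of the method discussed in this diff might look as follows. Only `disableFailurePropagationAfterTransactionTimeout()` is taken from the diff (its name may differ in the finally merged code); the constructor variant, schema classes, topic name, broker address, and property values are assumptions for illustration, and package locations of the schema classes may vary between Flink versions.

        import java.util.Properties;

        import org.apache.flink.streaming.api.datastream.DataStream;
        import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
        import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
        import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;
        import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

        public class Kafka011ProducerSketch {

            public static void main(String[] args) throws Exception {
                StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
                env.enableCheckpointing(5_000); // transactional commits happen on checkpoints

                DataStream<String> stream = env.fromElements("a", "b", "c"); // assumed dummy source

                Properties producerConfig = new Properties();
                producerConfig.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
                producerConfig.setProperty("transaction.timeout.ms", "900000");    // assumed example value

                FlinkKafkaProducer011<String> producer = new FlinkKafkaProducer011<>(
                        "example-topic",                                           // assumed topic name
                        new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
                        producerConfig,
                        FlinkKafkaProducer011.Semantic.EXACTLY_ONCE);

                // Method under review: once a transaction has outlived its timeout, committing it
                // during recovery can never succeed, so log the failure instead of rethrowing it
                // and sending the job into a recovery loop.
                producer.disableFailurePropagationAfterTransactionTimeout();

                stream.addSink(producer);
                env.execute("kafka-011-producer-sketch");
            }
        }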

