zentol commented on a change in pull request #16941:
URL: https://github.com/apache/flink/pull/16941#discussion_r702794047



##########
File path: docs/content/docs/connectors/datastream/kafka.md
##########
@@ -543,4 +543,11 @@ It also can be circumvented by changing `retries` property in the producer settings.
 However this might cause reordering of messages,
 which in turn if undesired can be circumvented by setting `max.in.flight.requests.per.connection` to 1.
 
+### ProducerFencedException
+
+The reason for this exception is most likely a transaction timeout on the broker side. With the implementation of
+[KAFKA-6119](https://issues.apache.org/jira/browse/KAFKA-6119), the `(producerId, epoch)` will be fenced off
+after a transaction timeout and all of its pending transactions are aborted (each `transactional.id` is
+mapped to a single `producerId`; this is described in more detail in the following [blog post](https://www.confluent.io/blog/simplified-robust-exactly-one-semantics-in-kafka-2-5/)).

Review comment:
       What I'm missing here is some form of conclusion for the user.
   
   Is it something that they should expect to happen regularly (and thus ignore)?
   Is there some config option in Flink/the Kafka connector to mitigate it?
   Or is it entirely an issue on the Kafka side?
   
   I skimmed the blog post, but couldn't quickly find an answer to that.
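One common mitigation (not stated in the diff itself, so treat this as an assumption) is to make sure the producer's `transaction.timeout.ms` comfortably exceeds the job's checkpoint interval, so the broker does not abort an in-flight transaction and fence the `(producerId, epoch)` between checkpoints. A minimal sketch of building such producer properties; `producerProps` is a hypothetical helper and `localhost:9092` is a placeholder broker address:

```java
import java.util.Properties;

public class TransactionTimeoutExample {

    // Hypothetical helper: derive producer properties whose transaction timeout
    // is a multiple of the expected checkpoint interval, so transactions that
    // span a checkpoint are not timed out (and the producer fenced) by the broker.
    static Properties producerProps(long checkpointIntervalMs) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder

        // transaction.timeout.ms must be at least the checkpoint interval;
        // here we use 3x the interval with a 60 s floor (arbitrary safety margin).
        long txTimeoutMs = Math.max(checkpointIntervalMs * 3, 60_000L);
        props.setProperty("transaction.timeout.ms", Long.toString(txTimeoutMs));
        return props;
    }

    public static void main(String[] args) {
        // With 10-minute checkpoints, the transaction timeout becomes 30 minutes.
        Properties props = producerProps(600_000L);
        System.out.println(props.getProperty("transaction.timeout.ms"));
    }
}
```

Note that the broker caps this value via `transaction.max.timeout.ms`, so raising the producer-side setting may also require raising the broker-side maximum.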




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

