dmvk commented on a change in pull request #16941: URL: https://github.com/apache/flink/pull/16941#discussion_r702800445
########## File path: docs/content/docs/connectors/datastream/kafka.md ##########

```diff
@@ -543,4 +543,11 @@ It also can be circumvented by changing `retries` property in the producer setti
 However this might cause reordering of messages, which in turn if undesired can be circumvented by setting
 `max.in.flight.requests.per.connection` to 1.
+
+### ProducerFencedException
+
+The reason for this exception is most likely a transaction timeout on the broker side. With the implementation of
+[KAFKA-6119](https://issues.apache.org/jira/browse/KAFKA-6119), the `(producerId, epoch)` will be fenced off
+after a transaction timeout and all of its pending transactions are aborted (each `transactional.id` is
+mapped to a single `producerId`; this is described in more detail in the following [blog post](https://www.confluent.io/blog/simplified-robust-exactly-one-semantics-in-kafka-2-5/)).
```

Review comment:

This should happen only when you're trying to restore from a snapshot / checkpoint that is older than the transaction timeout (with `DeliveryGuarantee.EXACTLY_ONCE`). In that case the transaction has already been automatically aborted on the Kafka broker, and you've most likely lost all of the uncommitted data. This is already mentioned above in the `DeliveryGuarantee.EXACTLY_ONCE` section.

There are unfortunately no concrete steps the user can follow here, so providing more context is probably the best we can do.
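To make the `DeliveryGuarantee.EXACTLY_ONCE` context concrete, here is a hedged sketch of a sink configuration where the transaction timeout is raised so it exceeds the expected checkpoint/restart gap. It assumes the `KafkaSink` builder API from `flink-connector-kafka`; the broker address, topic, prefix, and the `900000` ms value are all illustrative, not recommendations:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

// Sketch: with EXACTLY_ONCE, restoring from a checkpoint older than the
// producer's transaction timeout means the broker has already aborted the
// transaction (KAFKA-6119 fencing), surfacing as ProducerFencedException.
// Raising transaction.timeout.ms widens the window in which a restore can
// still commit; the broker caps it via transaction.max.timeout.ms.
KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("broker:9092")                 // illustrative address
        .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
        .setTransactionalIdPrefix("my-app")                 // each transactional.id maps to one producerId
        .setProperty("transaction.timeout.ms", "900000")    // 15 min; must not exceed the broker cap
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("my-topic")                       // illustrative topic
                .setValueSerializationSchema(new SimpleStringSchema())
                .build())
        .build();
```

This is a configuration fragment, not a fix: if the restored checkpoint is older than the (possibly raised) timeout, the uncommitted data is still lost, exactly as the comment says.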
