[ 
https://issues.apache.org/jira/browse/FLINK-28099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-28099.
----------------------------------
    Resolution: Information Provided

[~Krjk] Thanks for opening a ticket. The error occurs because records are put 
into the producer queue faster than the client can send them, so batches expire 
before they are delivered. You can find a detailed explanation and solution in 
https://stackoverflow.com/questions/56807188/how-to-fix-kafka-common-errors-timeoutexception-expiring-1-records-xxx-ms-has
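The usual mitigation described there is to give batches more time before they 
expire and to reduce per-record overhead. A minimal sketch of the relevant 
producer properties; the concrete values below are illustrative assumptions, 
not recommendations, and must be tuned to your throughput and broker latency 
(in Flink 1.14 such a Properties object can be passed to the KafkaSink builder 
via its producer-config setter):

```java
import java.util.Properties;

public class ProducerTuning {

    // Hypothetical tuning for batch-expiry TimeoutExceptions.
    // Values here are example assumptions, not verified defaults for any workload.
    static Properties timeoutTolerantProducerConfig() {
        Properties props = new Properties();
        // Allow more time before an unsent batch is expired (broker default: 120000 ms).
        props.setProperty("delivery.timeout.ms", "600000");
        // Give individual requests more time to complete.
        props.setProperty("request.timeout.ms", "60000");
        // Larger batches with a short linger reduce the number of in-flight requests.
        props.setProperty("batch.size", "65536");
        props.setProperty("linger.ms", "50");
        return props;
    }

    public static void main(String[] args) {
        Properties props = timeoutTolerantProducerConfig();
        // Print the expiry window actually configured.
        System.out.println(props.getProperty("delivery.timeout.ms"));
    }
}
```

If the sink still cannot keep up, the root cause is usually broker-side 
(network or partition leadership issues) or simply producing faster than the 
cluster can absorb, and no client timeout will fix that.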

The implementation details of the KafkaSink differ quite a lot from those of 
the FlinkKafkaProducer, so the same configuration can behave differently. 

For future questions, please check https://flink.apache.org/community.html on 
how to get help. Jira tickets are meant for bugs or new features, not for 
support questions; those are better suited for the user mailing lists, Slack, 
or Stack Overflow. 

> KafkaSink timeout exception
> ---------------------------
>
>                 Key: FLINK-28099
>                 URL: https://issues.apache.org/jira/browse/FLINK-28099
>             Project: Flink
>          Issue Type: Bug
>         Environment: Flink 1.14.3
>            Reporter: Marina
>            Priority: Major
>
> Hello!
> I'm trying to replace FlinkKafkaProducer with KafkaSink, but jobs fail with 
> this exception:
> Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 10 
> record(s) for enriched_flow-21:120015 ms has passed since batch creation
> so I have two questions:
> 1) Why did jobs work fine with FlinkKafkaProducer but fail with KafkaSink 
> using the same configuration?
> 2) How can we make sure the Flink jobs won't fail with that exception? We 
> use DeliveryGuarantee.NONE because we don't have checkpoints yet, so it's 
> OK if there is some data loss.
> Thanks in advance!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
