[
https://issues.apache.org/jira/browse/FLINK-29711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17622124#comment-17622124
]
Durgesh Mishra edited comment on FLINK-29711 at 10/21/22 8:36 AM:
------------------------------------------------------------------
Hello [~martijnvisser] We created one Flink job and one Azure Event Hub. The Flink job processes real-time data published to the Azure Event Hub, and after the job has been running continuously for about 10 hours, the above exception occurs. The following configuration was used:
# Checkpoint configuration
checkpoints.interval=240000
checkpoints.minPauseBetweenCheckpoints=120000
checkpoints.timeout=110000
# Common Flink-Kafka-Connector (source and sink) configuration
allow.auto.create.topics=false
auto.offset.reset=latest
request.timeout.ms=60000
transaction.timeout.ms=60000
kafka.semantic=1
kafka.internalProducerPoolSize=5
# For reducing the Kafka timeout
max.block.ms=5000
# For increasing the metadata fetch time
metadata.max.idle.ms=180000
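To make the setup concrete, here is a minimal sketch (not the actual job code) of how these checkpoint settings and producer overrides could be applied with the Flink 1.14 KafkaSink. The topic name "notification" is assumed from the issue title, and the Event Hubs bootstrap server is a placeholder:
{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class NotificationJobSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint settings taken from the configuration above.
        env.enableCheckpointing(240000);                                 // checkpoints.interval
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(120000); // checkpoints.minPauseBetweenCheckpoints
        env.getCheckpointConfig().setCheckpointTimeout(110000);          // checkpoints.timeout

        // Kafka sink with the producer overrides from the configuration above.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                // Placeholder Event Hubs Kafka endpoint, not the real namespace.
                .setBootstrapServers("<eventhubs-namespace>.servicebus.windows.net:9093")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("notification")                        // assumed from the issue title
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)   // kafka.semantic=1
                .setProperty("request.timeout.ms", "60000")
                .setProperty("transaction.timeout.ms", "60000")
                .setProperty("max.block.ms", "5000")
                .setProperty("metadata.max.idle.ms", "180000")
                .build();

        // env.fromSource(...).sinkTo(sink); env.execute(); -- source side omitted here.
    }
}
{code}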
> Topic notification not present in metadata after 60000 ms.
> ----------------------------------------------------------
>
> Key: FLINK-29711
> URL: https://issues.apache.org/jira/browse/FLINK-29711
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Kafka
> Affects Versions: 1.14.4, 1.14.6
> Reporter: Durgesh Mishra
> Priority: Major
>
> Failed to send data to Kafka null with FlinkKafkaInternalProducer{transactionalId='null', inTransaction=false, closed=false}
> at org.apache.flink.connector.kafka.sink.KafkaWriter$WriterCallback.throwException(KafkaWriter.java:405)
> at org.apache.flink.connector.kafka.sink.KafkaWriter$WriterCallback.lambda$onCompletion$0(KafkaWriter.java:391)
> at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
> at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90)
> at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsNonBlocking(MailboxProcessor.java:353)
> at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:317)
> at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:201)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)
> at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
> at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
> at java.base/java.lang.Thread.run(Unknown Source)