[
https://issues.apache.org/jira/browse/FLINK-29545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17614698#comment-17614698
]
xiaogang zhou commented on FLINK-29545:
---------------------------------------
I collected some information when the blocking happens. The blocked thread is:
"Source: Custom Source (10/40)#0" Id=67 BLOCKED on java.lang.Object@6f54b364
owned by "Legacy Source Thread - Source: Custom Source (10/40)#0" Id=81
at
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
- blocked on java.lang.Object@6f54b364
at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90)
at
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:344)
at
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:330)
at
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:202)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:684)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.executeInvoke(StreamTask.java:639)
at
org.apache.flink.streaming.runtime.tasks.StreamTask$$Lambda$388/2041732781.run(Unknown
Source)
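The dump above shows the mailbox thread BLOCKED on a monitor that the legacy source thread owns. As a minimal sketch (not Flink code; thread names and the shared lock object are only illustrative), the contention pattern looks like this: the source thread holds the checkpoint lock while emitting, so a checkpoint action that must synchronize on the same lock cannot run until the source releases it.

```java
// Hypothetical illustration of the lock contention seen in the thread dump:
// a "legacy source" thread holds a shared checkpoint lock while the
// "mailbox" thread blocks trying to acquire it to run a checkpoint action.
public class CheckpointLockContention {
    static final Object checkpointLock = new Object(); // stands in for the shared lock

    // Observes the mailbox thread's state while the source still holds the lock.
    static Thread.State demo() throws InterruptedException {
        Thread legacySource = new Thread(() -> {
            synchronized (checkpointLock) { // source emits records while holding the lock
                sleep(300);                 // stands in for a long emit/flush
            }
        }, "Legacy Source Thread");

        Thread mailbox = new Thread(() -> {
            synchronized (checkpointLock) { // the checkpoint action needs the same lock
                // triggerCheckpointAsyncInMailbox would run here
            }
        }, "Mailbox Thread");

        legacySource.start();
        sleep(100);                         // let the source grab the lock first
        mailbox.start();
        sleep(100);                         // give the mailbox thread time to block
        Thread.State state = mailbox.getState(); // BLOCKED while the source holds the lock
        legacySource.join();
        mailbox.join();
        return state;
    }

    static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("mailbox state while source holds lock: " + demo());
    }
}
```

If the source holds the lock for a long time (e.g. while blocked on backpressured output), the checkpoint mail sits in this BLOCKED state, which would match the long delay observed before triggerCheckpointAsyncInMailbox runs.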
> kafka consuming stop when trigger first checkpoint
> --------------------------------------------------
>
> Key: FLINK-29545
> URL: https://issues.apache.org/jira/browse/FLINK-29545
> Project: Flink
> Issue Type: Bug
> Components: Runtime / Checkpointing, Runtime / Network
> Affects Versions: 1.13.3
> Reporter: xiaogang zhou
> Priority: Critical
> Attachments: backpressure 100 busy 0.png, task acknowledge na.png,
> task dag.png
>
>
> The task DAG is shown in the attached file. When the task starts consuming
> from the earliest offset, it stops consuming when the first checkpoint is triggered.
>
> Is this normal? The sink shows 0 busy, and the second operator shows 100 backpressure.
>
> Checking the checkpoint summary, we can see that some of the subtasks show n/a.
> I tried to debug this issue and found that in
> triggerCheckpointAsync, it took a long time before
> triggerCheckpointAsyncInMailbox was called.
>
>
> This looks like it has something to do with
> logCheckpointProcessingDelay. Is there any fix for this issue?
>
>
> Can anybody help me with this issue?
>
> Thanks.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)