[ https://issues.apache.org/jira/browse/FLINK-29545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17620377#comment-17620377 ]

Piotr Nowojski commented on FLINK-29545:
----------------------------------------

Huh. On the screenshots that you attached [~zhoujira86], the source shows as 
"Custom Source", which suggests it wasn't Kafka? Are those screenshots from a 
different job?

If the problem also happens with KafkaSource, I wouldn't expect a bug there 
(but who knows). Anyway, I would still point you toward the same thing I 
wrote before:
{quote}
Nevertheless, I would dig deeper into why, in this screenshot (below), those 4 
subtasks haven't finished the checkpoint. It looks like they might have 
deadlocked. You could, for example, share a thread dump from a task manager 
that is running one of those source subtasks (and tell us the name/subtask ID 
of the problematic subtask).
{quote}
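To capture such a thread dump, a minimal sketch (assuming the JDK's jps/jstack tools are on the PATH of the TaskManager host, and that the TaskManager main class is TaskManagerRunner, which may differ by Flink version/deployment):

```shell
# Find the TaskManager JVM's PID by its main class name, then dump all threads.
TM_PID=$(jps -m | grep TaskManagerRunner | awk '{print $1}')
jstack "$TM_PID" > taskmanager-threads.txt

# Inspect source task threads for frames that look blocked/deadlocked,
# e.g. waiting inside checkpoint- or mailbox-related methods.
grep -A 20 "Source:" taskmanager-threads.txt
```

If the source subtasks are stuck, their threads will typically show up as BLOCKED or WAITING with a stack trace pointing at the contended lock or mailbox.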

> Kafka consumption stops when the first checkpoint triggers
> ----------------------------------------------------------
>
>                 Key: FLINK-29545
>                 URL: https://issues.apache.org/jira/browse/FLINK-29545
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Checkpointing, Runtime / Network
>    Affects Versions: 1.13.3
>            Reporter: xiaogang zhou
>            Priority: Critical
>              Labels: pull-request-available
>         Attachments: backpressure 100 busy 0.png, task acknowledge na.png, 
> task dag.png
>
>
> The task DAG is as in the attached file. The task starts consuming from the 
> earliest offset and stops when the first checkpoint triggers.
>  
> Is this normal? The sink is 0% busy while the second operator shows 100% 
> backpressure.
>  
> Checking the checkpoint summary, some of the subtasks show n/a.
> I tried to debug this issue and found that in 
> triggerCheckpointAsync, the call to 
> triggerCheckpointAsyncInMailbox took a long time.
>  
> It looks like this has something to do with 
> logCheckpointProcessingDelay. Is there any fix for this issue?
>  
> Can anybody help me with this issue?
>  
> Thanks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
