Zhu Zhu commented on FLINK-16728:

Hi [~lilyevsky], it is intentional to shut down a TaskManager if task 
cancellation cannot finish within the timeout. This triggers a failure and 
forces the job to recover from it rather than stay stuck. So what matters 
is actually why the task is stuck, both in data processing and in task 
cancellation.
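For reference, the watchdog behavior described above is governed by two configuration keys; as a hedged sketch (values below are the documented Flink defaults, shown for illustration, not a recommendation):

```yaml
# flink-conf.yaml (illustrative; these are the documented defaults)
# How often the canceler re-interrupts a task that is still cancelling.
task.cancellation.interval: 30000
# After this many milliseconds of unfinished cancellation, the
# TaskManager process is killed to force failover. Setting it to 0
# disables the fatal TaskManager shutdown entirely.
task.cancellation.timeout: 180000
```

Disabling the timeout only hides the symptom; the stuck task would then hang forever instead of triggering recovery.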

From the log you attached, I think the deeper stacks of the blocked 
tasks are:


So it looks like the task is blocked in Elasticsearch flushing and thus does 
not respond to the task cancellation request.
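To illustrate why this matters: Flink's canceler unblocks a task by interrupting its thread, so a flush that swallows or never reaches an interruptible wait will pin the task until the cancellation timeout kills the TaskManager. The following is a generic, self-contained sketch (not Flink or Elasticsearch connector code) of a flush wait that stays responsive to interruption:

```java
// Generic illustration (not Flink source): a blocking flush loop that
// honors Thread.interrupt(), which is what Flink's canceler thread
// relies on to unblock a task before task.cancellation.timeout fires.
public final class InterruptibleFlush {

    /** Waits up to pendingMillis for a flush, aborting on interrupt. */
    static boolean awaitFlush(long pendingMillis) {
        long deadline = System.currentTimeMillis() + pendingMillis;
        while (System.currentTimeMillis() < deadline) {
            try {
                // Stand-in for waiting on an in-flight bulk request.
                Thread.sleep(10);
            } catch (InterruptedException e) {
                // Preserve the interrupt flag and give up the flush so
                // cancellation can proceed instead of hanging forever.
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true; // flush window elapsed without interruption
    }

    public static void main(String[] args) throws Exception {
        Thread worker =
                new Thread(() -> System.out.println("flushed=" + awaitFlush(5_000)));
        worker.start();
        Thread.sleep(50);
        worker.interrupt(); // what the canceler thread does to a stuck task
        worker.join();
        // prints: flushed=false
    }
}
```

A sink whose flush loops or waits uninterruptibly (for example, retrying a bulk request indefinitely without checking the interrupt flag) shows exactly the symptom in this ticket.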

> Taskmanager dies after job got stuck and canceling fails
> --------------------------------------------------------
>                 Key: FLINK-16728
>                 URL: https://issues.apache.org/jira/browse/FLINK-16728
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: 1.10.0
>            Reporter: Leonid Ilyevsky
>            Priority: Major
>         Attachments: taskmanager.log.20200323.gz
> At some point I noticed that a few jobs got stuck (they basically stopped 
> processing the messages, I could detect this watching the expected output), 
> so I tried to cancel them.
> The cancel operation failed, complaining that the job got stuck at 
> StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.run(StreamTaskActionExecutor.java:86)
> and then the whole taskmanager shut down.
> See the attached log.
> This is actually happening practically every day in our staging environment 
> where we are testing Flink 1.10.0.
