[
https://issues.apache.org/jira/browse/FLINK-34557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825893#comment-17825893
]
tanliang commented on FLINK-34557:
----------------------------------
hi [~mapohl], thanks for your comment, and I'm sorry for taking so long to
reply.
The JobManager log, like the other evidence, can be reproduced 100% in some
scenarios. For example, you can submit a WordCount job in application mode and
then kill it with the `yarn application -kill` command. At that point the
leftover znode in ZooKeeper is not deleted, and files remain in two different
directories on HDFS. Similarly, you can deliberately create situations where
jar packages are missing or dependencies conflict. For example, submit a
simple SQL job comparable to WordCount in application mode with several
table-related jars intentionally missing. The job will not keep running after
it is submitted to the YARN cluster; YARN restarts it repeatedly, but the last
attempt still fails because of the missing packages. After the job ends, the
znodes and files are again left uncleaned.
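To make the check concrete, here is a minimal sketch of how I verify the
residue after the kill. The quorum address and the exact ZooKeeper/HDFS paths
are placeholders taken from this report, not fixed constants; adjust them to
your cluster:
{code:java}
// Minimal residue check after `yarn application -kill <appId>`.
// "zk-host:2181", "/flink", "/.flink" and "/flink/recovery" are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.zookeeper.ZooKeeper;

public class ResidueCheck {
    public static void main(String[] args) throws Exception {
        String appId = args[0]; // e.g. application_1709270000000_0001

        // 1) Leftover znodes under the Flink HA root.
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 30_000, event -> {});
        for (String child : zk.getChildren("/flink", false)) {
            System.out.println("leftover znode: /flink/" + child);
        }
        zk.close();

        // 2) Leftover files in the two HDFS directories named above.
        FileSystem fs = FileSystem.get(new Configuration());
        for (String dir : new String[]{"/.flink/" + appId, "/flink/recovery/" + appId}) {
            Path p = new Path(dir);
            System.out.println(p + (fs.exists(p) ? " -> still exists" : " -> already cleaned"));
        }
        fs.close();
    }
}
{code}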
Regarding the leftover znodes and uncleaned HDFS files: if the job runs
normally to completion, or ends abnormally during the client submission
phase, these data nodes and HDFS files are deleted directly, because the
Flink framework covers those cases. But in the two scenarios I described,
residue can occur, so I think this aspect also needs improvement. Znodes and
HDFS files generated by Flink should not remain after the job has finally
completed.
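As a stop-gap we clean the residue by hand. A minimal sketch, again with
placeholder addresses and paths, and assuming the Curator and Hadoop clients
are on the classpath:
{code:java}
// Manual cleanup of the residue described above. Paths and the quorum
// address are placeholders from this report; double-check them before
// running, since this deletes recursively.
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ManualCleanup {
    public static void main(String[] args) throws Exception {
        String appId = args[0];

        // Drop the leftover HA znode subtree for this application.
        CuratorFramework zk = CuratorFrameworkFactory.newClient(
                "zk-host:2181", new ExponentialBackoffRetry(1000, 3));
        zk.start();
        String znode = "/flink/" + appId;
        if (zk.checkExists().forPath(znode) != null) {
            zk.delete().deletingChildrenIfNeeded().forPath(znode);
        }
        zk.close();

        // Drop the leftover HDFS directories for this application.
        FileSystem fs = FileSystem.get(new Configuration());
        fs.delete(new Path("/.flink/" + appId), true);        // staging dir
        fs.delete(new Path("/flink/recovery/" + appId), true); // HA storage
        fs.close();
    }
}
{code}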
> When the Flink task ends in application mode, there may be issues with the
> Znode and HDFS files not being deleted
> -----------------------------------------------------------------------------------------------------------------
>
> Key: FLINK-34557
> URL: https://issues.apache.org/jira/browse/FLINK-34557
> Project: Flink
> Issue Type: Improvement
> Components: Deployment / YARN, Runtime / Task
> Affects Versions: 1.17.0, 1.16.2
> Reporter: tanliang
> Priority: Major
> Attachments: image-2024-03-01-15-38-48-396.png,
> image-2024-03-01-15-39-13-953.png, image-2024-03-01-15-39-39-524.png
>
>
> In Flink 1.16.2, we submit all jobs to YARN in application mode. However,
> several situations during use leave the znode undeleted and some files on
> HDFS undeleted. These should be removed after the job stops; otherwise they
> can tie up resources. Below are the situations I have encountered:
> # After the Flink job is submitted to the cluster, a jar conflict or a
> missing jar causes YARN to restart the job several times until it finally
> fails. At that point the znode persists, and files with the corresponding
> appid remain in the '/.flink' and '/flink/recovery' directories on HDFS
> (the config keys behind these locations are sketched after this list);
> # When a job is killed with the yarn kill command, it ends immediately with
> final state KILLED, and the result is the same as in the first case;
> # When the Flink job is disconnected from zk (we will not analyze the
> specific cause of the disconnection here): each time zk disconnects from
> the JM container, the job fails and YARN restarts it; after the last
> disconnection the job finally ends, and the same residue as above appears;
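> The locations named in these scenarios come from the HA options. A minimal
> sketch of the relevant config keys (the values are placeholders for our
> environment; the option constants are from flink-core):
> {code:java}
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.configuration.HighAvailabilityOptions;
>
> public class HaPaths {
>     public static void main(String[] args) {
>         Configuration conf = new Configuration();
>         conf.set(HighAvailabilityOptions.HA_MODE, "zookeeper");
>         // znode root: the residue lives below this path
>         conf.set(HighAvailabilityOptions.HA_ZOOKEEPER_ROOT, "/flink");
>         conf.set(HighAvailabilityOptions.HA_ZOOKEEPER_QUORUM, "zk-host:2181");
>         // HA storage: the '/flink/recovery' directory mentioned above
>         conf.set(HighAvailabilityOptions.HA_STORAGE_PATH, "hdfs:///flink/recovery");
>         System.out.println(conf);
>     }
> }
> {code}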
> !image-2024-03-01-15-38-48-396.png|width=877,height=171!
> !image-2024-03-01-15-39-13-953.png|width=882,height=174!
>
> !image-2024-03-01-15-39-39-524.png|width=1001,height=67!
>
> *Add:*
> After consulting the community and other colleagues, we found that the
> issue of znodes not being deleted had been raised before and was addressed
> by adding the closeAndCleanupAllData() method, so that a highly available
> cluster deletes everything in one place when it shuts down. In the
> situations above, however, file and data residue still occurs. In
> particular, when a job that was successfully submitted to the cluster is
> killed with the yarn kill command, Flink's console log even states that
> HDFS files will be left behind; I do not understand why the community kept
> this behavior instead of improving it. We also believe that znode residue
> should never exist: whatever the job's final status, the znodes must be
> cleaned up once the job has stopped.
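> To illustrate the distinction, here is a sketch assuming the 1.16
> HighAvailabilityServices interface; the surrounding control flow and the
> flag name are illustrative, not Flink's actual shutdown path:
> {code:java}
> // Illustrative only: closeAndCleanupAllData() is the method named above
> // that removes HA data; which branch runs at shutdown is the issue here.
> import org.apache.flink.runtime.highavailability.HighAvailabilityServices;
>
> public class HaShutdownSketch {
>     static void shutDown(HighAvailabilityServices ha, boolean jobReachedTerminalState)
>             throws Exception {
>         if (jobReachedTerminalState) {
>             ha.closeAndCleanupAllData(); // znodes + HDFS recovery files removed
>         } else {
>             ha.close(); // HA data kept for recovery; the scenarios above end here
>         }
>     }
> }
> {code}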
--
This message was sent by Atlassian Jira
(v8.20.10#820010)