[ https://issues.apache.org/jira/browse/FLINK-24894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17444992#comment-17444992 ]

liuzhuo commented on FLINK-24894:
---------------------------------

This question also confuses me. When a JobManager or TaskManager pod fails
over, we need the HA data to restore the job. But if the Deployment has been
stopped, why should the HA data still be kept for restoration when it is
restarted? In my view, whether the cluster is removed via `delete deployment`
or the job is stopped via `cancel job`, the HA data should be cleaned up at
the same time.
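For reference, the leftover HA ConfigMaps can be removed by hand. A minimal sketch, assuming the cluster id is `my-flink-cluster` (substitute your own `kubernetes.cluster-id`) and the default labels that Flink's Kubernetes HA services attach to the ConfigMaps they create:

```shell
# List the ConfigMaps created by Flink's Kubernetes HA services for this
# cluster (assumes cluster-id "my-flink-cluster"; adjust to your deployment).
kubectl get configmap -l app=my-flink-cluster,configmap-type=high-availability

# Delete them after removing the Deployment itself, so a later restart
# starts from the client-submitted JobGraph instead of recovering the old
# one (with the old parallelism) from high-availability.storageDir.
kubectl delete configmap -l app=my-flink-cluster,configmap-type=high-availability
```

Note that the checkpoint and JobGraph artifacts under `high-availability.storageDir` may also need to be removed separately if a completely fresh start is intended.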

> Flink on k8s with HA mode based on KubernetesHaServicesFactory enabled: when
> I deleted the job, the ConfigMap created by the HA mechanism was not
> deleted.
> -----------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-24894
>                 URL: https://issues.apache.org/jira/browse/FLINK-24894
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Kubernetes
>         Environment: 1.13.2 
>            Reporter: john
>            Priority: Major
>
> Flink on k8s, with HA mode based on KubernetesHaServicesFactory enabled.
> When I deleted the job, the ConfigMap created by the HA mechanism was not
> deleted. This leads to a problem: if my last parallelism was 100, changing it
> to 40 this time does not take effect. This is understandable, because the
> JobGraph is recovered from high-availability.storageDir and the client's
> setting is ignored.
> My question is: when deleting a job, the ConfigMap created by the HA
> mechanism is not deleted. Is this the default behavior of HA, or is it a bug?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)