[ 
https://issues.apache.org/jira/browse/FLINK-26916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17514568#comment-17514568
 ] 

Yang Wang commented on FLINK-26916:
-----------------------------------

For safety and compatibility, I agree with you that an upgrade should be done 
via a new job submission, especially when the Flink version changes. However, 
if the continuously increasing checkpoints could be obtained easily by an 
external tool (e.g. flink-kubernetes-operator), I would also lean towards not 
simply deleting the Flink cluster and instead retaining the HA data. I cannot 
find a more appropriate way.

Note: We cannot get the latest checkpoint from the REST API, since the 
JobManager might be in crash-loop backoff when we want to upgrade a Flink 
application.
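For illustration, here is a minimal sketch of how an external tool could pick 
the latest completed checkpoint from a checkpoint-history response shaped like 
Flink's GET /jobs/:jobid/checkpoints endpoint. The payload below is a made-up 
sample for this sketch, and as noted above, this only works while the 
JobManager is reachable:

```python
import json

# Sample payload shaped like Flink's GET /jobs/:jobid/checkpoints response
# (fields abbreviated; this is an illustrative sketch, not a live call).
sample = json.loads("""
{
  "history": [
    {"id": 41, "status": "COMPLETED", "external_path": "s3://bucket/cp-41"},
    {"id": 42, "status": "FAILED",    "external_path": null},
    {"id": 43, "status": "COMPLETED", "external_path": "s3://bucket/cp-43"}
  ]
}
""")

def latest_completed_checkpoint(payload):
    """Return the external path of the newest COMPLETED checkpoint, or None."""
    completed = [c for c in payload["history"] if c["status"] == "COMPLETED"]
    if not completed:
        return None
    return max(completed, key=lambda c: c["id"])["external_path"]

print(latest_completed_checkpoint(sample))  # s3://bucket/cp-43
```

An operator would instead read the checkpoint pointer from the retained HA 
data, which stays available even while the JobManager is crash-looping.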

 

I am afraid I cannot agree that the clusterId should never be reused. Users 
just need to do a manual clean-up of the job result store if they want to 
reuse the same cluster-id.
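As a rough sketch of that manual clean-up, assuming a file-system-based job 
result store whose directory layout (shown below under /tmp) is an assumption 
for this example:

```shell
# Illustrative clean-up of a file-system job result store before reusing a
# cluster-id; the directory layout here is an assumption for this sketch.
JRS_DIR="/tmp/flink-ha/job-result-store/my-cluster-id"

# Simulate an entry left behind by a previous run of the same cluster-id.
mkdir -p "$JRS_DIR"
touch "$JRS_DIR/previous-job-result.json"

# Remove the stale entries so a new submission with the same cluster-id
# is not treated as an already-finished job.
rm -rf "$JRS_DIR"
```

After this, submitting with the same cluster-id starts cleanly instead of 
picking up the previous job's recorded result.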

 

I am not familiar with native savepoints and will have a look at this 
solution.

> The Operator ignores job related changes (jar, parallelism) during last-state 
> upgrades
> --------------------------------------------------------------------------------------
>
>                 Key: FLINK-26916
>                 URL: https://issues.apache.org/jira/browse/FLINK-26916
>             Project: Flink
>          Issue Type: Bug
>          Components: Kubernetes Operator
>    Affects Versions: kubernetes-operator-0.1.0, kubernetes-operator-1.0.0
>            Reporter: Matyas Orhidi
>            Assignee: Yang Wang
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: kubernetes-operator-0.1.0
>
>
> RC: The old jobgraph is being reused when resuming



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
