[ https://issues.apache.org/jira/browse/FLINK-26916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17514555#comment-17514555 ]

David Morávek commented on FLINK-26916:
---------------------------------------

Hi, I think this whole thing boils down to how the LAST_STATE upgrade is 
implemented. A few observations:
- The operator is not shutting down the cluster properly; it simply deletes the 
underlying k8s resources via `FlinkUtils#deleteCluster(java.lang.String, 
java.lang.String, io.fabric8.kubernetes.client.KubernetesClient, boolean)`.
- The whole implementation looks more or less like a JM failover scenario. 
From Flink's standpoint the JM disappears for no obvious reason, which 
leaves the job in a "SUSPENDED" state. This also implies that all the HA data must 
remain untouched so Flink can restore to the previous state (a rough sketch of 
this follows the list).
- This code path is not designed for application upgrades. An upgrade should 
always be done via a new job submission.
- The ClusterId shouldn't be reused. This guarantee is needed, for example, for 
implementing a reliable cluster shutdown (see the JobId semantics section of 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726435).
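
To make the first two observations concrete, here is a minimal sketch of what the delete-only path roughly amounts to, assuming a fabric8 6.x client. The namespace, cluster-id and the HA ConfigMap label below are hypothetical placeholders, not the operator's actual code:

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class LastStateDeleteSketch {
    public static void main(String[] args) {
        // Hypothetical names; the real operator derives these from the CR.
        String namespace = "flink-apps";
        String clusterId = "my-flink-job";

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // What the delete-only path effectively does: remove the JobManager
            // deployment without asking Flink to shut down gracefully.
            client.apps()
                  .deployments()
                  .inNamespace(namespace)
                  .withName(clusterId)
                  .delete();

            // Crucially, the HA metadata (ConfigMaps holding the checkpoint
            // pointers and the JobGraph) is left untouched. When a new JM comes
            // up with the same cluster-id, Flink treats this as a JM failover
            // and restores the *old* JobGraph, which is why jar/parallelism
            // changes are ignored.
            client.configMaps()
                  .inNamespace(namespace)
                  .withLabel("app", clusterId)
                  .list()
                  .getItems()
                  .forEach(cm -> System.out.println("HA ConfigMap kept: "
                          + cm.getMetadata().getName()));
        }
    }
}
```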

Could this be addressed by using native savepoints instead?
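
For context, a rough sketch of what that could look like, assuming the stop-with-savepoint REST endpoint and its NATIVE formatType from Flink 1.15 / FLIP-203; the JM address, job id and savepoint directory below are placeholders, not anything from this ticket:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NativeStopWithSavepointSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders; a real client would resolve these from the deployment.
        String jobManagerRest = "http://my-flink-jm:8081";
        String jobId = "00000000000000000000000000000000";

        // Stop-with-savepoint; "formatType": "NATIVE" requests a native-format
        // savepoint (assumption: available since Flink 1.15 / FLIP-203).
        String body = "{"
                + "\"targetDirectory\": \"s3://my-bucket/savepoints\","
                + "\"drain\": false,"
                + "\"formatType\": \"NATIVE\""
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jobManagerRest + "/jobs/" + jobId + "/stop"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The endpoint returns a trigger id that can be polled for the
        // resulting savepoint path; the upgraded job would then be submitted
        // as a brand-new job restoring from that savepoint.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```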

> The Operator ignores job related changes (jar, parallelism) during last-state 
> upgrades
> --------------------------------------------------------------------------------------
>
>                 Key: FLINK-26916
>                 URL: https://issues.apache.org/jira/browse/FLINK-26916
>             Project: Flink
>          Issue Type: Bug
>          Components: Kubernetes Operator
>    Affects Versions: kubernetes-operator-0.1.0, kubernetes-operator-1.0.0
>            Reporter: Matyas Orhidi
>            Assignee: Yang Wang
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: kubernetes-operator-0.1.0
>
>
> RC: The old jobgraph is being reused when resuming


