[ https://issues.apache.org/jira/browse/FLINK-34007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807229#comment-17807229 ]
Matthias Pohl edited comment on FLINK-34007 at 1/16/24 1:01 PM:
----------------------------------------------------------------
The timeout of the renew operation leads to a {{stopLeading}} call in [LeaderElector:96|https://github.com/fabric8io/kubernetes-client/blob/0f6c696509935a6a86fdb4620caea023d8e680f1/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/extended/leaderelection/LeaderElector.java#L119], which results in the LeaderElector being stopped (the {{stopped}} flag is set to true in [LeaderElector:119|https://github.com/fabric8io/kubernetes-client/blob/0f6c696509935a6a86fdb4620caea023d8e680f1/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/extended/leaderelection/LeaderElector.java#L119] and never reset). Any subsequent call to {{tryAcquireOrRenew}} is a no-op because the {{stopped}} flag is true (see [LeaderElector#tryAcquireOrRenew:211|https://github.com/fabric8io/kubernetes-client/blob/0f6c696509935a6a86fdb4620caea023d8e680f1/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/extended/leaderelection/LeaderElector.java#L211]). Flink doesn't re-instantiate the {{LeaderElector}} but calls {{LeaderElector#run}} on the same (now stopped) instance in the [KubernetesLeaderElectionDriver#notLeader|https://github.com/apache/flink/blob/11259ef52466889157ef473f422ecced72bab169/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/highavailability/KubernetesLeaderElectionDriver.java#L214] callback. That explains the behavior in the Flink 1.18 deployments, if I didn't miss anything. It sounds like a bug in the fabric8io k8s client implementation, which we should be able to work around by recreating the LeaderElector on leadership loss. That also explains why restarting the JobManager fixes the issue: it causes a new LeaderElector to be instantiated. I'm still puzzled about the issues in other Flink versions, though.
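The failure mode above can be reduced to a small self-contained sketch. This is not the actual fabric8 {{LeaderElector}} code; the class, fields, and method names below are simplified stand-ins that model only the relevant behavior: a {{stopped}} flag that is set on leadership loss, never cleared, and silently turns every later renew attempt into a no-op, so only a freshly instantiated elector can acquire leadership again.

```java
// Minimal model of the "stopped" flag behavior described above.
// Hypothetical simplified class, NOT the fabric8 API.
public class StoppedFlagDemo {

    static class SimpleLeaderElector {
        // Mirrors LeaderElector's stopped flag: set once, never reset.
        private boolean stopped = false;

        // Models tryAcquireOrRenew: does nothing once stopped is true.
        boolean tryAcquireOrRenew() {
            if (stopped) {
                return false; // permanent no-op, the bug's symptom
            }
            return true; // acquired/renewed leadership
        }

        // Models stopLeading, triggered by the renew timeout.
        void stopLeading() {
            stopped = true;
        }
    }

    public static void main(String[] args) {
        SimpleLeaderElector elector = new SimpleLeaderElector();
        System.out.println(elector.tryAcquireOrRenew()); // true: leadership acquired

        elector.stopLeading(); // renew timeout -> stopLeading

        // Re-running the SAME instance (what the notLeader callback does)
        // never regains leadership:
        System.out.println(elector.tryAcquireOrRenew()); // false

        // The proposed workaround: instantiate a fresh elector on loss
        // (which is also what a JobManager restart effectively does):
        SimpleLeaderElector fresh = new SimpleLeaderElector();
        System.out.println(fresh.tryAcquireOrRenew()); // true
    }
}
```

Running this prints {{true}}, {{false}}, {{true}}: the stopped instance stays dead, while a new instance can lead again, matching the observed fix of restarting the JobManager.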
> Flink Job stuck in suspend state after losing leadership in HA Mode
> -------------------------------------------------------------------
>
>                 Key: FLINK-34007
>                 URL: https://issues.apache.org/jira/browse/FLINK-34007
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.16.3, 1.17.2, 1.18.1, 1.18.2
>            Reporter: Zhenqiu Huang
>            Priority: Major
>         Attachments: Debug.log, LeaderElector-Debug.json, job-manager.log
>
> The observation is that the JobManager goes into the suspended state, with a failed container unable to register itself with the ResourceManager after a timeout.
> JM log: see attached.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)