[
https://issues.apache.org/jira/browse/FLINK-30518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17652503#comment-17652503
]
Gyula Fora commented on FLINK-30518:
------------------------------------
You are right, [~tbnguyen1407], those issues are not related; my mistake.
However, the operator does not control how the flink-conf for jobs is mounted.
We simply use the Flink client to deploy the cluster, which mounts the
ConfigMap as part of the deployment, so this is not something we can fix
in the operator.
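For context, this is roughly how the client-generated JobManager spec mounts
the configuration (an illustrative sketch using Flink's native Kubernetes
defaults, not something the operator adds itself):
{code:yaml}
# sketch of the client-generated JobManager pod spec
# (volume/container names follow Flink's native Kubernetes defaults)
volumes:
  - name: flink-config-volume
    configMap:
      name: flink-config-<cluster-id>
containers:
  - name: flink-main-container
    volumeMounts:
      - name: flink-config-volume
        mountPath: /opt/flink/conf   # ConfigMap-backed files are read-only
{code}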
cc [~bgeng777] [~yangwang166]
I don't remember this being a problem earlier. Is it possible that
Kubernetes HA with native mode doesn't support multiple JM replicas?
> [flink-operator] Kubernetes HA not working due to wrong jobmanager.rpc.address
> ------------------------------------------------------------------------------
>
> Key: FLINK-30518
> URL: https://issues.apache.org/jira/browse/FLINK-30518
> Project: Flink
> Issue Type: Bug
> Components: Kubernetes Operator
> Affects Versions: kubernetes-operator-1.3.0
> Reporter: Binh-Nguyen Tran
> Priority: Major
> Attachments: flink-configmap.png
>
>
> Since flink-conf.yaml is mounted as a read-only ConfigMap, the
> /docker-entrypoint.sh script is not able to inject the correct Pod IP into
> `jobmanager.rpc.address` (see the entrypoint sketch below). This leads to the
> same address (e.g. flink.ns-ext) being set for all JobManager pods. This causes:
> (1) flink-cluster-config-map always contains the wrong address for all 3
> component leaders (see the screenshot; it should be the pod IP instead of the
> clusterIP service name)
> (2) Accessing the Web UI when jobmanager.replicas > 1 is not possible, with the error
> {code:java}
> {"errors":["Service temporarily unavailable due to an ongoing leader
> election. Please refresh."]} {code}
>
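> Roughly what the entrypoint attempts on startup (a paraphrased sketch of
> flink-docker's docker-entrypoint.sh, not the exact script; ${POD_IP} stands in
> for whatever address it is given):
> {code:bash}
> # paraphrased sketch: rewrite jobmanager.rpc.address in flink-conf.yaml
> CONF_FILE="${FLINK_HOME}/conf/flink-conf.yaml"
> if grep -E "^jobmanager.rpc.address:" "${CONF_FILE}" > /dev/null; then
>     # fails here: the file lives on a read-only ConfigMap mount
>     sed -i -e "s|jobmanager.rpc.address:.*|jobmanager.rpc.address: ${POD_IP}|g" "${CONF_FILE}"
> else
>     # appending fails for the same reason
>     echo "jobmanager.rpc.address: ${POD_IP}" >> "${CONF_FILE}"
> fi
> {code}
>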
> ~ flinkdeployment.yaml ~
> {code:yaml}
> spec:
>   flinkConfiguration:
>     high-availability: kubernetes
>     high-availability.storageDir: "file:///opt/flink/storage"
>     ...
>   jobManager:
>     replicas: 3
>     ...
> {code}
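>
> For illustration, the leader addresses can be checked directly in the HA
> ConfigMap (ConfigMap name taken from the attached screenshot; the namespace is
> a placeholder):
> {code:bash}
> # inspect the leader entries stored by Kubernetes HA
> kubectl -n <namespace> get configmap flink-cluster-config-map -o yaml
> # every leader entry points at the clusterIP service name (e.g. flink.ns-ext)
> # instead of the leading JobManager's pod IP
> {code}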