[ 
https://issues.apache.org/jira/browse/FLINK-30518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17652520#comment-17652520
 ] 

Biao Geng commented on FLINK-30518:
-----------------------------------

[~gyfora] I see. Thanks for the information. I misunderstood the problem 
somehow :(
But I just tried the 1.3.0 operator with Flink 1.16 to run the 
basic-checkpoint-ha-example with JobManager replicas set to 3, and it works fine 
as well.
[~tbnguyen1407] would you mind sharing the full deployment yaml for this 
problem?       
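For reference, this is roughly the spec I used for the test (a minimal sketch along 
the lines of the basic-checkpoint-ha-example; the image tag, storage paths and 
example job jar below are assumptions on my side, not necessarily matching your setup):
{code:yaml}
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-checkpoint-ha-example
spec:
  image: flink:1.16            # assumed image tag
  flinkVersion: v1_16
  flinkConfiguration:
    high-availability: kubernetes
    # assumed storage paths; they need a writable volume mounted at this location
    high-availability.storageDir: file:///opt/flink/storage/ha
    state.checkpoints.dir: file:///opt/flink/storage/checkpoints
  serviceAccount: flink
  jobManager:
    replicas: 3                # 3 JobManagers, leader election via Kubernetes HA
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    # assumed example job shipped with the Flink image
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: last-state
{code}
Note that jobmanager.rpc.address is not set explicitly anywhere in that spec.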
 

> [flink-operator] Kubernetes HA not working due to wrong jobmanager.rpc.address
> ------------------------------------------------------------------------------
>
>                 Key: FLINK-30518
>                 URL: https://issues.apache.org/jira/browse/FLINK-30518
>             Project: Flink
>          Issue Type: Bug
>          Components: Kubernetes Operator
>    Affects Versions: kubernetes-operator-1.3.0
>            Reporter: Binh-Nguyen Tran
>            Priority: Major
>         Attachments: flink-configmap.png, screenshot-1.png
>
>
> Since flink-conf.yaml is mounted as a read-only ConfigMap, the 
> /docker-entrypoint.sh script is not able to inject the correct Pod IP into 
> `jobmanager.rpc.address`. This leads to the same address (e.g. flink.ns-ext) being 
> set for all JobManager pods. This causes:
> (1) flink-cluster-config-map always contains the wrong address for all 3 
> component leaders (see screenshot; it should be the pod IP instead of the clusterIP 
> service name)
> (2) Accessing the Web UI when jobmanager.replicas > 1 is not possible, with error:
> {code:json}
> {"errors":["Service temporarily unavailable due to an ongoing leader election. Please refresh."]}
> {code}
>  
> ~ flinkdeployment.yaml ~
> {code:yaml}
> spec:
>   flinkConfiguration:
>     high-availability: kubernetes
>     high-availability.storageDir: "file:///opt/flink/storage"
>     ...
>   jobManager:
>     replicas: 3
>   ...
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
