Hi,
I'm running OpenShift 3.9 on AWS with masters in HA mode, behind Classic
ELBs doing TCP load balancing. When I restart a master, the ELB does the
right thing from outside the cluster and takes that master out of service.
However, anything that talks to the Kubernetes API from inside the cluster
seems unaware the master is gone, and I get failures while I'm serially
restarting masters.
Is there some way to point the kubernetes service at the load balancer?
Should I update the kubernetes Endpoints object to use the ELB's IP address
instead of the individual master addresses, and is that a valid approach?
Alternatively, is there a way to tell openshift-ansible to use the load
balancer when it creates the kubernetes service?
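
To make the idea concrete, here's a rough sketch of the kind of Endpoints
object I'm imagining (the ELB IP 10.2.20.10 is made up, and since a Classic
ELB's IPs can change over time I'm not sure this is even workable; I also
don't know whether the apiserver would just reconcile the addresses back):

apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
- addresses:
  # hypothetical ELB IP in place of the three master IPs
  - ip: 10.2.20.10
  ports:
  # dns/dns-tcp ports omitted for brevity
  - name: https
    port: 443
    protocol: TCP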
Thanks,
Joel
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2018-06-27T06:30:50Z'
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: '45'
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: a224fd75-79d3-11e8-bd57-0a929ba50438
spec:
  clusterIP: 172.30.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 8053
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 8053
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: '2018-06-27T06:30:50Z'
  name: kubernetes
  namespace: default
  resourceVersion: '83743'
  selfLink: /api/v1/namespaces/default/endpoints/kubernetes
  uid: a22a0283-79d3-11e8-bd57-0a929ba50438
subsets:
- addresses:
  - ip: 10.2.12.53
  - ip: 10.2.12.72
  - ip: 10.2.12.91
  ports:
  - name: dns
    port: 8053
    protocol: UDP
  - name: dns-tcp
    port: 8053
    protocol: TCP
  - name: https
    port: 443
    protocol: TCP