Re: How to make 172.30.0.1 (kubernetes service) health checked?

2018-09-10 Thread Clayton Coleman
Masters should actually remove themselves, but maybe that regressed.  I’ll
try to take a look, but removing themselves when they receive a SIGTERM is a
good idea.
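
In the meantime, if you want to check whether the self-removal still works on
your cluster, something like this should show it (purely illustrative; how you
restart the API server depends on how your masters are installed):

# watch the addresses listed in the kubernetes endpoints object
watch -n 2 'oc get endpoints kubernetes -n default -o yaml'

# in another terminal, restart the API server on one master; that master's IP
# should disappear from the addresses within ~15s and return once it is healthy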



Re: How to make 172.30.0.1 (kubernetes service) health checked?

2018-09-10 Thread Joel Pearson
Hi Clayton,

Sorry for the long delay, but I’ve been thinking about this more and
I’m wondering whether it’s safe to remove a master from the endpoints object
just before restarting it (say, from Ansible), so that failures aren’t seen
inside the cluster.

Or would something in Kubernetes just go and add the master back to the
endpoints object?

Alternatively, would it be possible to tell Kubernetes not to add the
individual masters to that endpoints object and to use a load balancer
instead - say, a private ELB?

Or are there future features in Kubernetes that will make master failover
more reliable internally?
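
For concreteness, the kind of pre-restart step I had in mind is roughly the
following (just a sketch against the endpoints object shown further down this
thread; I haven’t verified that the apiserver won’t immediately re-add the
address, which is really what I’m asking):

# drop the second master (10.2.12.72 in my cluster) from the kubernetes
# endpoints before restarting it; the "test" op guards against the address
# index having shifted
oc patch endpoints kubernetes -n default --type=json -p '[
  {"op": "test",   "path": "/subsets/0/addresses/1/ip", "value": "10.2.12.72"},
  {"op": "remove", "path": "/subsets/0/addresses/1"}
]'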

Thanks,

Joel


Re: How to make 172.30.0.1 (kubernetes service) health checked?

2018-06-27 Thread Clayton Coleman
In OpenShift 3.9, when a master goes down the endpoints object should be
updated within 15s (the TTL on the record for that master).  You can check
the output of "oc get endpoints -n default kubernetes" - if you still see
the master's IP in that list after 15s then something else is wrong.
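
For example, something like this (an illustrative one-liner, nothing more)
should stop printing the restarted master's IP within roughly 15s:

oc get endpoints kubernetes -n default -o jsonpath='{.subsets[*].addresses[*].ip}{"\n"}'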



How to make 172.30.0.1 (kubernetes service) health checked?

2018-06-27 Thread Joel Pearson
Hi,

I'm running OpenShift 3.9 on AWS with masters in HA mode behind Classic
ELBs doing TCP load balancing.  If I restart a master, the ELB does the
right thing from outside the cluster and takes that master out of service.
However, if something tries to talk to the Kubernetes API from inside the
cluster, Kubernetes seems to be unaware that the master is missing, and I
get failures while serially restarting masters.

Is there some way that I can point the kubernetes service at the load
balancer?  Maybe I should update the kubernetes endpoints object to use the
ELB IP address instead of the actual master addresses - is that a valid
approach?  And is there a way to tell openshift-ansible to use the load
balancer when it creates the kubernetes service?
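
For clarity, what I'm imagining is replacing the per-master addresses with the
load balancer's address, roughly like this (10.2.0.100 is a made-up placeholder
for a private ELB IP, not a real address from my cluster, and the DNS ports are
omitted for brevity - I don't know whether the apiserver would tolerate or
simply overwrite this):

apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
- addresses:
  - ip: 10.2.0.100   # hypothetical internal ELB address
  ports:
  - name: https
    port: 443
    protocol: TCP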

Thanks,

Joel


apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2018-06-27T06:30:50Z'
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: '45'
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: a224fd75-79d3-11e8-bd57-0a929ba50438
spec:
  clusterIP: 172.30.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 8053
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 8053
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
status:
  loadBalancer: {}


apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: '2018-06-27T06:30:50Z'
  name: kubernetes
  namespace: default
  resourceVersion: '83743'
  selfLink: /api/v1/namespaces/default/endpoints/kubernetes
  uid: a22a0283-79d3-11e8-bd57-0a929ba50438
subsets:
- addresses:
  - ip: 10.2.12.53
  - ip: 10.2.12.72
  - ip: 10.2.12.91
  ports:
  - name: dns
    port: 8053
    protocol: UDP
  - name: dns-tcp
    port: 8053
    protocol: TCP
  - name: https
    port: 443
    protocol: TCP
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users