Hi, 

In production we run two pods, each with multiple containers, under a 
StatefulSet controller. If the node hosting one of these pods becomes 
unreachable, the pod moves to the "Unknown" state and the node itself goes 
"NotReady". The pod stays in that state for as long as the node remains 
NotReady. I have seen a change introduced as part of the k8s 1.5 
release: https://github.com/kubernetes/kubernetes/pull/35235

Does this mean that, when pods are managed by a StatefulSet controller and 
their node becomes unreachable, the admin has to forcefully delete the pod 
to clear the stale entry from the API server?
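
If that's the case, I assume the force deletion would look something like 
this (the pod name "web-0" is just a placeholder, not our actual pod):

  kubectl delete pod web-0 --grace-period=0 --force

From what I can tell, on 1.5+ the --force flag is required together with 
--grace-period=0 to actually delete the pod from the API server immediately.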

We are currently on k8s 1.5.2. Could anyone please let me know if there is 
a better way to remove pods hanging in the "Unknown" state in such failure 
scenarios?
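
For instance, if the node is permanently gone, would deleting the Node 
object be enough to let the stale pods get cleaned up?

  kubectl delete node <node-name>

I believe pods bound to a node that no longer exists in the API server get 
garbage-collected, but I am not sure whether that is considered safe for 
StatefulSet pods.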
