Hello-

We've been running k8s on GKE for over a year in various production, 
staging, and development clusters (v1.2.x) and have never seen an issue 
like this until our latest cluster, which uses 1.4.5 and now 1.4.6.

Since this particular cluster is a development cluster, our services are 
restarted every time a build succeeds, which is several times daily. Within 
a few deploys, the services stop working because they are unable to find 
their external dependencies. The root cause is that DNS is no longer 
working on certain nodes because the kube-proxy and kube-dns pods were 
"evicted" due to "Low resources on the node". Each node in question is an 
n1-standard (3.75 GB memory) and we are running very low-memory Golang 
applications. There is no memory pressure from our pods.
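
For context, this is roughly how we confirm the evictions from the API (an 
illustrative client-go sketch, assuming a recent client-go and a kubeconfig 
at the default path; kube-system is where kube-proxy and kube-dns run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List kube-system pods and print any that the kubelet evicted,
	// along with the node they were on and the eviction message.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Status.Reason == "Evicted" {
			fmt.Printf("%s (node %s): %s\n", p.Name, p.Spec.NodeName, p.Status.Message)
		}
	}
}

A plain "kubectl get pods -n kube-system" against the affected cluster shows 
the same pods stuck in the Evicted state.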

Has anyone else seen anything like this on GKE? How do we "fix" a node once 
these pods have been evicted, and how do we stop it from happening in the 
future?

Thanks,
-Phil

