Re: [kubernetes-users] Weird GKE-GCE routing behavior

2017-07-04 Thread 'Tim Hockin' via Kubernetes user discussion and Q
Were you testing weave, or running it for real? On Jul 4, 2017 7:28 AM, "Itamar O" wrote: > solved it. > I found a collision in the routing table of the 10.240.0.2 host... > # route > Kernel IP routing table > Destination Gateway Genmask Flags Metric

[kubernetes-users] Re: Unable to connect to Internet from a container deployed in Kubernetes

2017-07-04 Thread kuzmentsov
On Saturday, October 15, 2016 at 8:26:17 UTC+3, Gopi wrote: > Getting help in the Kubernetes-users slack channel. Thank you. Mr Gopi, could you please share the result of your research on this matter? It sounds like the problem you had was resolved. I have run into exactly the same problem.

Re: [kubernetes-users] Weird GKE-GCE routing behavior

2017-07-04 Thread Itamar O
solved it. I found a collision in the routing table of the 10.240.0.2 host...

# route
Kernel IP routing table
Destination  Gateway     Genmask  Flags  Metric  Ref  Use  Iface
default      10.240.0.1  0.0.0.0  UG     0       0    0    eth0
10.32.0.0
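A routing collision like the one above means two route entries cover overlapping address space, so traffic for the pod network can be sent out the wrong interface. This can be checked with Python's stdlib ipaddress module; the prefixes below are hypothetical (the actual overlapping destination is truncated in the snippet, beyond the 10.32.0.0 entry shown):

```python
import ipaddress

def routes_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two route destinations cover overlapping address space."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return a.overlaps(b)

# Hypothetical prefixes: a Weave pod range vs. another route on the host.
print(routes_overlap("10.32.0.0/12", "10.36.0.0/14"))   # True: collision
print(routes_overlap("10.32.0.0/12", "10.128.0.0/9"))   # False: disjoint
```

When two routes do overlap, the kernel picks the more specific (longest) prefix, which is exactly how a stray route can silently hijack pod traffic.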

Re: [kubernetes-users] Critical pod kube-system_elasticsearch-logging-0 doesn't fit on any node.

2017-07-04 Thread Warren Strange
Sometimes it takes a while for PVs to be provisioned, so this error often goes away if you give it time. If the PVC eventually gets bound, that is probably not the issue. It looks like you are running out of memory or CPU; kubectl describe on the pod should tell you which. You either need
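The "doesn't fit on any node" condition comes down to a simple sum: the pod's resource requests plus the requests of pods already scheduled on a node must not exceed that node's allocatable capacity. A rough sketch of that check (not the actual scheduler code; the node sizes and requests below are illustrative):

```python
def fits(node_alloc_mcpu, node_alloc_mem_mi, existing_requests, pod_request):
    """Return True if a pod's (millicore, MiB) request fits on the node.

    existing_requests: list of (millicore, MiB) tuples for scheduled pods.
    """
    used_cpu = sum(cpu for cpu, _ in existing_requests)
    used_mem = sum(mem for _, mem in existing_requests)
    cpu_req, mem_req = pod_request
    return (used_cpu + cpu_req <= node_alloc_mcpu and
            used_mem + mem_req <= node_alloc_mem_mi)

# Illustrative node: 2000 millicores, 7500 MiB allocatable.
existing = [(500, 1024), (1000, 4096)]           # pods already on the node
print(fits(2000, 7500, existing, (1000, 2048)))  # False: CPU would exceed 2000m
print(fits(2000, 7500, existing, (250, 1024)))   # True
```

If the check fails on every node, the scheduler reports that the pod doesn't fit anywhere, which is what the Events section of kubectl describe surfaces.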

[kubernetes-users] Create K8S with my own certificates

2017-07-04 Thread Eddie Mashayev
Hi, I have a question regarding *kube-up.sh*: how do I use my own certificates instead of the defaults generated by the script? Also, is there a better way to create K8S on GCE (*not GKE*) other than *kube-up.sh*?

Re: [kubernetes-users] Applying resource limit after pod startsup

2017-07-04 Thread Matthias Rampke
Philosophically, the problem is what Kubernetes could do with the reclaimed CPU. The pod could restart at any time, so Kubernetes can't really promise that CPU time to a different pod. It can let others use it on a best-effort basis, but that's already the case when you make a request and don't use it
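This best-effort sharing falls out of how CPU requests are enforced: the kubelet maps a CPU request to a relative cgroup weight (cpu.shares, roughly millicores × 1024 / 1000), which only takes effect under contention, so CPU that a pod requested but isn't using stays available to other workloads. A hedged sketch of that mapping (the clamping details are as I understand the kubelet's behavior, not copied from its source):

```python
MIN_SHARES = 2          # kernel minimum for cpu.shares
SHARES_PER_CPU = 1024   # cgroup shares corresponding to one full CPU

def milli_cpu_to_shares(milli_cpu: int) -> int:
    """Convert a CPU request in millicores to a cgroup cpu.shares weight.

    Shares are relative weights: they only throttle a container when
    the CPU is contended, so requested-but-idle CPU remains usable by
    other pods on a best-effort basis.
    """
    if milli_cpu == 0:
        return MIN_SHARES
    return max(milli_cpu * SHARES_PER_CPU // 1000, MIN_SHARES)

print(milli_cpu_to_shares(500))   # 512: half a CPU's weight
print(milli_cpu_to_shares(1000))  # 1024: one full CPU's weight
print(milli_cpu_to_shares(1))     # 2: clamped to the kernel minimum
```

Because shares are only weights, taking CPU back from a pod that requested it is not a hard reclaim: the moment the pod wakes up, its weight entitles it to that time again.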