[kubernetes-users] Re: Proposal for a new SIG: SIG-GCP

2017-08-11 Thread Jaice Singer DuMars
Hi Michael, With no opposition, it seems that this has approval. For those curious about the process, these are the current guidelines : - Propose the new SIG publicly, including a brief mission

[kubernetes-users] Re: SSH into pod

2017-08-11 Thread Warren Strange
Rather than setting up ssh, it may be easier to use kubectl exec to get a shell inside your pod: kubectl exec my-pod-xxx -it /bin/sh On Friday, August 11, 2017 at 12:11:30 AM UTC-6, Eswari wrote: > > Hi, > > I have exposed my pod externally (public ip). > Tried to ssh to my pod using *ssh
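For reference, current kubectl versions expect the `--` separator before the command to run; assuming a hypothetical pod named `my-pod-xxx`, the suggestion above would look like:

```shell
# Open an interactive shell inside a running pod (pod name is hypothetical).
# -i keeps stdin open, -t allocates a TTY, and -- separates kubectl's own
# flags from the command executed in the container.
kubectl exec -it my-pod-xxx -- /bin/sh

# For a multi-container pod, name the container explicitly with -c:
kubectl exec -it my-pod-xxx -c my-container -- /bin/sh
```

Unlike ssh, this goes through the API server, so the pod does not need a public IP or an sshd process at all.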

[kubernetes-users] Re: Proposal for a new SIG: SIG-GCP

2017-08-11 Thread Ihor Dvoretskyi
Yes! Please proceed with the formal process. On Fri, Aug 11, 2017, 9:44 PM 'Michael Rubin' via Kubernetes developer/contributor discussion wrote: > Looks like we have enough belief this is useful. Is the next step to > just start forming the SIG? > > mrubin > >

[kubernetes-users] Re: Grafana Data Lost after Minikube restart

2017-08-11 Thread 'Zack Butcher' via Kubernetes user discussion and Q
In the default Istio deployment's configs, Grafana is not set up to write data to any persistent volume; you can see the deployment here . There are a few ways you could
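A minimal sketch of one of the options Zack alludes to: back Grafana's data directory with a PersistentVolumeClaim. This assumes a cluster with a default StorageClass; the claim name and size are illustrative, not taken from Istio's actual manifests.

```yaml
# Illustrative PVC for Grafana's data directory (names are assumptions).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-storage
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```

The Grafana Deployment would then mount this claim at `/var/lib/grafana` (Grafana's default data path) in place of an ephemeral volume, so dashboards and settings survive a Minikube restart.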

[kubernetes-users] Re: Grafana Data Lost after Minikube restart

2017-08-11 Thread Kamesh Sampath
Nothing I did; just a standard Istio-on-Minikube installation, no customizations. On Friday, August 11, 2017 at 10:20:17 PM UTC+5:30, Rodrigo Campos wrote: > > (Moving to kubernetes users) > > On Fri, Aug 11, 2017 at 03:30:35AM -0700, Kamesh Sampath wrote: > > > > why frequently i see the

[kubernetes-users] Unable to write into K8S mounted volume

2017-08-11 Thread Kubernetes user discussion and Q
I am trying to mount a directory from the local VM onto my pod and write into it. Local directory on my VM is /app/logs/ The directory I want to write it in on my pod is /logs My RC config is : apiVersion: v1 kind: ReplicationController metadata: name: servicemix-controller spec:
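A sketch of how the volume wiring for this kind of hostPath mount usually looks. The controller name and paths are taken from the post; the label, container name, and image are assumptions.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: servicemix-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: servicemix            # label is an assumption
    spec:
      containers:
      - name: servicemix           # container name/image are assumptions
        image: my-servicemix-image
        volumeMounts:
        - name: logs
          mountPath: /logs         # path inside the pod, from the post
      volumes:
      - name: logs
        hostPath:
          path: /app/logs          # directory on the node VM, from the post
```

Note that write failures with hostPath are often plain file-system permissions: the directory on the node must be writable by the UID the container runs as, which Kubernetes itself does not change.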

[kubernetes-users] Re: Proposal for a new SIG: SIG-GCP

2017-08-11 Thread 'Adam Worrall' via Kubernetes user discussion and Q
Thanks, Jaice! I'll follow up on Monday. Coincidentally, I'm off on vacation for ~2 weeks starting Tuesday, so there might be a brief hiatus in subsequent action. But hopefully I'll be able to leave knowing that SIG-GCP will indeed become a thing :) - Adam On Fri, Aug 11, 2017 at 12:03 PM

Re: [kubernetes-users] Generally speaking, separate apps = separate clusters, right?

2017-08-11 Thread terencekent
Tim, This is a hugely appreciated answer. Sorry the question was so generic! For now, we'll choose separate clusters - understanding the sub-optimal side effects of that decision. We'll revisit the single-or-multiple cluster question in a year or so, once we've had a reasonable amount of

Re: [kubernetes-users] Generally speaking, separate apps = separate clusters, right?

2017-08-11 Thread 'David Oppenheimer' via Kubernetes user discussion and Q
I think Tim described the situation well. Multi-tenancy is something that we are giving increased attention to now. For many multi-tenant scenarios, the required features are already there, just without a great UX (it's "possible" but not "easy"), and without best-practices docs to pull together

[kubernetes-users] Re: The connection to the server localhost:8080 was refused - did you specify the right host or port?

2017-08-11 Thread AJ NOURI
On Sunday, May 14, 2017 at 20:04:41 UTC+2, twel...@noon.com wrote: > I'm following the kubernetes getting started guide for the first time: > > https://kubernetes.io/docs/getting-started-guides/gce/#starting-a-cluster > > However, when I get to the step: > > kubectl get --all-namespaces

Re: [kubernetes-users] Generally speaking, separate apps = separate clusters, right?

2017-08-11 Thread Brandon Philips
Great overview Tim! This should be an FAQ item somewhere. On Fri, Aug 11, 2017 at 4:55 PM 'Tim Hockin' via Kubernetes user discussion and Q wrote: > This is not an easy question to answer without opining. I'll try. > > Kubernetes was designed to model Borg.

[kubernetes-users] k8s networking / cluster size limits confusion

2017-08-11 Thread David Rosenstrauch
According to the docs, k8s can support clusters of up to 150,000 pods. (See https://kubernetes.io/docs/admin/cluster-large/) But given k8s' networking model, I'm a bit puzzled on how that would work. It seems like a typical setup is to assign a service-cluster-ip-range with a /16 CIDR. (Say

Re: [kubernetes-users] Unable to write into K8S mounted volume

2017-08-11 Thread Rodrigo Campos
If you are using NFS, then you should configure the NFS (IIRC /etc/exports, but it's been some years now :-D) with proper permissions. Have you done that too? On Fri, Aug 11, 2017 at 9:59 AM, Kubernetes user discussion and Q wrote: > I am trying to mount a
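For context, a writable NFS export typically looks like this on the server side; the path matches the post, while the client subnet and options here are only examples.

```shell
# /etc/exports on the NFS server: grant read-write to the client network.
# no_root_squash is needed if containers write as root; use with care.
#
#   /app/logs  10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash)
#
# After editing /etc/exports, reload the export table:
sudo exportfs -ra
```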

Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-11 Thread Matthias Rampke
Oh, hold on. The *service cluster IP range* is not for pod IPs at all. It's for the ClusterIP of services, so you can have up to 64k services in a cluster at the default setting. The range for pods is the --cluster-cidr flag on kube-controller-manager. On Fri, Aug 11, 2017 at 3:05 PM David
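The "64k services" figure is just the size of a /16. Assuming the default --service-cluster-ip-range is a /16, the arithmetic is:

```shell
# Number of addresses in a /16 service CIDR: 2^(32 - prefix).
prefix=16
echo $(( 2 ** (32 - prefix) ))   # prints 65536
```

A handful of those addresses are reserved (network address, the API server's ClusterIP), which is why the practical figure is quoted as "up to 64k".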

Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-11 Thread Ben Kochie
Kubernetes will be giving a /24 to each node, not each pod. Each node will give one IP out of that /24 to a pod it controls. This default means you can have 253 pods per node. This can of course be adjusted depending on the size of your pods and nodes. This means that you can fully utilize the
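Ben's per-node number follows directly from the /24 block size; the 253 figure assumes a few addresses per block are reserved (network, broadcast, gateway). A quick check of the arithmetic:

```shell
# A /24 per node holds 2^(32 - 24) = 256 addresses.
block=24
addresses=$(( 2 ** (32 - block) ))
echo "$addresses"           # prints 256
# Reserving network, broadcast, and a gateway address leaves 253 for pods.
echo $(( addresses - 3 ))   # prints 253
```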

Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-11 Thread David Rosenstrauch
Ah. That makes a bit more sense. Thanks! DR On 2017-08-11 10:41 am, Ben Kochie wrote: Kubernetes will be giving a /24 to each node, not each pod. Each node will give one IP out of that /24 to a pod it controls. This default means you can have 253 pods per node. This can of course be

Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-11 Thread David Rosenstrauch
Actually, that raises another question. The docs also specify that k8s can support up to 5000 nodes. But I'm not clear on how the networking can support that. So let's go back to that service-cluster-ip-range with the /16 CIDR. That only supports a maximum of 256 nodes. Now the maximum
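As Matthias pointed out upthread, the node count is bounded by the pod range (--cluster-cidr), not the service range. With a /24 carved out per node, the node capacity of a given cluster CIDR is 2^(24 - prefix), so reaching 5000 nodes just means allocating a wider pod CIDR; the prefixes below are examples:

```shell
# Nodes supported = 2^(24 - prefix), assuming one /24 per node.
for prefix in 16 13 11; do
  echo "/$prefix -> $(( 2 ** (24 - prefix) )) nodes"
done
# /16 -> 256 nodes, /13 -> 2048 nodes, /11 -> 8192 nodes (covers 5000)
```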