[kubernetes-users] Can't view logs in python pod/container

2017-08-09 Thread David Rosenstrauch
I'm running a python process (django server) in a pod, which writes its output to stdout, but attempting to view the logs with "kubectl logs" shows nothing. I'm similarly unable to view the logs when I run it as a standalone docker process (i.e., using "docker logs") - unless I run the docker

Re: [kubernetes-users] Can't view logs in python pod/container

2017-08-10 Thread David Rosenstrauch
Brandon Philips wrote: Hello David- Can you share the code to your app? Something about the app requires a TTY to print out logs. Alternatively, add `tty: true` to the PodSpec https://kubernetes.io/docs/api-reference/v1.7/#podspec-v1-core Brandon On Wed, Aug 9, 2017 at 9:30 AM David Rosenstrauch

Re: [kubernetes-users] Can't view logs in python pod/container

2017-08-10 Thread David Rosenstrauch
Yep, that did the trick! Thanks, DR On 2017-08-10 4:43 pm, Brandon Philips wrote: What you are doing is fine. Just do kubectl edit deployment custom-django-app and add tty: true to the podspec. I bet it will start working. On Thu, Aug 10, 2017 at 11:10 AM David Rosenstrauch wrote: The
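
For reference, a minimal sketch of the fix Brandon describes, done as a patch instead of an interactive edit. The deployment name custom-django-app comes from the thread; the container name "django" is an assumption:

  # Equivalent to "kubectl edit": add tty: true to the pod spec.
  # Container name "django" is illustrative - use the real one.
  kubectl patch deployment custom-django-app \
    -p '{"spec":{"template":{"spec":{"containers":[{"name":"django","tty":true}]}}}}'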

[kubernetes-users] k8s networking / cluster size limits confusion

2017-08-11 Thread David Rosenstrauch
According to the docs, k8s can support systems of up to 150,000 pods (across up to 5,000 nodes). (See https://kubernetes.io/docs/admin/cluster-large/) But given k8s' networking model, I'm a bit puzzled on how that would work. It seems like a typical setup is to assign a service-cluster-ip-range with a /16 CIDR. (Say 1

Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-11 Thread David Rosenstrauch
adjust depending on the size of your pods and nodes. This means that you can fully utilize the /16 for pods (minus per-node network, broadcast, gateway) On Fri, Aug 11, 2017 at 4:36 PM, David Rosenstrauch wrote: According to the docs, k8s can support systems of up to 15 pods. (See https

Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-11 Thread David Rosenstrauch
er 8 bits for the IP address of individual pods, that leaves 12 remaining bits worth of unique IP address ranges. 12 bits = 4,096 possible address ranges for nodes. How then could anyone scale up to 5000 nodes? DR On 2017-08-11 10:47 am, David Rosenstrauch wrote: Ah. That makes a bit more sen
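
To make the arithmetic concrete (masks are the thread's assumptions: a per-node /24 carved out of a wider cluster CIDR):

  # Node capacity = 2^(per-node mask - cluster mask).
  cluster_mask=12   # e.g. --cluster-cidr=10.0.0.0/12
  node_mask=24      # one /24 per node, i.e. 8 bits of pod IPs each
  echo $(( 1 << (node_mask - cluster_mask) ))   # => 4096 possible node subnets

So reaching 5,000 nodes requires either a cluster CIDR wider than /12 or fewer than 8 bits of pod IPs per node.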

Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-14 Thread David Rosenstrauch
's for the ClusterIP of services, so you can have up to 64k services in a cluster at the default setting. The range for pods is the --cluster-cidr flag on kube-controller-manager. On Fri, Aug 11, 2017 at 3:05 PM David Rosenstrauch wrote: Actually, that begs another question. The docs also specif

Re: [kubernetes-users] k8s networking / cluster size limits confusion

2017-08-14 Thread David Rosenstrauch
On 2017-08-14 12:13 pm, 'Tim Hockin' via Kubernetes user discussion and Q&A wrote: On Mon, Aug 14, 2017 at 9:03 AM, David Rosenstrauch wrote: So, for example, I have a k8s setup with 4 machines: a master, 2 worker nodes, and a "driver" machine. All 4 machines are on

[kubernetes-users] Excessive mount messages

2017-08-24 Thread David Rosenstrauch
I've noticed that the kubelets running on my nodes are generating an excessive number of log messages like the following: Aug 24 14:23:56 ip-172-31-85-245 kubelet: I0824 14:23:56.532162    2272 operation_executor.go:1073] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/956d4bc7-7e

[kubernetes-users] FQDN's for pods?

2017-09-05 Thread David Rosenstrauch
Is it possible to make Kubernetes assign fully-qualified domain names to pods at launch? I know Docker supports this using the "-h" flag (e.g., "docker run -h host1234.ourdomain.com ...") but I don't see a corresponding way to trigger that functionality in containers launched by k8s. We have

Re: [kubernetes-users] FQDN's for pods?

2017-09-05 Thread David Rosenstrauch
On 2017-09-05 5:04 pm, Brandon Philips wrote: That won't do what he wants, I don't think. $ kubectl run -i -t busybox --image=busybox --restart=Never -n team-tectonic --overrides='{ "apiVersion": "v1", "spec": {"hostname": "hello", "subdomain": "example"}}' If you don't see a command prompt, try

Re: [kubernetes-users] FQDN's for pods?

2017-09-05 Thread David Rosenstrauch
On 2017-09-05 5:39 pm, Matthias Rampke wrote: If it's checking the domain suffix, everything should work if you set the cluster domain to a subdomain of yours instead of cluster.local – then the name will be of the form <pod-ip>.<namespace>.pod.<your-domain>, no? We use this in all our clusters, but we make a custom distributi

Re: [kubernetes-users] FQDN's for pods?

2017-09-06 Thread David Rosenstrauch
On 2017-09-05 6:19 pm, 'Tim Hockin' via Kubernetes user discussion and Q&A wrote: We do not have a mechanism to express what you want to express, then. You control the cluster suffix and the subdomain, and the pod name, but even with all of those in play, the hostname comes out as `<hostname>.<subdomain>.<namespace>.svc.<cluster-suffix>`, I am
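
For concreteness, a minimal sketch of the hostname/subdomain mechanism Tim refers to (all names are illustrative). The pod becomes resolvable as host1234.example.<namespace>.svc.<cluster-domain> once a headless Service whose name matches the subdomain selects it:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo
    labels:
      app: demo
  spec:
    hostname: host1234
    subdomain: example
    containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: example    # must match the pod's subdomain
  spec:
    clusterIP: None  # headless
    selector:
      app: demo
  EOF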

Re: [kubernetes-users] FQDN's for pods?

2017-09-06 Thread David Rosenstrauch
On 2017-09-05 11:08 pm, Quinn Comendant wrote: Perhaps use a wrapper for hostname that returns a simulated hostname if called from your special program:

  #!/bin/bash
  if [[ $(ps -o comm= $PPID) == '/your/app/here' ]]; then
    echo "imitation.hostname.ourdomain.com"
  else
    /bin/hostname "$@"
  fi

Re: [kubernetes-users] FQDN's for pods?

2017-09-06 Thread David Rosenstrauch
ssible.) Thanks, DR On 2017-09-06 6:17 am, Matthias Rampke wrote: This is set via the `--cluster-domain` flag on the kubelet, as well as in the kubedns deployment. /MR On Tue, Sep 5, 2017 at 10:17 PM David Rosenstrauch wrote: That sounds like more along the lines of what I want. How do I go a
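
A sketch of the two places Matthias mentions (the domain value is illustrative, and the kubelet takes many other flags not shown):

  # kubelet flag, set on every node:
  kubelet --cluster-domain=ourdomain.com ...

  # kube-dns side: the kubedns container in the kube-dns deployment
  # takes a matching flag, e.g. --domain=ourdomain.com.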

Re: [kubernetes-users] FQDN's for pods?

2017-09-06 Thread David Rosenstrauch
On 2017-09-06 2:36 pm, Matthias Rampke wrote: TL;DR when you set the cluster domain, this should Just Work™ in Kubernetes 1.7+ but not before. That's good news! I'll start to look into us upgrading to a newer version. David – what Kubernetes version are you running? We're running v1.5.2.

Re: [kubernetes-users] FQDN's for pods?

2017-09-08 Thread David Rosenstrauch
On 2017-09-06 2:42 pm, David Rosenstrauch wrote: On 2017-09-06 2:36 pm, Matthias Rampke wrote: TL;DR when you set the cluster domain, this should Just Work™ in Kubernetes 1.7+ but not before. That's good news! I'll start to look into us upgrading to a newer version. Hmmm ... som

Re: [kubernetes-users] Error deploying the DNS Add-on

2017-09-14 Thread David Rosenstrauch
On 2017-09-14 12:43 pm, gokhan.se...@gmail.com wrote: Then, I follow the steps at https://coreos.com/kubernetes/docs/1.6.1/deploy-addons.html to deploy the DNS add-on. After that step, I see the kube-dns pod stays at ContainerCreating status forever. kube-dns-v20-htqvx   0/3   ContainerCreatin

[kubernetes-users] Container termination force pod termination?

2017-10-27 Thread David Rosenstrauch
I have a pod which runs a single container. The pod is being run under a ReplicaSet (which starts a new pod to replace a pod that's terminated). What I'm seeing is that when the container within that pod terminates, instead of the pod terminating too, the pod stays alive, and just restarts

Re: [kubernetes-users] Container termination force pod termination?

2017-10-27 Thread David Rosenstrauch
Was speaking to our admin here, and he offered that running a health check container inside the same pod might work. Anyone agree that that would be a good (or even preferred) approach? Thanks, DR On 2017-10-27 11:41 am, David Rosenstrauch wrote: I have a pod which runs a single container
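
If the goal is just automated health checking, the built-in mechanism is a livenessProbe on the container (note it restarts the failing container rather than replacing the pod, which may or may not be what's wanted here). A minimal sketch, with illustrative names, command, and timings:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: probe-demo
  spec:
    containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "touch /tmp/healthy && sleep 3600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/healthy"]   # fails once /tmp/healthy is gone
        initialDelaySeconds: 5
        periodSeconds: 10
  EOF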

Re: [kubernetes-users] Container termination force pod termination?

2017-10-27 Thread David Rosenstrauch
en if the whole pod is restarted, that problem is still there. And restarting the whole pod won't solve that. So probably my guess is not correct about what you are trying to solve. So, sorry, but can I ask again what is the problem you want to address? :) On Friday, October 27, 2017, David

Re: [kubernetes-users] Container termination force pod termination?

2017-10-27 Thread David Rosenstrauch
rting the whole pod won't solve that. So probably my guess is not correct about what you are trying to solve. So, sorry, but can I ask again what is the problem you want to address? :) On Friday, October 27, 2017, David Rosenstrauch wrote: Was speaking to our admin here, and he off

[kubernetes-users] Expose individual pods externally?

2017-10-30 Thread David Rosenstrauch
Hi. I'm having some issues migrating an (admittedly somewhat unconventional) existing system to a containerized environment (k8s) and was hoping someone might have some pointers on how I might be able to work around them. A major portion of the system is implemented using what basically are

Re: [kubernetes-users] Expose individual pods externally?

2017-10-31 Thread David Rosenstrauch
Hi Tim. Thanks much for your response, and I appreciate your suggestions. Discussion inline. On 2017-10-31 12:24 am, 'Tim Hockin' via Kubernetes user discussion and Q&A wrote: On Mon, Oct 30, 2017 at 7:56 PM, David Rosenstrauch wrote: The problem is that the way the syste

Re: [kubernetes-users] Expose individual pods externally?

2017-10-31 Thread David Rosenstrauch
On 2017-10-30 10:56 pm, David Rosenstrauch wrote: Another possible way for me to work around this problem is that I could probably eliminate the "pets" constraint I'm bumping up against if I were able to run the pods behind a customized Service/load balancer that was a bit smar

Re: [kubernetes-users] Expose individual pods externally?

2017-11-02 Thread David Rosenstrauch
On 2017-10-31 1:58 pm, 'Tim Hockin' via Kubernetes user discussion and Q&A wrote: Another option would be to use HostNetwork, and just use a random port, and self-register your replicas in whatever registry (assuming you don't care about port numbers being random). Some game servers do exactly t

Re: [kubernetes-users] How to make a container able to reach a ClusterIP:port or a Service's Public IP:NodePort?

2017-11-30 Thread David Rosenstrauch
A container can access the Kubernetes API, and query it for whatever information you need, about any k8s component. For example, I do the following in some of my containers to dynamically lookup the IP of the kube-dns service: curl -s \ --cacert /var/run/secrets/kubernetes.io/servicea
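
The snippet is cut off above; a fuller sketch of the same technique, using the standard in-pod service-account credential paths (the kube-dns lookup is the example from the post):

  # Token and CA cert are mounted into every pod at this standard path:
  TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
  curl -s \
    --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default.svc/api/v1/namespaces/kube-system/services/kube-dns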

Re: [kubernetes-users] How to put in communication two clusters in Kubernetes

2017-12-12 Thread David Rosenstrauch
On 2017-12-12 4:38 pm, Marco De Rosa wrote: The main reason is that the "web" cluster has hardware features different from the "db" cluster and I didn't find a way to have a cluster with for example one node better, in cpu and/or ram, than others. So 2 clusters to put in communication with the do

[kubernetes-users] Any way to list all ingress paths?

2018-04-12 Thread David Rosenstrauch
Is there any way to produce a comprehensive list of all the paths that are defined in the ingress controller? (And all the services they map to.) The closest thing I've found is: kubectl describe ing But that generates a lot of verbose output. Thanks, DR

Re: [kubernetes-users] Re: Any way to list all ingress paths?

2018-04-13 Thread David Rosenstrauch
, "servicePort": 443 }, "path": "/service-bar/path" } for me. You can tweak the jq expression for extra slicing and dicing. HTH Timo On Thursday, April 12, 2018 at 5:26:36 PM UTC+2, David Rosenstrauch wrote: Is there any way to produce a comprehensive list
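
Timo's full pipeline is truncated above; a sketch of the same idea, with field names per the extensions/v1beta1 Ingress schema:

  kubectl get ingress --all-namespaces -o json \
    | jq '.items[].spec.rules[].http.paths[] | {path, backend}'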

Re: [kubernetes-users] Kubernetes ingress

2018-04-27 Thread David Rosenstrauch
If you were using the nginx ingress, you would do it like this:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: test-ingress
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /data
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
  spec:
    rules:
    - http:
        paths:

[kubernetes-users] Best practice for running variants of k8s services?

2018-04-27 Thread David Rosenstrauch
We've been using Kubernetes to get a dev version of our environment up and running, and so far the experience has been great - nearly a dozen services running, and Kubernetes has made the whole process very straightforward. However, we're now looking at moving this implementation towar

Re: [kubernetes-users] Best practice for running variants of k8s services?

2018-04-30 Thread David Rosenstrauch
ssion and Q&A wrote: Does this head in the direction you want? https://github.com/kubernetes/kubectl/tree/master/cmd/kustomize On Fri, Apr 27, 2018 at 10:52 PM David Rosenstrauch wrote: We've been using Kubernetes to get a dev version of our environment up and running, and so far

Re: [kubernetes-users] Get Deployment annotation from a Kubernetes Pod

2018-05-01 Thread David Rosenstrauch
kubectl get deployment myapp -o json You could then run it through a json parser to strip out what you want. E.g.: kubectl get deployment -o json | jq -r '.items[].metadata.annotations' | less HTH, DR On 05/01/2018 01:14 PM, Kir Shatrov wrote: Hi all, I'm looking for some help with expo

Re: [kubernetes-users] Get Deployment annotation from a Kubernetes Pod

2018-05-01 Thread David Rosenstrauch
l-proxy HTH, DR On 05/01/2018 02:13 PM, David Rosenstrauch wrote: kubectl get deployment myapp -o json You could then run it through a json parser to strip out what you want. E.g.: kubectl get deployment -o json | jq -r '.items[].metadata.annotations' | less HTH, DR On 05/

Re: [kubernetes-users] Re: deployment not creating pod at all

2018-05-07 Thread David Rosenstrauch
It looks like the pods died for some reason. Try a "kubectl describe pod" and/or a "kubectl logs" on one of the 2 dead pods to see what happened. HTH, DR On 05/07/2018 09:28 AM, vidhyashankar...@gmail.com wrote: On Tuesday, 16 May 2017 01:06:20 UTC+5:30, che...@gmail.com wrote: Dear Expert

Re: [kubernetes-users] How to set up a FTP Server on GKE?

2018-05-15 Thread David Rosenstrauch
Read up on FTP passive mode and you'll understand why. In passive mode, the client first talks to the server on port 21, and then the server picks another randomly assigned port to listen on which the client then communicates with going forward. Kubernetes (and docker) won't have that randoml
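
One common workaround (a sketch only; vsftpd is assumed as the server, and the ports/address are illustrative) is to pin the passive data-port range so it can be declared explicitly in the pod spec and Service alongside port 21:

  # In the FTP server image, pin the passive range and advertised address:
  cat >> /etc/vsftpd.conf <<'EOF'
  pasv_enable=YES
  pasv_min_port=30000
  pasv_max_port=30009
  pasv_address=203.0.113.10
  EOF
  # ...then expose 21 and 30000-30009 on the pod/Service.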

Re: [kubernetes-users] How do pods communicate?

2018-05-16 Thread David Rosenstrauch
It all depends on how you have your volume mounts set up. If your pods are just mounting local storage, then they won't be able to see each other's files. However, there are several options for having multiple pods mount the same shared folder: NFS, GlusterFS, Ceph, etc. Read here: https://
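
A minimal sketch of the NFS option (server address and export path are placeholders); any pod mounting the same export sees the same files:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: nfs-demo
  spec:
    containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: shared
        mountPath: /shared
    volumes:
    - name: shared
      nfs:
        server: nfs.example.com
        path: /exports/shared
  EOF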

Re: [kubernetes-users] http redirect to https/

2018-05-20 Thread David Rosenstrauch
On 2018-05-20 6:28 am, sh...@teclaone.com wrote: Completed the setup of phpMyAdmin, WordPress, MySQL, and Nginx, and it is fine, load balancing with http. Now I am having issues with http redirecting to https and where to connect my existing ssl certificates. Are you saying that it's automatically

[kubernetes-users] How to monitor/alert on container/pod death or restart

2018-08-08 Thread David Rosenstrauch
As we're getting ready to go to production with our k8s-based system, we're trying to pin down exactly how we're going to do all the needed monitoring/alerting for it. We can easily collect many of the metrics we need (using kube-state-metrics to feed into prometheus, and/or Datadog) and alert

Re: [kubernetes-users] How to monitor/alert on container/pod death or restart

2018-08-08 Thread David Rosenstrauch
a container or pod. Any pointers on how I might go about setting up an alert like that? Thanks, DR On 08/08/2018 04:45 PM, Marcio Garcia wrote: Hi David, You can use DataDog to achieve this. On 8/8/18, David Rosenstrauch wrote: As we're getting ready to go to production with our k8s
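
One way to catch restarts with the kube-state-metrics + Prometheus setup mentioned above is to alert on the restart counter. A sketch only: the metric name is the real kube-state-metrics one, but the window, labels, and file name are illustrative:

  cat > pod-restart-rules.yml <<'EOF'
  groups:
  - name: pod-restarts
    rules:
    - alert: ContainerRestarting
      expr: increase(kube_pod_container_status_restarts_total[15m]) > 0
      labels:
        severity: warning
      annotations:
        summary: "{{ $labels.namespace }}/{{ $labels.pod }} restarted recently"
  EOF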

Re: [kubernetes-users] Re: Autoscale volume and pods simultaneously

2018-09-06 Thread David Rosenstrauch
FWIW, I recently ran into a similar issue, and the way I handled it was to have each of the pods mount an NFS shared file system as a PV (AWS EFS, in my case) and have each pod write its output into a directory on the NFS share. The only issue then is just to make sure that each pod writes it'
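
One way to keep each pod's output separate on the shared mount is to derive the directory from the pod name via the downward API. A sketch with illustrative names and NFS details:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: writer-demo
  spec:
    containers:
    - name: app
      image: busybox
      # Each pod writes under a directory named after itself.
      command: ["sh", "-c", "mkdir -p /shared/$POD_NAME && sleep 3600"]
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      volumeMounts:
      - name: shared
        mountPath: /shared
    volumes:
    - name: shared
      nfs:
        server: nfs.example.com
        path: /exports/shared
  EOF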

Re: [kubernetes-users] Re: Autoscale volume and pods simultaneously

2018-09-07 Thread David Rosenstrauch
On 9/6/18 11:38 PM, 'Tim Hockin' via Kubernetes user discussion and Q&A wrote: On Thu, Sep 6, 2018, 3:33 PM David Rosenstrauch wrote: FWIW, I recently ran into a similar issue, and the way I handled it was to have each of the pods mount an NFS shared file system as a PV (