[kubernetes-users] Re: nginx ingress controller not routing https

2017-04-14 Thread Warren Strange


I have the same issue.

Possibly https://github.com/kubernetes/ingress/issues/277  ?



On Friday, April 14, 2017 at 9:04:23 AM UTC-6, Daniel Watrous wrote:
>
> I am using the nginx ingress controller on two k8s clusters. On one the 
> HTTPS works as expected, but on the other HTTPS traffic always routes to 
> the default 404 backend. I'm not sure how to troubleshoot this.
>
> I have the TLS secret setup and the ingress references it. The ingress 
> controller does serve up https, but only the default 404 backend. A few 
> lines from the ingress controller logs:
>
> 127.0.0.1 - [127.0.0.1] - - [14/Apr/2017:02:15:15 +0000] "GET /login?from=%2F HTTP/2.0" 404 142 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 29 0.002 [upstream-default-backend] 10.200.46.5:8080 21 0.002 404
> ::ffff:10.200.41.0 [14/Apr/2017:02:15:16 +0000] TCP [] [nginx-ssl-backend] 200 0 0 0.025
> ::ffff:10.200.35.0 [14/Apr/2017:02:15:03 +0000] TCP [jenkins.brdos1.k8s-dev.company.com] [nginx-ssl-backend] 200 215 51 0.059
> ::ffff:10.200.35.0 - [::ffff:10.200.35.0] - - [14/Apr/2017:15:00:11 +0000] "GET /login?from=%2F HTTP/1.1" 200 1826 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 471 0.008 [shared-tools-jenkins-service-8080] 10.200.33.2:8080 1814 0.008 200
>
> The first log represents a valid request to the correct route, but it's 
> returning a 404.
> I'm not sure what triggers the next two lines, but the third line is the 
> route that I would expect to serve my application (
> jenkins.brdos1.k8s-dev.company.com).
> The fourth line shows a call to the same ingress route, but over http. 
> This does serve my application. Here's my ingress.yaml. Any ideas?
>
> apiVersion: extensions/v1beta1
> kind: Ingress
> metadata:
>   name: jenkins-ingress
>   namespace: shared-tools
> spec:
>   tls:
>   - hosts:
>     - jenkins.brdos1.k8s-dev.company.com
>     secretName: jenkins-secret
>   rules:
>   - host: jenkins.brdos1.k8s-dev.company.com
>     http:
>       paths:
>       - backend:
>           serviceName: jenkins-service
>           servicePort: 8080
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


[kubernetes-users] Re: Abstracting environmental differences in config files

2017-03-09 Thread Warren Strange

Helm works nicely for this.  

You can template out things like the registry, image names, etc., and then 
have a small  values.yaml file that sets the values for the target 
platform. 
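
As a sketch (file and value names here are hypothetical), the chart templates
reference values that each environment overrides:

```yaml
# values-dev.yaml (hypothetical) -- overrides for the dev cluster
registry: registry.dev.local:5000
image: myapp
tag: latest
logTarget: console

# In the chart's deployment template, the image line is assembled from
# these values, e.g.:
#   image: {{ .Values.registry }}/{{ .Values.image }}:{{ .Values.tag }}
```

You would then install with something like `helm install ./mychart -f
values-dev.yaml`, swapping in `values-prod.yaml` for production.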





On Wednesday, March 8, 2017 at 11:56:45 PM UTC-7, ag...@jantox.com wrote:
>
> I am currently running a Kubernetes cluster on my local dev environment. 
> It's been a pretty magical experience so far, but I have a few questions 
> about config files as I'm getting ready to deploy to production.
>
> My main concern boils down to: is there any form of indirection for 
> specifying configuration in development vs. production? For example, I use 
> a private registry in dev (local) and prod, and in each environment I want 
> it to pull from the correct registry. I could just keep the same URL and 
> use /etc/hosts to redirect the dev registry to the local one, but I feel 
> like that's a weak solution. Another example would be the command I run for 
> a container in dev and prod are different (e.g. "--dev --log-to-console" vs 
> "--prod --log-to-file"). Again, I could probably get away with using 
> configmaps w/ environmental variables to hold the entire command and 
> interpolating it into the container command, but it feels kind of janky to 
> do that.
>
> If I recall correctly, there was some proposal for a Kubernetes templating 
> system for the configurations themselves, but that's not a hard feature yet.
>
> My solution for now is to create one configuration for dev and one for 
> production to account, each pointing to the correct registry. But a large 
> chunk of the files are the same, and I have to make sure everything else is 
> consistent.
>
> Have you guys dealt with this and if so, could you offer some suggestions 
> on how you tackled it?
>
>



[kubernetes-users] Re: Application is slow in non production env

2017-03-13 Thread Warren Strange

I think you need to provide more information for people to help you out.

Are nodes in the cluster running out of memory or CPU resources? Is disk 
I/O slow?  

If you want to increase the
size of the nodes (more CPU, memory) the procedure will depend on the 
Kubernetes environment you are deploying to. 
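
As a generic starting point (standard kubectl commands; `kubectl top` needs
Heapster or a metrics pipeline installed in the cluster):

```shell
# Is any node short on CPU or memory?
kubectl top nodes

# Per-node detail: allocated resources, pressure conditions, recent events
kubectl describe node <node-name>

# Per-pod usage in the slow environment's namespace
kubectl top pods --namespace <your-namespace>
```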



On Sunday, March 12, 2017 at 8:13:52 AM UTC-6, mathew...@gmail.com wrote:
>
>
> The Spring Boot application in the non-prod env is slow. We have not set any 
> memory or CPU limits.
>
> Could you explain how to increase the CPU and memory capacity of the existing 
> Node/Cluster/Container in order to increase the processing speed for the 
> non-production environment?
>
> Please note the application is running as expected in the prod environment.
>
>



[kubernetes-users] Re: local LoadBalancer/Service in Minikube

2017-03-06 Thread Warren Strange

If you are using VirtualBox with minikube, it will mount the /Users/xxx/ 
folder in the VM.

You can use a hostPath volume to mount a local folder on your Mac onto a 
pod volume. 

hostPath:
  path: /Users/my-username/Downloads/example
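
In context, a minimal pod spec using such a volume might look like this
(the pod, container, and mount names are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: local-folder
      mountPath: /data            # where the Mac folder appears in the container
  volumes:
  - name: local-folder
    hostPath:
      path: /Users/my-username/Downloads/example
```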



On Friday, March 3, 2017 at 7:22:32 PM UTC-7, Imran Akbar wrote:
>
> Figured it out - the Pod was crashing after the service tried to start. 
>  Once I fixed that everything worked.
>
> But I still can't figure out how to do hot reload of code locally.
> I have to delete and re-create the deployment for it to pick up the latest 
> code that's mounted to the volume via hostPath.
> Is there any way to have Kubernetes share the folder live?
>
> thanks
>
> On Friday, March 3, 2017 at 12:55:27 PM UTC-8, Imran Akbar wrote:
>>
>> Hi,
>>
>> I'm trying to expose my Deployment to a port which I can access through 
>> my local computer via Minikube.
>>
>> I have tried two YAML configurations (one a load balancer, one just a 
>> service exposing a port).
>> I: http://pastebin.com/gL5ZBZg7
>> II: http://pastebin.com/sSuyhzC5
>>
>> The deployment and the docker container both expose port 8000.
>>
>> The first results in a service with a port which never finishes, and the 
>> external IP never gets assigned.
>> The second results in a port of bot:8000 TCP, bot:0 TCP in my dashboard 
>> and when I try "minikube service bot" nothing happens.
>>
>> I am on Mac OS X.
>>
>> How can I set this up properly?
>>
>> thanks,
>> imran
>>
>



[kubernetes-users] Re: SSH into pod

2017-08-11 Thread Warren Strange

Rather than setting up ssh, it may be easier to use kubectl exec to get a 
shell inside your pod:

kubectl exec -it my-pod-xxx -- /bin/sh 
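
A couple of common variations (standard kubectl flags):

```shell
# If the pod has more than one container, pick one with -c
kubectl exec -it my-pod-xxx -c my-container -- /bin/sh

# If the image has no /bin/sh, run a one-off command instead of a shell
kubectl exec my-pod-xxx -- ps aux
```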



On Friday, August 11, 2017 at 12:11:30 AM UTC-6, Eswari wrote:
>
> Hi,
>
> I have exposed my pod externally (public ip).
> Tried to ssh to my pod using *ssh root@Pod_PublicIP *from my local linux 
> box 
>
> But I am unable to take ssh.
> Is it possible to take ssh to my pod 
>
>
>



[kubernetes-users] Re: SSH into pod

2017-08-14 Thread Warren Strange

Yes, it is possible, but it is not recommended.  Here is an older article 
that discusses the issues:

https://jpetazzo.github.io/2014/06/23/docker-ssh-considered-evil/ 


If you *really* need to do this, you must enable sshd in the container, and 
create a kubernetes service to reach it. You will want to read up on 
services:

https://kubernetes.io/docs/concepts/services-networking/service/ 
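
If you do go down that road, the Service would look roughly like this
(names, labels, and the service type are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-ssh-service
spec:
  type: LoadBalancer      # or NodePort if you don't need a cloud load balancer
  selector:
    app: my-sshd-pod      # must match the labels on the pod running sshd
  ports:
  - port: 22
    targetPort: 22
```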



On Monday, August 14, 2017 at 12:48:32 AM UTC-6, eswar...@gmail.com wrote:
>
> Hi Warren Strange,
>
> Thanks for the reply.
>
> Yes, but we can only use that command where kubectl is installed.
> I need to ssh from my local Linux machine, which doesn't have 
> kubectl.
>
> Is it possible?
>
>  
>
>



Re: [kubernetes-users] Critical pod kube-system_elasticsearch-logging-0 doesn't fit on any node.

2017-07-04 Thread Warren Strange

Sometimes it takes a while for PVs to be provisioned - so this error often 
goes away if you give it time.  If the PVC eventually gets bound, this is 
probably not the issue.


It looks like you are running out of memory or CPU. Running kubectl describe on 
the pending pod should tell you which. You either need to add more capacity 
(more nodes, or bigger nodes) or adjust the pod's resource requests downward.
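
For reference, requests and limits are set per container in the pod spec;
the values below are only illustrative:

```yaml
containers:
- name: elasticsearch
  image: elasticsearch
  resources:
    requests:
      cpu: "500m"      # the scheduler needs a node with this much free
      memory: "1Gi"
    limits:
      cpu: "1"
      memory: "2Gi"
```

Lowering the requests makes the pod easier to schedule, at the cost of less
guaranteed headroom.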



On Saturday, July 1, 2017 at 3:19:55 PM UTC-4, Norman Khine wrote:
>
> inspecting the logs, it seems on 1.7.0 i am not able to bind volumes and 
> have this error:
>
> PersistentVolumeClaim is not bound: 
> "es-persistent-storage-elasticsearch-logging-0" (repeated 6 times)
>
> is it an issue with my setup of the cluster or is there a change i have 
> missed?
>
> On 1 July 2017 at 17:16, Norman Khine wrote:
> >
> > I have just setup a k8s 1.7.0, but get this error Critical pod 
> kube-system_elasticsearch-logging-0 doesn't fit on any node. and therefore 
> >
> > ➜  tack git:(develop) ✗ kubectl get pods --all-namespaces
> > NAMESPACE     NAME                                                                READY  STATUS    RESTARTS  AGE
> > default       busybox                                                             1/1    Running   1         1h
> > kube-system   cluster-autoscaler-2018616338-4sndj                                 1/1    Running   0         1h
> > kube-system   elasticsearch-logging-0                                             0/1    Pending   0         1h
> > kube-system   fluentd-6n248                                                       1/1    Running   0         1h
> > kube-system   fluentd-c0jw0                                                       1/1    Running   0         1h
> > kube-system   fluentd-srpf8                                                       1/1    Running   0         1h
> > kube-system   fluentd-wf52g                                                       1/1    Running   0         1h
> > kube-system   fluentd-z3m7w                                                       1/1    Running   0         1h
> > kube-system   heapster-v1.3.0-634771249-xz51g                                     2/2    Running   0         1h
> > kube-system   kibana-logging-3751581462-f8j2k                                     1/1    Running   0         1h
> > kube-system   kube-apiserver-ip-10-0-10-10.eu-west-2.compute.internal             1/1    Running   0         1h
> > kube-system   kube-apiserver-ip-10-0-10-11.eu-west-2.compute.internal             1/1    Running   0         1h
> > kube-system   kube-apiserver-ip-10-0-10-12.eu-west-2.compute.internal             1/1    Running   0         1h
> > kube-system   kube-controller-manager-ip-10-0-10-10.eu-west-2.compute.internal    1/1    Running   0         1h
> > kube-system   kube-controller-manager-ip-10-0-10-11.eu-west-2.compute.internal    1/1    Running   0         1h
> > kube-system   kube-controller-manager-ip-10-0-10-12.eu-west-2.compute.internal    1/1    Running   0         1h
> > kube-system   kube-dns-2255216023-2x13h                                           3/3    Running   0         1h
> > kube-system   kube-dns-2255216023-k1m0m                                           3/3    Running   0         52m
> > kube-system   kube-dns-autoscaler-3587138155-dkx79                                1/1    Running   0         1h
> > kube-system   kube-proxy-ip-10-0-10-10.eu-west-2.compute.internal                 1/1    Running   0         1h
> > kube-system   kube-proxy-ip-10-0-10-11.eu-west-2.compute.internal                 1/1    Running   0         1h
> > kube-system   kube-proxy-ip-10-0-10-12.eu-west-2.compute.internal                 1/1    Running   0         1h
> > kube-system   kube-proxy-ip-10-0-10-20.eu-west-2.compute.internal                 1/1    Running   0         1h
> > kube-system   kube-proxy-ip-10-0-10-247.eu-west-2.compute.internal                1/1    Running   0         1h
> > kube-system   kube-rescheduler-2136974456-ldn6z                                   1/1    Running   0         1h
> > kube-system   kube-scheduler-ip-10-0-10-10.eu-west-2.compute.internal             1/1    Running   0         1h
> > kube-system   kube-scheduler-ip-10-0-10-11.eu-west-2.compute.internal             1/1    Running   0         1h
> > kube-system   kube-scheduler-ip-10-0-10-12.eu-west-2.compute.internal             1/1    Running   0         1h
> > kube-system   kubernetes-dashboard-2227282072-xd27l                               1/1    Running   0         52m
> >
> >
> >
> > my instance type is `t2.large` and i am on AWS, kubernetes 1.6.6 works 
> fine.
> >
> > any 

[kubernetes-users] Re: Information Sharing; Distributed Computing

2017-06-22 Thread Warren Strange

DNS is used for service name lookup, but there is no shared memory between 
pods. 
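
For example, any pod can resolve a Service by name through the cluster DNS;
state shared between pods has to go through such a service (or an external
store), not shared memory:

```shell
# From inside any pod: look up a Service named "my-service" in namespace "default"
nslookup my-service.default.svc.cluster.local
```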

On Thursday, June 22, 2017 at 9:57:52 AM UTC-6, Tobias Rahloff wrote:
>
> Can sb point me towards sources that explain how information sharing in 
> k8s works? Especially in a academic, distributed computing sense.
>
> If I understand correctly, Pods/Clusters use DNS to distribute traffic 
> between dynamically scaled containers and have some kind of shared memory? 
> Kind Regards 
>



Re: [kubernetes-users] Docker vs K8s API

2017-06-02 Thread Warren Strange

To echo what Matthias has said, Kubernetes is doing a lot more housekeeping 
work behind the scenes than just docker run. 

You can see Kube is adding about one second of overhead. If your containers 
typically run for just a second or two, that is probably not a good fit 
for Kube (or even Docker, for that matter). Some kind of worker model 
would be a better approach.



On Friday, June 2, 2017 at 8:41:23 AM UTC-6, Diego Lapiduz wrote:
>
> Thanks Matthias, yes this is a bit of a problem for my use case. Ideally 
> I'd like to run a fresh container every time that is why I am trying to do 
> this.
> The weird thing is that Kubernetes is the backend in both cases, I am just 
> using the Docker API in one case and the kubernetes one in the other. Not 
> sure what the difference is but there should be a way to replicate what 
> docker run is doing with kubectl (or their API equivalents).
>
> On Fri, Jun 2, 2017 at 9:38 AM, Matthias Rampke wrote:
>
>> Is this difference a problem for your use case? Kubernetes does do more 
>> work before a pod starts. If you need low-latency execution you'll have to 
>> use long-running worker processes of some form. Once it's started, it 
>> should be just as fast.
>>
>> On Fri, Jun 2, 2017 at 2:13 PM Diego Lapiduz wrote:
>>
>>> This is a very short lived task (just runs `echo hello`) so the only 
>>> thing I really care about is getting that `hello` back.
>>>
>>> This is what I did in text format:
>>>
>>> ```
>>> ➜  eval $(minikube docker-env)
>>>
>>> ➜  time docker run dlapiduz/hello-world
>>>
>>> hello
>>> docker run dlapiduz/hello-world  0.07s user 0.02s system 30% cpu 0.318 
>>> total
>>> ➜  time kubectl run --image=dlapiduz/hello-world test --attach 
>>> --restart=Never
>>> Waiting for pod default/test to be running, status is Pending, pod 
>>> ready: false
>>> hello
>>> kubectl run --image=dlapiduz/hello-world test --attach --restart=Never 
>>>  0.09s user 0.02s system 7% cpu 1.357 total
>>> ```
>>>
>>> On Thu, Jun 1, 2017 at 11:27 PM, 'Tim Hockin' via Kubernetes user 
>>> discussion and Q&A wrote:
>>>
 It runs faster or it starts faster?  The gif clear too quickly for me 
 to see.

 On Thu, Jun 1, 2017 at 9:09 PM, Diego Lapiduz wrote:

> Hi y'all, (k8s noob here so forgive me if this is something that I am 
> doing obviously wrong)
>
> I am trying to run a short lived task and I am trying to move from 
> Docker Swarm to Kubernetes. An interesting issue that I am finding is 
> that 
> running the same Docker image on the same minikube cluster is much faster 
> using the docker cli tool (or docker api) than kubectl.
>
> Here is a quick screen cap: I know that I am probably doing something 
> off, any idea what it could be? Thanks!
>
>
>
> 
>
>
>
[kubernetes-users] Re: using image from local directory

2017-09-20 Thread Warren Strange


ImagePullBackOff means that Kubernetes cannot find (or pull) the image. 

You built the image as library/app-agentk, but your deployment references:
 image: agentc

The image reference in the deployment must match the tag you built, e.g.:

 image: library/app-agentk:latest

This also assumes you ran "docker build" against the same docker 
daemon that your Kubernetes cluster is using. 
If you are using minikube, you must make sure you are pointing at the right 
docker daemon:

eval $(minikube docker-env)
docker build -t library/app-agentk .
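
One more assumption worth making explicit: for a locally built image that
exists only in the node's docker daemon, the pull policy must not force a
registry pull. A sketch of the relevant container fragment:

```yaml
containers:
- name: agentk1
  image: library/app-agentk:latest
  imagePullPolicy: IfNotPresent   # use the local image; don't pull from a remote registry
```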


On Wednesday, September 20, 2017 at 2:20:58 AM UTC-6, paperless wrote:
>
> I have created an image using Docker command 
>  docker build -t library/app-agentk .
>
> Then I have config file 
>
> apiVersion: apps/v1beta1
> kind: Deployment
> metadata:
>   name: agent-deployment
> spec:
>   replicas: 1 # tells deployment to run 1 pod matching the template
>   template: # create pods using pod definition in this template
> metadata:
>   # unlike pod-nginx.yaml, the name is not included in the meta data 
> as a unique name is
>   # generated from the deployment name
>   labels:
> app: agentk
> spec:
>   containers:
>   - name: agentk1
> image: agentc
>
> I use following command to run the image. 
>
> kubectl apply -f deployment.yaml
>
>
> I get the following error on pod agent-deployment-4152082668-kx39q:
>
> Waiting: ImagePullBackOff
> Failed to pull image "agentc": rpc error: code = 2 desc = Error: image 
> library/agentc:latest not found
> Error syncing pod
>



[kubernetes-users] Re: Low throughput on K8s LoadBalancer

2017-09-19 Thread Warren Strange

Debugging performance issues on Docker/Kube can be interesting.

You could try exposing the service through a NodePort, and run your 
benchmark directly against the node IP. That would at least tell you whether 
the GKE load balancer is a factor. 

Also: are your pods possibly CPU or memory limited (i.e., have you 
explicitly set resource limits, making Kube throttle your pods)?


Please share your findings!
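
A rough way to run that comparison (service and port names are placeholders):

```shell
# Expose the deployment on a NodePort
kubectl expose deployment tomcat --type=NodePort --port=8080 --name=tomcat-nodeport

# Find the assigned node port (in the 30000-32767 range)
kubectl get service tomcat-nodeport

# Benchmark a node's IP directly, bypassing the GKE load balancer
ab -n 10000 -c 100 http://<node-ip>:<node-port>/
```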


On Tuesday, September 19, 2017 at 12:25:05 AM UTC-6, Vinoth Narasimhan 
wrote:
>
> Environment:
>
> Kubernetes version (use kubectl version):
>  kubectl version
> Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", 
> GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", 
> BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc", 
> Platform:"linux/amd64"}
> Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9", 
> GitCommit:"a3d1dfa6f433575ce50522888a0409397b294c7b", GitTreeState:"clean", 
> BuildDate:"2017-08-23T16:58:45Z", GoVersion:"go1.7.6", Compiler:"gc", 
> Platform:"linux/amd64"}
>
> Cloud provider or hardware configuration**:
>
> Google Container Engine.
>
> What happened:
>
> We are in testing phase of springboot based microservice deployment on 
> GKE. During testing QA filed a performance issue , stats that the 
> throughput of the service in k8s is low when compared to run the java app in
>
> java -jar method
> docker run
> For testing i skip those springboot stuff and take native tomcat home page 
> as the test bed for the "ab" testing.
>
> The test setup looks like:
>
> Create an 8cpu/30Gig RAM ubuntu server in GCP, install native 
> tomcat-8.5.20 (port 80) and test the home page.
>
> Stop the native tomcat. Create the docker tomcat instances on the same 
> host and test the same home page.
> The docker version is: Version: 17.06.2-ce
>
> Create the 3 node K8s cluster 1.6.9. Run the tomcat deployment the same 
> 8.5.20 and expose the service through LB and test the same home page.
>
> I install the ab tool in other GCP instances and hit the above 3 different 
> endpoints.
>
> What's the Result:
>
> The first 2 test with native tomcat and docker run the throughput i got is 
> nearly 8k Req/sec on avg on different request/concurrent level.
>
> But the same on K8s LB the throughput i got on the average of 2k req/sec 
> on avg on different request/concurrency level.
>
> Is this something am i missing on the test. Or this is how the GKE LB 
> store and forward the request at this rate.
>



Re: [kubernetes-users] Multiple version of software on same namespace

2017-10-31 Thread Warren Strange

What you are describing is a really good use case for namespaces.

If you really want to deploy multiple instances to the same namespace, you 
could have a look at how some of the Helm charts do this.  

Some charts use dynamic labels (e.g. app={{ .Release.Name }}) to 
distinguish multiple instances of the same application. You 
can "helm install" many copies, with each one getting unique labels that 
can be used by your service label selectors.
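
Concretely, a chart's service template might carry the release name in its
selector, so each "helm install" gets its own routing (a sketch; the
service and label names are illustrative):

```yaml
# templates/service.yaml in a hypothetical chart
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-servicea
spec:
  selector:
    app: servicea
    release: {{ .Release.Name }}   # distinguishes the v1.1 install from the trunk install
  ports:
  - port: 80
```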

 



On Tuesday, October 31, 2017 at 8:08:54 AM UTC-6, rgonc...@gmail.com wrote:
>
> On Tuesday, October 31, 2017 at 7:41:45 AM UTC, Rodrigo Campos wrote:
> > On Mon, Oct 30, 2017 at 03:34:00AM -0700, rgonc...@gmail.com wrote:
> > > I'm trying to figure out what's the best approach to deploy multiple 
> versions of the same software in kubernetes without relying on namespaces. 
> According to the docs: 
> > > 
> > > "It is not necessary to use multiple namespaces just to separate 
> slightly different resources, such as different versions of the same 
> software: use labels to distinguish resources within the same namespace."
> > > 
> > > The only way (that I know of) to separate multiple versions of same 
> software on the same namespace is naming services in accordance to software 
> version, adjust the selector field and tag pods appropriately. This has 
> maintenance overhead and I'm required to reference services with a 
> different name according to the desired version. I don't think this is a 
> solution.
> > 
> > Why not? What is the problem you want to solve?
> > 
> > > 
> > > I don't see any other way besides using namespaces. What am I missing 
> something?
> > 
> > I think services is the way to do it, with labels on deplyoments, but I 
> might be
> > missing the details of what you want to do. Can you pelase elaborate?
>
> Basically I want to be able to deploy two distinct versions of the same 
> software on the same kubernetes cluster. This is a cluster used for 
> development and it's usual to have multiple versions of the same software 
> (ex. maintenance version and evolution version).
>
> I can create one namespace for each version, but I'd rather not because 
> I'd like to limit resource usage per software, not per software version. 
> But without using namespaces, the only way (like I said, that I'm aware 
> of), is to codify the version of the service on the name (serviceA-v1.1; 
> serviceA-trunk; etc...). Definitely I don't want to do that. Doing that 
> implies changing kubernetes deployment descriptors, clients must change the 
> service they reference, etc... not a good idea.
>
> I'm asking because the statement on the docs saying the using namespaces 
> it's not necessary to separate multiple versions of the same software. 
> However I think it is.
>
>
>



[kubernetes-users] Re: Alternative to Dind (docker-in-docker) sidecar container

2018-05-06 Thread Warren Strange

This is likely a better bet than DIND: 
https://github.com/GoogleContainerTools/kaniko 


On Saturday, May 5, 2018 at 9:02:08 AM UTC-6, Sudha Subramanian wrote:
>
> Hi,
>
> I have a use case where my application container needs to pull a build 
> image and run code inside of it. I'm considering using a DIND sidecar 
> container and have the outer container run docker commands within the 
> sidecar.  Requests for builds are queued in RabbitMq and gets consumed by 
> my application container. 
>
> I'm wondering if there is a better option using K8 Jobs insead. Is there a 
> way I can dynamically launch a POD from a container running in a different 
> POD?
>
> Thanks,
> Sudha
>



Re: [kubernetes-users] Measuring disk latency on GKE

2018-02-02 Thread Warren Strange

That worked great - thanks for the clue stick. 

On Friday, February 2, 2018 at 12:22:53 AM UTC-7, Ahmet Alp Balkan wrote:
>
> You can have CoreOS toolbox on GKE COS nodes: 
> https://cloud.google.com/container-optimized-os/docs/how-to/toolbox
>
> Just type "toolbox" when you SSH into the node.
>
> On Thu, Feb 1, 2018 at 8:20 PM, Warren Strange wrote:
>
>>
>>
>> Stackdriver will show me disk IOPS and throughput for PD disks.
>>
>> How do I measure disk latency?   I have a suspicion that a service is 
>> slow because of latency (my PD disks are operating well below their 
>> potential IOPS).
>>
>> iostat does not seem to be installed on the COS nodes. 
>>
>> Suggestions welcome
>>
>>



[kubernetes-users] Measuring disk latency on GKE

2018-02-01 Thread Warren Strange


Stackdriver will show me disk IOPS and throughput for PD disks.

How do I measure disk latency?   I have a suspicion that a service is slow 
because of latency (my PD disks are operating well below their potential 
IOPS).

iostat does not seem to be installed on the COS nodes. 

Suggestions welcome




[kubernetes-users] Re: Is it possible to pool resources across hosts/nodes like VMware does

2018-02-14 Thread Warren Strange

AFAIK you can not split a pod between more than one node. 

I know nothing about VMware, but I am guessing they can split VM processes 
across nodes, which is pretty much equivalent to what Kubernetes does with 
pods (VM process == a pod, roughly speaking).



On Wednesday, February 14, 2018 at 8:04:30 PM UTC-7, chez wrote:
>
> Folks,
> Looks like VMware with vsphere (and vcenter?) is able to allocate 
> resources (vcpu for instance) across hosts for a single VM ? Is this 
> possible with kubernetes for containers ?
> Can kubernetes pool vcpu between multiple hosts/nodes for one container ?
>
> https://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.intro.doc_41/c_hosts_clusters_and_resource_pools.html
>
> I am really intrigued by this statement - 
> "You can dynamically change resource allocation policies. For example, at 
> year end, the workload on Accounting increases, and which requires an 
> increase in the Accounting resource pool reserve of 4GHz of power to 6GHz. 
> You can make the change to the resource pool dynamically without shutting 
> down the associated virtual machines."
>
> Each physical host is 4Ghz, but this doc says it can pull 2Ghz out of the 
> second host. Is it because of ESXi ?
>
> thanks
>
>
