Re: [kubernetes-users] Can we create a Kubernetes China user channel on slack?

2017-12-20 Thread Du Jun
+1

2017-12-21 9:00 GMT+08:00 :

> There are thousands of Kubernetes users in China, but most of them are
> scattered across different WeChat groups, and there is no single place to
> bring them together. I see there are up-users, fr-users, and event channels
> etc. in Slack. Can we create cn-users and cn-events channels? I don't have
> permission to create them; who can help me? I would be very glad to operate
> this channel, as I have organized about 900 active Kubernetes users in China.
>



[kubernetes-users] Can we create a Kubernetes China user channel on slack?

2017-12-20 Thread rootsongjc
There are thousands of Kubernetes users in China, but most of them are scattered 
across different WeChat groups, and there is no single place to bring them 
together. I see there are up-users, fr-users, and event channels etc. in Slack. 
Can we create cn-users and cn-events channels? I don't have permission to create 
them; who can help me? I would be very glad to operate this channel, as I have 
organized about 900 active Kubernetes users in China.



[kubernetes-users] Unable to setup service DNS in Kubernetes cluster

2017-12-20 Thread akaashmalo
I am setting up DNS for Kubernetes services for the first time and came across 
SkyDNS. Following the documentation, my skydns-svc.yaml file is:

apiVersion: v1
kind: Service
spec:
  clusterIP: 10.100.0.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP


And my skydns-rc.yaml file is:

apiVersion: v1
kind: ReplicationController
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v18
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        version: v18
    spec:
      containers:
      - args:
        - --domain=kube.local
        - --dns-port=10053
        image: gcr.io/google_containers/kubedns-amd64:1.6
        imagePullPolicy: IfNotPresent
        name: kubedns
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        terminationMessagePath: /dev/termination-log
      - args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        imagePullPolicy: IfNotPresent
        name: dnsmasq
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      - args:
        - -cmd=nslookup kubernetes.default.svc.kube.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.kube.local 127.0.0.1:10053 >/dev/null
        - -port=8080
        - -quiet
        image: gcr.io/google_containers/exechealthz-amd64:1.0
        imagePullPolicy: IfNotPresent
        name: healthz
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi


Also, on my minions I updated the 
/etc/systemd/system/multi-user.target.wants/kubelet.service file and added the 
following under the ExecStart section:

ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS \
--cluster-dns=10.100.0.100 \
--cluster-domain=kubernetes \
Having done all of this and successfully brought up the RC and SVC:

[root@kubernetes-master DNS]# kubectl get po | grep dns
kube-dns-v18-hl8z6   3/3       Running   0          6s
[root@kubernetes-master DNS]# kubectl get svc | grep dns
kube-dns     10.100.0.100   53/UDP,53/TCP   20m
This is all that I have from a config standpoint. Now, in order to test my 
setup, I deployed a busybox pod and ran an nslookup:

[root@kubernetes-master DNS]# kubectl get svc | grep kubernetes
kubernetes  10.100.0.1   443/TCP 

[root@kubernetes-master DNS]# kubectl exec busybox -- nslookup kubernetes
nslookup: can't resolve 'kubernetes'
Server:    10.100.0.100
Address 1: 10.100.0.100



Going through the logs, I see something that might explain why this is not 
working:

kubectl logs $(kubectl get pods -l k8s-app=kube-dns -o name) -c kubedns
.
.
.
E1220 17:44:48.403976   1 reflector.go:216] pkg/dns/dns.go:154: Failed to 
list *api.Endpoints: Get 
https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load 
system roots and no roots provided
E1220 17:44:48.487169   1 reflector.go:216] pkg/dns/dns.go:155: Failed to 
list *api.Service: Get 
https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load 
system roots and no roots provided
I1220 17:44:48.487716   1 dns.go:172] Ignoring error while waiting for 
service default/kubernetes: Get 
https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: 
failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.410311   1 reflector.go:216] pkg/dns/dns.go:154: Failed to 
list *api.Endpoints: Get 
https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load 
system roots and no roots provided
I1220 17:44:49.492338   1 dns.go:172] Ignoring error while waiting for 
service default/kubernetes: Get 

Re: [kubernetes-users] Re: GKE - HPA scaling based on custom metrics.

2017-12-20 Thread Bibin Wilson
Thanks Beata for the reply!

On Wed, Dec 20, 2017 at 8:32 PM, 'Beata Skiba' via Kubernetes user
discussion and Q&A wrote:

> Custom Metrics are unfortunately not available on GKE 1.8.4; they will be
> available with 1.9.0.
>
>
> On Thursday, December 14, 2017 at 4:59:16 AM UTC+1, bibinw...@gmail.com
> wrote:
>>
>> I am working on a use case where I have to scale my pods based on custom
>> metrics.
>>
>> Kubernetes Env: GKE 1.8.4 with alpha features enabled
>>
>> I am getting an error in the HPA. Here is the output of the HPA:
>>
>> Name:                sample-tomcat-app-hpa
>> Namespace:           default
>> Labels:
>> Annotations:
>> CreationTimestamp:   Wed, 13 Dec 2017 20:06:22 +0530
>> Reference:           Deployment/sample-tomcat-app
>> Metrics:             ( current / target )
>>   "tomcat_threadpool_connectioncount" on pods:                       unknown / 2
>>   "tomcat_threadpool_connectioncount" on Service/sample-tomcat-app:  unknown / 2
>>   resource cpu on pods  (as a percentage of request):                unknown / 50%
>> Min replicas:        2
>> Max replicas:        10
>> Conditions:
>>   Type           Status  Reason               Message
>>   ----           ------  ------               -------
>>   AbleToScale    True    SucceededGetScale    the HPA controller was able to get the target's current scale
>>   ScalingActive  False   FailedGetPodsMetric  the HPA was unable to compute the replica count: did not receive metrics for any ready pods
>> Events:
>>   Type     Reason                        Age               From                       Message
>>   ----     ------                        ---               ----                       -------
>>   Warning  FailedGetPodsMetric           14s (x8 over 3m)  horizontal-pod-autoscaler  did not receive metrics for any ready pods
>>   Warning  FailedComputeMetricsReplicas  14s (x8 over 3m)  horizontal-pod-autoscaler  failed to get pods metric value: did not receive metrics for any ready pods
>>
>>
>> Here is the list of enabled APIs:
>>
>>
>> apiextensions.k8s.io/v1beta1
>> apiregistration.k8s.io/v1beta1
>> apps/v1beta1
>> apps/v1beta2
>> authentication.k8s.io/v1
>> authentication.k8s.io/v1beta1
>> authorization.k8s.io/v1
>> authorization.k8s.io/v1beta1
>> autoscaling/v1
>> autoscaling/v2beta1
>> batch/v1
>> batch/v1beta1
>> certificates.k8s.io/v1beta1
>> custom.metrics.k8s.io/v1beta1
>> extensions/v1beta1
>> monitoring.coreos.com/v1
>> networking.k8s.io/v1
>> policy/v1beta1
>> rbac.authorization.k8s.io/v1
>> rbac.authorization.k8s.io/v1beta1
>> storage.k8s.io/v1
>> storage.k8s.io/v1beta1
>>
>> I am not sure whether --horizontal-pod-autoscaler-use-rest-clients=true is
>> set in the controller manager. As GKE is a managed Kubernetes cluster, we
>> do not have an option to edit the master configuration. Is this option
>> enabled in the latest clusters?
>>



[kubernetes-users] Re: GKE - HPA scaling based on custom metrics.

2017-12-20 Thread 'Beata Skiba' via Kubernetes user discussion and Q&A
Custom Metrics are unfortunately not available on GKE 1.8.4; they will be 
available with 1.9.0.
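
For reference, the HPA in the quoted question below reports a pods metric, an 
object metric, and a resource metric, which implies an autoscaling/v2beta1 
object. A rough sketch of such a spec, reconstructed only from the values 
visible in the describe output (metric name, targets, min/max replicas, and the 
target deployment) rather than from the poster's actual manifest, would look 
like this:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: sample-tomcat-app-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1          # assumed; the describe output only shows Deployment/sample-tomcat-app
    kind: Deployment
    name: sample-tomcat-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: tomcat_threadpool_connectioncount
      targetAverageValue: "2"
  - type: Object
    object:
      target:
        kind: Service
        name: sample-tomcat-app
      metricName: tomcat_threadpool_connectioncount
      targetValue: "2"
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50

Such a spec only produces real values (instead of the "unknown" readings below) 
once the custom.metrics.k8s.io API is actually served, which, as noted above, 
requires 1.9.0.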

On Thursday, December 14, 2017 at 4:59:16 AM UTC+1, bibinw...@gmail.com 
wrote:
>
> I am working on a use case where I have to scale my pods based on custom 
> metrics.
>
> Kubernetes Env: GKE 1.8.4 with alpha features enabled
>
> I am getting an error in the HPA. Here is the output of the HPA:
>
> Name:                sample-tomcat-app-hpa
> Namespace:           default
> Labels:
> Annotations:
> CreationTimestamp:   Wed, 13 Dec 2017 20:06:22 +0530
> Reference:           Deployment/sample-tomcat-app
> Metrics:             ( current / target )
>   "tomcat_threadpool_connectioncount" on pods:                       unknown / 2
>   "tomcat_threadpool_connectioncount" on Service/sample-tomcat-app:  unknown / 2
>   resource cpu on pods  (as a percentage of request):                unknown / 50%
> Min replicas:        2
> Max replicas:        10
> Conditions:
>   Type           Status  Reason               Message
>   ----           ------  ------               -------
>   AbleToScale    True    SucceededGetScale    the HPA controller was able to get the target's current scale
>   ScalingActive  False   FailedGetPodsMetric  the HPA was unable to compute the replica count: did not receive metrics for any ready pods
> Events:
>   Type     Reason                        Age               From                       Message
>   ----     ------                        ---               ----                       -------
>   Warning  FailedGetPodsMetric           14s (x8 over 3m)  horizontal-pod-autoscaler  did not receive metrics for any ready pods
>   Warning  FailedComputeMetricsReplicas  14s (x8 over 3m)  horizontal-pod-autoscaler  failed to get pods metric value: did not receive metrics for any ready pods
>
>
> Here is the list of enabled APIs:
>
>
> apiextensions.k8s.io/v1beta1
> apiregistration.k8s.io/v1beta1
> apps/v1beta1
> apps/v1beta2
> authentication.k8s.io/v1
> authentication.k8s.io/v1beta1
> authorization.k8s.io/v1
> authorization.k8s.io/v1beta1
> autoscaling/v1
> autoscaling/v2beta1
> batch/v1
> batch/v1beta1
> certificates.k8s.io/v1beta1
> custom.metrics.k8s.io/v1beta1
> extensions/v1beta1
> monitoring.coreos.com/v1
> networking.k8s.io/v1
> policy/v1beta1
> rbac.authorization.k8s.io/v1
> rbac.authorization.k8s.io/v1beta1
> storage.k8s.io/v1
> storage.k8s.io/v1beta1
>
> I am not sure whether --horizontal-pod-autoscaler-use-rest-clients=true is set
> in the controller manager. As GKE is a managed Kubernetes cluster, we do
> not have an option to edit the master configuration. Is this option
> enabled in the latest clusters?
>
>



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-12-20 Thread csalazar
On Thursday, August 10, 2017 at 1:03:42 AM UTC-5, Tim Hockin wrote:
> The GKE team has heard the desire for this and is looking at possible
> ways to provide it.
> 
> On Wed, Aug 9, 2017 at 3:56 PM,   wrote:
> > On Friday, June 16, 2017 at 11:24:15 AM UTC-5, pa...@qwil.co wrote:
> >> Yes, this is the right approach -- here's a detailed walk-through:
> >>
> >> https://github.com/johnlabarge/gke-nat-example
> >>
> >> On Friday, June 16, 2017 at 8:36:13 AM UTC-7, giorgio...@beinnova.it wrote:
> >> > Hello, I have the same problem described here. I have a GKE cluster and I 
> >> > need to connect to an external service. The NAT solution seems right for 
> >> > my needs, since my cluster resizes automatically. @Paul Tiplady, have you 
> >> > configured the external NAT? Can you share your experience? I tried 
> >> > following this guide 
> >> > https://cloud.google.com/compute/docs/vpc/special-configurations#natgateway
> >> > but it doesn't seem to work.
> >> >
> >> > Thanks,
> >> > Giorgio
> >> > On Wednesday, May 3, 2017 at 22:08:50 UTC+2, Paul Tiplady 
> >> > wrote:
> >> > > Yes, my reply was more directed to Rodrigo. In my use-case I do resize 
> >> > > clusters often (as part of the node upgrade process), so I want a 
> >> > > solution that's going to handle that case automatically. The NAT 
> >> > > Gateway approach appears to be the best (only?) option that handles 
> >> > > all cases seamlessly at this point.
> >> > >
> >> > >
> >> > > I don't know in which cases a VM could be destroyed; I'd also be 
> >> > > interested in seeing an enumeration of those cases. I'm taking a 
> >> > > conservative stance, as the consequences of dropping traffic through a 
> >> > > changing source IP are quite severe in my case, and because I want to 
> >> > > keep the process for upgrading the cluster as simple as possible.  
> >> > > From 
> >> > > https://cloudplatform.googleblog.com/2015/03/Google-Compute-Engine-uses-Live-Migration-technology-to-service-infrastructure-without-application-downtime.html
> >> > >  it sounds like VM termination should not be caused by planned 
> >> > > maintenance, but I assume it could be caused by unexpected failures in 
> >> > > the datacenter. It doesn't seem reckless to manually set the IPs as 
> >> > > part of the upgrade process as you're suggesting.
> >> > >
> >> > >
> >> > > On Wed, May 3, 2017 at 12:13 PM, Evan Jones  
> >> > > wrote:
> >> > >
> >> > > Correct, but at least at the moment we aren't using auto-resizing, and 
> >> > > I've never seen nodes get removed without us manually taking some 
> >> > > action (e.g. upgrading Kubernetes releases or similar). Are there 
> >> > > automated events that can delete a VM and remove it, without us having 
> >> > > done something? Certainly I've observed machines rebooting, but that 
> >> > > also preserves dedicated IPs. I can live with having to take some 
> >> > > manual configuration action periodically, if we are changing something 
> >> > > with our cluster, but I would like to know if there is something I've 
> >> > > overlooked. Thanks!
> >> > >
> >> > >
> >> > >
> >> > >
> >> > >
> >> > >
> >> > >
> >> > > On Wed, May 3, 2017 at 12:20 PM, Paul Tiplady  wrote:
> >> > >
> >> > > The public IP is not stable in GKE. You can manually assign a static 
> >> > > IP to a GKE node, but then if the node goes away (e.g. your cluster 
> >> > > was resized) the IP will be detached, and you'll have to manually 
> >> > > reassign. I'd guess this is also true on an AWS managed equivalent 
> >> > > like CoreOS's CloudFormation scripts.
> >> > >
> >> > >
> >> > > On Wed, May 3, 2017 at 8:52 AM, Evan Jones  
> >> > > wrote:
> >> > >
> >> > > As Rodrigo described, we are using Container Engine. I haven't fully 
> >> > > tested this yet, but my plan is to assign "dedicated IPs" to a set of 
> >> > > nodes, probably in their own Node Pool as part of the cluster. Those 
> >> > > are the IPs used by outbound connections from pods running on those 
> >> > > nodes, if I recall correctly from a previous experiment. Then I 
> >> > > will use Rodrigo's taint suggestion to schedule Pods on those nodes.
> >> > >
> >> > > If for whatever reason we need to remove those nodes from that pool, 
> >> > > or delete and recreate them, we can move the dedicated IP and taints 
> >> > > to new nodes, and the jobs should end up in the right place again.
> >> > >
> >> > >
> >> > > In short: I'm pretty sure this is going to solve our problem.
> >> > >
> >> > >
> >> > > Thanks!
> >
> > The approach of configuring a NAT works, but it has two major drawbacks:
> >
> > 1. It creates a single point of failure (if the VM that runs the NAT fails)
> > 2. It's too complex!
> >
> > In my use case I don't need Auto-scaling enabled right now, so I think it's 
> > better to just change the IPs of the VMs to be static. Anyway, in the 
> > future I know I will need this feature.
> >
> > Does somebody know if