On Thursday, October 12, 2017 at 6:53:28 PM UTC+2, paolo.m...@sparkfabrik.com 
wrote:
> On Thursday, October 12, 2017 at 6:49:01 PM UTC+2, paolo.m...@sparkfabrik.com 
> wrote:
> > We are experiencing networking problems across an entire pool of nodes. They 
> > seem to have network issues that also cause kube-proxy to fail. The nodes 
> > are freshly provisioned, as the pool is configured to autoscale. I've also 
> > tried deleting the entire pool, but the problem persists, even for new 
> > pools.
> > 
> > 
> > Kubectl output:
> > 
> > kube-system   kube-proxy-gke-spark-op-services-gitlab-ci-build-fb120c5e-q331   0/1   Init:ImagePullBackOff   0   18m
> > 
> > The issue is quite urgent.
> 
> 
> What happens is that the Docker containers provisioned on the nodes (by 
> GitLab in this case) suffer from networking issues, I suppose because 
> kube-proxy is missing.
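For context on why a missing kube-proxy breaks container networking: kube-proxy maintains the iptables NAT rules that translate Service cluster IPs into pod endpoints, so while it is down, connections from containers on that node to Service IPs hang. A minimal sketch of what to look for on an affected node; the sample rule text below is a typical example, not captured from this cluster:

```shell
# Typical example of a rule kube-proxy installs once it has synced
# (illustrative text only, not taken from this cluster):
sample_rules='-A KUBE-SERVICES -d 10.3.240.1/32 -p tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y'

# On an affected node you would inspect the real table instead:
#   sudo iptables-save -t nat | grep KUBE-SERVICES
# An empty result there means kube-proxy never programmed Service routing.
printf '%s\n' "$sample_rules" | grep -c 'KUBE-SERVICES'
```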

The kubectl output:

spark-k8s-services git:dev ❯ kubectl logs -f kube-proxy-gke-spark-op-services-gitlab-ci-build-fb120c5e-tct7 -nkube-system

Error from server (BadRequest): container "kube-proxy" in pod "kube-proxy-gke-spark-op-services-gitlab-ci-build-fb120c5e-tct7" is waiting to start: PodInitializing
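A note on that BadRequest: `kubectl logs` targets the pod's main container by default, and here that container never started. The `-c` flag selects the failing init container instead; a sketch using the pod name from the output above (the commands are printed so you can run them against the affected cluster):

```shell
pod=kube-proxy-gke-spark-op-services-gitlab-ci-build-fb120c5e-tct7

# Logs of the init container that is stuck in ImagePullBackOff:
log_cmd="kubectl logs $pod -n kube-system -c touch-lock"

# The pod's Events section usually carries the concrete pull error:
describe_cmd="kubectl describe pod $pod -n kube-system"

# Printed here for reference; run them directly against the cluster.
printf '%s\n' "$log_cmd" "$describe_cmd"
```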


spark-k8s-services git:dev ❯ kubectl describe pod kube-proxy-gke-spark-op-services-gitlab-ci-build-fb120c5e-tct7 -nkube-system

Name:           kube-proxy-gke-spark-op-services-gitlab-ci-build-fb120c5e-tct7
Namespace:      kube-system
Node:           gke-spark-op-services-gitlab-ci-build-fb120c5e-tct7/10.132.0.8
Start Time:     Thu, 12 Oct 2017 18:52:22 +0200
Labels:         component=kube-proxy
                tier=node
Annotations:    kubernetes.io/config.hash=e1aba4d0cdf8ee0f1b89b21ea6b46704
                kubernetes.io/config.mirror=e1aba4d0cdf8ee0f1b89b21ea6b46704
                kubernetes.io/config.seen=2017-10-12T16:52:17.72294518Z
                kubernetes.io/config.source=file
                scheduler.alpha.kubernetes.io/critical-pod=
Status:         Pending
IP:             10.132.0.8
Init Containers:
  touch-lock:
    Container ID:
    Image:              busybox
    Image ID:
    Port:               <none>
    Command:
      /bin/touch
      /run/xtables.lock
    State:              Waiting
      Reason:           ImagePullBackOff
    Ready:              False
    Restart Count:      0
    Environment:        <none>
    Mounts:
      /run from run (rw)
Containers:
  kube-proxy:
    Container ID:
    Image:              gcr.io/google_containers/kube-proxy:v1.7.6
    Image ID:
    Port:               <none>
    Command:
      /bin/sh
      -c
      echo -998 > /proc/$$$/oom_score_adj && kube-proxy 
--master=https://104.199.47.251 --kubeconfig=/var/lib/kube-proxy/kubeconfig 
--cluster-cidr=10.0.0.0/14 --resource-container="" --v=2 
--feature-gates=ExperimentalCriticalPodAnnotation=true 
--iptables-sync-period=1m --iptables-min-sync-period=10s 
1>>/var/log/kube-proxy.log 2>&1
    State:              Waiting
      Reason:           PodInitializing
    Ready:              False
    Restart Count:      0
    Requests:
      cpu:              100m
    Environment:        <none>
    Mounts:
      /etc/ssl/certs from etc-ssl-certs (ro)
      /run/xtables.lock from iptableslock (rw)
      /usr/share/ca-certificates from usr-ca-certs (ro)
      /var/lib/kube-proxy/kubeconfig from kubeconfig (rw)
      /var/log from varlog (rw)
Conditions:
  Type          Status
  Initialized   False
  Ready         False
  PodScheduled  True
Volumes:
  usr-ca-certs:
    Type:       HostPath (bare host directory volume)
    Path:       /usr/share/ca-certificates
  etc-ssl-certs:
    Type:       HostPath (bare host directory volume)
    Path:       /etc/ssl/certs
  kubeconfig:
    Type:       HostPath (bare host directory volume)
    Path:       /var/lib/kube-proxy/kubeconfig
  varlog:
    Type:       HostPath (bare host directory volume)
    Path:       /var/log
  run:
    Type:       HostPath (bare host directory volume)
    Path:       /run
  iptableslock:
    Type:       HostPath (bare host directory volume)
    Path:       /run/xtables.lock
QoS Class:      Burstable
Node-Selectors: <none>
Tolerations:    :NoExecute
Events:         <none>
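Since the image that fails to pull is the stock `busybox` from Docker Hub, the ImagePullBackOff points at node-level egress or DNS trouble rather than anything in the pod spec. A hedged sketch of checks from the node itself, using the node name from the describe output above; the zone is an assumption, substitute the pool's actual one:

```shell
node=gke-spark-op-services-gitlab-ci-build-fb120c5e-tct7
zone=europe-west1-b   # assumption: replace with the node pool's real zone

# Printed for reference; run from a workstation with gcloud configured.
printf '%s\n' \
  "gcloud compute ssh $node --zone $zone" \
  "#   then, on the node:" \
  "#   docker pull busybox            # does the pull fail from the node too?" \
  "#   nslookup registry-1.docker.io  # does Docker Hub's registry resolve?"
```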

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q&A" group.