Hello list.

I was stuck on an older version (1.4.9) of Kubernetes for a while. Now I am
able to upgrade again, but I ran into a problem with the
kube-controller-manager in 1.8.

My setup is 3 etcd3 servers, 2 masters, and x workers, all on CoreOS. I am
using the immutable server pattern and start my CoreOS machines with
cloud-config files.

To get a rolling upgrade, I plan to first upgrade to 1.6 (with the api-server
storage backend set to etcd2) and then to 1.8.
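
For the intermediate 1.6 step, the relevant part of the api-server manifest
looks roughly like this (only a sketch; the etcd endpoints are placeholders
for my real ones):

      command:
      - /hyperkube
      - apiserver
      - --storage-backend=etcd2
      - --etcd-servers=http://etcd-1:2379,http://etcd-2:2379,http://etcd-3:2379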

The first step to 1.6 runs smoothly, and installing a clean cluster directly
with 1.8 works fine too. But trying a rolling upgrade from 1.6 to 1.8 results
in a crashing controller-manager.
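
By rolling upgrade I mean replacing one master machine at a time, roughly
like this (node names are placeholders):

    kubectl drain old-master --ignore-daemonsets
    kubectl delete node old-master
    # boot a replacement machine from the updated cloud-config;
    # its kubelet registers the new node and the static pods come up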

My cloud-config:
    - name: kubelet.service
      command: start
      content: |
        [Service]
        ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
        Environment=KUBELET_IMAGE_TAG={{ conf['global']['k8s_version'] }}
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
          --kubeconfig=/etc/kubernetes/kubeconfig.yaml \
          --require-kubeconfig \
          --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --network-plugin-dir=/etc/kubernetes/cni/net.d \
          --network-plugin= \
          --allow-privileged=true \
          --cluster-dns={{ conf['global']['dns_service_ip'] }} \
          --cluster-domain=cluster.local
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=multi-user.target

  - path: /etc/kubernetes/manifests/kube-controller-manager.yaml
    permissions: 0444
    content: |
      apiVersion: v1
      kind: Pod
      metadata:
        name: kube-controller-manager
        namespace: kube-system
      spec:
        hostNetwork: true
        containers:
        - name: kube-controller-manager
          image: coreos/hyperkube:{{ conf['global']['k8s_version'] }}
          command:
          - /hyperkube
          - controller-manager
          - --master=http://127.0.0.1:8080
          - --leader-elect=true
          - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
          - --root-ca-file=/etc/kubernetes/ssl/ca.pem
          livenessProbe:
            httpGet:
              host: 127.0.0.1
              path: /healthz
              port: 10252
            initialDelaySeconds: 15
            timeoutSeconds: 1
          volumeMounts:
          - mountPath: /etc/kubernetes/ssl
            name: ssl-certs-kubernetes
            readOnly: true
          - mountPath: /etc/ssl/certs
            name: ssl-certs-host
            readOnly: true
        volumes:
        - hostPath:
            path: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
        - hostPath:
            path: /usr/share/ca-certificates
          name: ssl-certs-host


My only difference between the versions was the image tag.

Here is the error output when switching to the new master:

I0104 09:22:30.393847       1 leaderelection.go:184] successfully acquired lease kube-system/kube-controller-manager
I0104 09:22:30.394134       1 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"2ee5e02e-f11a-11e7-86d1-fa163eb8726b", APIVersion:"v1", ResourceVersion:"190862", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 10infra9001-kubernetes-master-1.10infra9001.xxx.de became leader
I0104 09:22:32.404812       1 leaderelection.go:199] successfully renewed lease kube-system/kube-controller-manager
E0104 09:22:33.396983       1 controllermanager.go:384] Server isn't healthy yet.  Waiting a little while.
I0104 09:22:34.413617       1 leaderelection.go:199] successfully renewed lease kube-system/kube-controller-manager
I0104 09:22:36.421021       1 leaderelection.go:199] successfully renewed lease kube-system/kube-controller-manager
E0104 09:22:37.399823       1 controllermanager.go:384] Server isn't healthy yet.  Waiting a little while.
I0104 09:22:38.428537       1 leaderelection.go:199] successfully renewed lease kube-system/kube-controller-manager
I0104 09:22:40.438300       1 leaderelection.go:199] successfully renewed lease kube-system/kube-controller-manager
E0104 09:22:41.399638       1 controllermanager.go:384] Server isn't healthy yet.  Waiting a little while.
I0104 09:22:42.448051       1 leaderelection.go:199] successfully renewed lease kube-system/kube-controller-manager
I0104 09:22:44.456347       1 leaderelection.go:199] successfully renewed lease kube-system/kube-controller-manager
E0104 09:22:45.399325       1 controllermanager.go:384] Server isn't healthy yet.  Waiting a little while.
I0104 09:22:46.476148       1 leaderelection.go:199] successfully renewed lease kube-system/kube-controller-manager
E0104 09:22:48.401321       1 controllermanager.go:384] Server isn't healthy yet.  Waiting a little while.
F0104 09:22:48.401439       1 controllermanager.go:151] error building controller context: failed to get api versions from server: : timed out waiting for the condition


This is already with -v=4.
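
Since the controller-manager talks to --master=http://127.0.0.1:8080, the
failing discovery call should be reproducible from the master itself with
something like this (my assumption about what controllermanager.go:151 is
doing):

    curl http://127.0.0.1:8080/healthz
    curl http://127.0.0.1:8080/apis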

Does anyone have an idea what the problem is?

Thanks, Mark
