ahmadamirahmadi1401 commented on issue #10948:
URL: https://github.com/apache/cloudstack/issues/10948#issuecomment-2953703032

   Why does the kubeadm output in the middle of the log say the upgrade was successful, while the surrounding CloudStack log entries say the node upgrade failed?
   
   What do you think the problem is? Do you have a solution to suggest?
   
   2025-06-08 07:14:30,886 ERROR [c.c.u.s.SshHelper] 
(API-Job-Executor-6:ctx-559bc6d5 job-33225 ctx-3b4cbb1d) (logid:33c83a6c) SSH 
execution of command sudo ./upgrade-kubernetes.sh4071027720196768380.sh 1.30.13 
true false false has an error status code in return. Result output: Installing 
binaries from /mnt/k8sdisk/
   unpacking quay.io/apalia/cloudstack-csi-driver:0.0.2 
(sha256:6f38051b27964da06af1cbcf1c759b4eb5266bdd02033f07c3d2e04630e2893f)...done
   unpacking ghcr.io/leaseweb/cloudstack-csi-driver:0.8.1 
(sha256:7a6cf3ba95be182ee8c991dced373f56faacbfac9a719056586d98af3aa932c5)...done
   unpacking docker.io/apache/cloudstack-kubernetes-autoscaler:latest 
(sha256:1c8a22c342daa5884f622f078be70ff913599aada0c4f859fe42ed28413afe98)...done
   unpacking docker.io/apache/cloudstack-kubernetes-provider:v1.1.0 
(sha256:10c058968e7d2f8e55da5976336a42d88b057e75880f000d3120aa2d75649e97)...done
   unpacking registry.k8s.io/coredns/coredns:v1.11.3 
(sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e)...done
   unpacking k8s.gcr.io/sig-storage/csi-attacher:v3.0.2 
(sha256:6f80b12657a7e0a5c683b24e806c4bbbe33a43e39b041fe9b7514d665d478ea4)...done
   unpacking registry.k8s.io/sig-storage/csi-attacher:v4.6.1 
(sha256:b4d611100ece2f9bc980d1cb19c2285b8868da261e3b1ee8f45448ab5512ab94)...done
   unpacking k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1 
(sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108)...done
   unpacking registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1 
(sha256:f25af73ee708ff9c82595ae99493cdef9295bd96953366cddf36305f82555dac)...done
   unpacking k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4 
(sha256:bec571992d40203edcd056ac0b0d97003887ee5e4be144c41932d18639673b03)...done
   unpacking registry.k8s.io/sig-storage/csi-provisioner:v5.0.1 
(sha256:405a14e1aa702f7ea133cea459e8395fe40a6125c088c55569e696d48e1bd385)...done
   unpacking registry.k8s.io/sig-storage/csi-resizer:v1.11.1 
(sha256:a541e6cc2d8b011bb21b1d4ffec6b090e85270cce6276ee302d86153eec0af43)...done
   unpacking docker.io/kubernetesui/dashboard:v2.7.0 
(sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93)...done
   unpacking registry.k8s.io/etcd:3.5.15-0 
(sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a)...done
   unpacking registry.k8s.io/kube-apiserver:v1.30.13 
(sha256:bd68d81c20ad5781adec9f6eae24c83d6f66c3afc0d4baa32d3f2d865c82d436)...done
   unpacking registry.k8s.io/kube-controller-manager:v1.30.13 
(sha256:b85a5d785cc00b03613f67a7afca1e91b2a0ccf4ced8e69d38bda5f686980f3f)...done
   unpacking registry.k8s.io/kube-proxy:v1.30.13 
(sha256:f68590d1921db1f9a9b56898a66cf3c67fcdcb5be132f7640af8ac1297371e4a)...done
   unpacking registry.k8s.io/kube-scheduler:v1.30.13 
(sha256:bfa5f10dc3e14316785f23f8e1f4e4dbc556b033270d91185fca0577353cad33)...done
   unpacking registry.k8s.io/sig-storage/livenessprobe:v2.12.0 
(sha256:5baeb4a6d7d517434292758928bb33efc6397368cbb48c8a4cf29496abf4e987)...done
   unpacking docker.io/kubernetesui/metrics-scraper:v1.0.8 
(sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c)...done
   unpacking registry.k8s.io/pause:3.9 
(sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097)...done
   unpacking registry.mainxcloud.com/weaveworks/weave-kube:latest 
(sha256:35827a9c549c095f0e9d1cf8b35d8f27ae2c76e31bc6f7f3c0bc95911d5accea)...done
   unpacking registry.mainxcloud.com/weaveworks/weave-npc:latest 
(sha256:062832fd25b5e9e16650e618f26bba1409a7b3bf2c3903e1b369d788abc63aef)...done
   registry.k8s.io/pause:3.9
   [preflight] Running pre-flight checks.
   [upgrade/config] Reading configuration from the cluster...
   [upgrade/config] FYI: You can look at this config file with 'kubectl -n 
kube-system get cm kubeadm-config -o yaml'
   [upgrade] Running cluster health checks
   [upgrade/version] You have chosen to change the cluster version to "v1.30.13"
   [upgrade/versions] Cluster version: v1.30.13
   [upgrade/versions] kubeadm version: v1.30.13
   [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
   [upgrade/prepull] This might take a minute or two, depending on the speed of 
your internet connection
   [upgrade/prepull] You can also perform this action in beforehand using 
'kubeadm config images pull'
   [upgrade/apply] Upgrading your Static Pod-hosted control plane to version 
"v1.30.13" (timeout: 5m0s)...
   [upgrade/etcd] Upgrading to TLS for etcd
   [upgrade/staticpods] Preparing for "etcd" upgrade
   [upgrade/staticpods] Current and new manifests of etcd are equal, skipping 
upgrade
   [upgrade/etcd] Waiting for etcd to become available
   [upgrade/staticpods] Writing new Static Pod manifests to 
"/etc/kubernetes/tmp/kubeadm-upgraded-manifests4014411918"
   [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
   [upgrade/staticpods] Current and new manifests of kube-apiserver are equal, 
skipping upgrade
   [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
   [upgrade/staticpods] Current and new manifests of kube-controller-manager 
are equal, skipping upgrade
   [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
   [upgrade/staticpods] Current and new manifests of kube-scheduler are equal, 
skipping upgrade
   [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" 
in the "kube-system" Namespace
   [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system 
with the configuration for the kubelets in the cluster
   [upgrade] Backing up kubelet config file to 
/etc/kubernetes/tmp/kubeadm-kubelet-config1007267359/config.yaml
   [kubelet-start] Writing kubelet configuration to file 
"/var/lib/kubelet/config.yaml"
   [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to 
get nodes
   [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to 
post CSRs in order for nodes to get long term certificate credentials
   [bootstrap-token] Configured RBAC rules to allow the csrapprover controller 
automatically approve CSRs from a Node Bootstrap Token
   [bootstrap-token] Configured RBAC rules to allow certificate rotation for 
all node client certificates in the cluster
   [addons] Applied essential addon: CoreDNS
   [addons] Applied essential addon: kube-proxy
   [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.30.13". Enjoy!
   [upgrade/kubelet] Now that your control plane is upgraded, please proceed 
with upgrading your kubelets if you haven't already done so.
   serviceaccount/weave-net unchanged
   clusterrole.rbac.authorization.k8s.io/weave-net unchanged
   clusterrolebinding.rbac.authorization.k8s.io/weave-net unchanged
   role.rbac.authorization.k8s.io/weave-net unchanged
   rolebinding.rbac.authorization.k8s.io/weave-net unchanged
   daemonset.apps/weave-net configured
   serviceaccount/cloudstack-csi-controller unchanged
   clusterrole.rbac.authorization.k8s.io/cloudstack-csi-controller-role unchanged
   clusterrolebinding.rbac.authorization.k8s.io/cloudstack-csi-controller-binding unchanged
   csidriver.storage.k8s.io/csi.cloudstack.apache.org unchanged
   deployment.apps/cloudstack-csi-controller unchanged
   daemonset.apps/cloudstack-csi-node unchanged
   mount: /mnt/k8sdisk: /dev/sr0 already mounted on /mnt/k8sdisk.
   
   
   
   2025-06-08 07:13:42,139 ERROR [c.c.k.c.a.KubernetesClusterActionWorker] 
(API-Job-Executor-6:ctx-559bc6d5 job-33225 ctx-3b4cbb1d) (logid:33c83a6c) 
Failed to upgrade Kubernetes cluster : c, unable to upgrade Kubernetes node on 
VM : c-control-1974e576a15, retries left: 2
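   My guess (not confirmed): kubeadm's SUCCESS line only covers the control-plane upgrade itself, while SshHelper reports an error because the upgrade script as a whole returned a nonzero exit status. The last line of the script's output is `mount: /mnt/k8sdisk: /dev/sr0 already mounted on /mnt/k8sdisk.`, which suggests a late mount step failed after the upgrade had already succeeded. A minimal sketch of that failure mode (the function and messages below are illustrative, not taken from the actual upgrade-kubernetes.sh):
   
   ```shell
   # Sketch: a script can print kubeadm's SUCCESS line and still exit nonzero
   # if a later command fails last. On util-linux, mounting an already-mounted
   # device without -o remount typically fails with status 32 ("mount failure").
   fake_upgrade_script() {
       echo '[upgrade/successful] SUCCESS!'              # kubeadm part succeeded
       echo 'mount: /mnt/k8sdisk: already mounted.' >&2  # late mount step fails...
       return 32                                         # ...and becomes the script's status
   }
   fake_upgrade_script
   status=$?
   echo "upgrade script exit status: $status"            # nonzero, so SshHelper logs an error
   ```
   
   If that is what is happening here, the node may in fact be upgraded and only the redundant mount at the end is tripping CloudStack's error handling and retries.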
   @weizhouapache  @DaanHoogland 

