Hi,

Can you try logging in to the control and worker node(s) and check whether the 
deploy-kube-system service ran successfully? If not, please share any error 
logs you see in /var/log/cloud-init-output.log.

You can ssh into the control node using:
ssh -i <ssh key> -p 2222 cloud@<public_ip>

You can ssh into the worker nodes with the same command, using port numbers 
2223 onward (2223 for the first worker, 2224 for the second, and so on).
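The port layout above (control node on 2222, workers from 2223 onward) can be sketched as a small helper; `node_ssh_port`, `KEY`, and `PUBLIC_IP` are placeholder names for illustration, not CloudStack settings:

```shell
#!/bin/sh
# Sketch only, assuming the port mapping described above:
# node index 0 = control node (port 2222), index 1..N = workers (2223 onward).
node_ssh_port() {
  # $1 = node index
  echo $((2222 + $1))
}

# Placeholders: substitute your private key path and the cluster's public IP.
KEY=~/.ssh/id_rsa
PUBLIC_IP=203.0.113.10

# Print the ssh command for the control node and two workers.
for i in 0 1 2; do
  echo "node $i: ssh -i $KEY -p $(node_ssh_port $i) cloud@$PUBLIC_IP"
done
```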

If you associated the k8s cluster with an ssh key, use the corresponding private 
key; otherwise, use the ssh key of the management server, found at ~cloud/.ssh/id_rsa.

To check that the service ran properly, first switch to the root user: sudo -i
Then run: systemctl status deploy-kube-system
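The check above can be wrapped in a short sketch to run on each node (after sudo -i). The helper name `unit_verdict` is hypothetical; it only interprets the output of `systemctl is-failed`, which prints the unit's failure state:

```shell
#!/bin/sh
# Sketch only: decide what to collect based on the unit's failure state.
unit_verdict() {
  # $1 = output of `systemctl is-failed deploy-kube-system`
  # ("failed", "active", "inactive", ...)
  if [ "$1" = "failed" ]; then
    echo "share /var/log/cloud-init-output.log and 'systemctl status deploy-kube-system'"
  else
    echo "deploy-kube-system did not fail on this node"
  fi
}

# On the node itself (as root), something like:
#   state=$(systemctl is-failed deploy-kube-system)
#   unit_verdict "$state"
```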


Thanks,
Pearl



________________________________
From: Stanley Burkee <stanley.bur...@gmail.com>
Sent: September 21, 2023 8:11 AM
To: us...@cloudstack.apache.org <us...@cloudstack.apache.org>; 
dev@cloudstack.apache.org <dev@cloudstack.apache.org>
Subject: cloud stack 4.18.1 Kubernetes cluster goes into an error state

Hi Guys,

We are experiencing an issue with the Kubernetes cluster. Whenever a new
Kubernetes cluster is provisioned, it goes into an error state with an
error saying the desired number of nodes is not in the ready state, yet I
can see both the control and worker nodes in a running state.

Screenshot links are given below for your reference.
https://drive.google.com/file/d/1QfRwl7W1rYrRlLDdFv7p14OG4AW4qEEj/view?usp=sharing
https://drive.google.com/file/d/1R6GP7qotBAV_X551aQVBh1ipCCThOahW/view?usp=sharing

I am using CloudStack 4.18.1 on KVM (Rocky Linux 8), and I have tested the
Kubernetes cluster with the following ISO versions, getting the same error each time:
1.23.3
1.26.6
1.27.3
1.24.0

Thanks a lot in advance guys.

Best regards

Stanley Burkee

 
