shwstppr commented on issue #7504:
URL: https://github.com/apache/cloudstack/issues/7504#issuecomment-1541996916

   @tuanhoangth1603 
   This is how cluster deployment and scaling work:
   - The cluster is deployed with size `n`. CloudStack will deploy 1 control node (the default) and `n` worker nodes. On the public IP it will create a firewall rule opening ports 2222 to 2222+n. It will also set up port-forwarding from public port 2222+x to port 22 on each node VM (2222 forwards to the control node, 2223 to the first worker node, 2224 to the second, and so on). It will also open port 6443 and add a corresponding port-forwarding rule to the control node on the same port.
   - When you scale UP, CloudStack deploys the newly required worker node VMs, deletes the old firewall rule, and creates a new one for 2222 to 2222+newsize. Port-forwarding rules are set up as explained in the first point (see the sketch after this list for the resulting port layout).
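
   A minimal sketch of the resulting port layout (plain Python, for illustration only; the function name and return structure are mine, not a CloudStack API):

```python
def expected_port_layout(cluster_size: int) -> dict:
    """Expected firewall range and SSH port-forwarding map for a cluster of
    `cluster_size` worker nodes plus one control node."""
    nodes = ["control-node"] + [f"worker-node-{i}" for i in range(1, cluster_size + 1)]
    return {
        # One firewall rule opening the whole SSH range, plus 6443 for the k8s API server
        "firewall": {"ssh_range": (2222, 2222 + cluster_size), "api_server": 6443},
        # Public port 2222+x forwards to port 22 on the x-th node VM
        "port_forwarding": {2222 + x: (name, 22) for x, name in enumerate(nodes)},
    }

print(expected_port_layout(2))
# {'firewall': {'ssh_range': (2222, 2224), 'api_server': 6443},
#  'port_forwarding': {2222: ('control-node', 22),
#                      2223: ('worker-node-1', 22),
#                      2224: ('worker-node-2', 22)}}
```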
   
   Now, in your case, ACS failed to delete the old firewall rule and create the new one (I'm not sure why, but it could be because of the additional instance you mentioned). If you can share the output of the listKubernetesClusters API, plus screenshots of the cluster's instance list, firewall rules, and port-forwarding rules for the cluster's public IP, we can take a deeper look.
   
   As you say, the k8s part is working fine. So what I suggest doing is:
   - make sure the firewall rules are provisioned correctly, i.e. port 6443 and 2222 to 2222+clustersize
   - make sure port-forwarding is provisioned correctly as explained in point 1 above (the sketch after this list shows one way to check both via the API)
   - wait for the KubernetesClusterStatusScanner; it runs every 5 minutes and cannot be triggered manually. It should fix the cluster state, and this shouldn't require stop-starting the cluster.
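
   A rough sketch of those checks using the third-party `cs` Python client (my choice of tooling, not something the cluster requires; CloudMonkey or the UI show the same data). The endpoint, keys, cluster name and response field names below are assumptions, so adjust them to your environment:

```python
# pip install cs  (third-party CloudStack API client)
from cs import CloudStack

api = CloudStack(endpoint="https://cloudstack.example.com/client/api",
                 key="API_KEY", secret="SECRET_KEY")

# 1. Cluster details (state, size, public IP) via the listKubernetesClusters API.
#    The response keys ("kubernetescluster", "ipaddressid", ...) are assumed here.
cluster = api.listKubernetesClusters(name="my-cluster")["kubernetescluster"][0]
print(cluster["state"], cluster["size"], cluster["ipaddress"])

# 2. Firewall rules on the cluster's public IP: expect 6443 and 2222..2222+size
fw = api.listFirewallRules(ipaddressid=cluster["ipaddressid"])
for rule in fw.get("firewallrule", []):
    print("firewall:", rule["startport"], "-", rule["endport"], rule["state"])

# 3. Port-forwarding rules: expect one per node, public 2222+x -> private 22
pf = api.listPortForwardingRules(ipaddressid=cluster["ipaddressid"])
for rule in pf.get("portforwardingrule", []):
    print("pf:", rule["publicport"], "->", rule["privateport"], rule["virtualmachinename"])
```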

