rhtyd commented on issue #4146:
URL: https://github.com/apache/cloudstack/issues/4146#issuecomment-752890499
@weizhouapache I tested and found that the isolated network needs egress (public) internet access so the nodes can fetch the ISOs from http://download.cloudstack.org/cks - check and see if you're hitting the same. @davidjumani has fixed the issue in https://github.com/apache/cloudstack/pull/4459, but we haven't pushed newer ISOs yet (we'll try to update them soon).

Commentary/notes: I followed the docs (http://docs.cloudstack.apache.org/en/latest/plugins/cloudstack-kubernetes-service.html), enabled the global settings, set up the CoreOS template, and then created a CKS cluster with Kubernetes v1.16.0 (1 worker node + 1 master node with 2 GB RAM and 2 vCPUs) on a KVM advanced zone environment with shared storage, on a pre-created isolated network. While the cluster was deploying, I saw the following:

```
2020-12-31 07:56:45,638 WARN  [c.c.k.c.u.KubernetesClusterUtil] (API-Job-Executor-2:ctx-7051112a job-3394 ctx-c601630a) (logid:581f7bce) API endpoint for Kubernetes cluster : cks1-ry not available
javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake
        at java.base/sun.security.ssl.SSLSocketImpl.handleEOF(SSLSocketImpl.java:1588)
        at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1416)
```

After the nodes were up and kubeadm had initialised them, I saw this in the logs:

```
2020-12-31 07:57:15,701 INFO  [c.c.k.c.u.KubernetesClusterUtil] (API-Job-Executor-2:ctx-7051112a job-3394 ctx-c601630a) (logid:581f7bce) Kubernetes cluster : cks1-ry API has been successfully provisioned, { "major": "1", "minor": "16", "gitVersion": "v1.16.0", "gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", "gitTreeState": "clean", "buildDate": "2019-09-18T14:27:17Z", "goVersion": "go1.12.9", "compiler": "gc", "platform": "linux/amd64" }
```

And after some time:

```
2020-12-31 07:59:50,170 DEBUG [c.c.k.c.u.KubernetesClusterUtil] (API-Job-Executor-2:ctx-7051112a job-3394 ctx-c601630a) (logid:581f7bce) Checking ready nodes for the Kubernetes cluster : cks1-ry with total 2 provisioned nodes
2020-12-31 07:59:50,543 DEBUG [c.c.k.c.u.KubernetesClusterUtil] (API-Job-Executor-2:ctx-7051112a job-3394 ctx-c601630a) (logid:581f7bce) Kubernetes cluster : cks1-ry has total 2 provisioned nodes while 0 ready now
```

This continued for a while, so I debugged and found that the nodes were unable to fetch container images:

```
$ sudo ./kubectl get nodes
NAME             STATUS     ROLES    AGE   VERSION
cks1-ry-master   NotReady   master   12m   v1.16.0
cks1-ry-node-1   NotReady   <none>   12m   v1.16.0

$ sudo ./kubectl get pods -n kube-system -o wide
NAME                                     READY   STATUS             RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
coredns-5644d7b6d9-7xcxj                 0/1     Pending            0          13m   <none>       <none>           <none>           <none>
coredns-5644d7b6d9-b8twq                 0/1     Pending            0          13m   <none>       <none>           <none>           <none>
etcd-cks1-ry-master                      1/1     Running            0          12m   10.1.1.150   cks1-ry-master   <none>           <none>
kube-apiserver-cks1-ry-master            1/1     Running            0          12m   10.1.1.150   cks1-ry-master   <none>           <none>
kube-controller-manager-cks1-ry-master   1/1     Running            0          12m   10.1.1.150   cks1-ry-master   <none>           <none>
kube-proxy-bwmhj                         1/1     Running            0          13m   10.1.1.35    cks1-ry-node-1   <none>           <none>
kube-proxy-tkjbp                         1/1     Running            0          13m   10.1.1.150   cks1-ry-master   <none>           <none>
kube-scheduler-cks1-ry-master            1/1     Running            0          12m   10.1.1.150   cks1-ry-master   <none>           <none>
weave-net-g6d9l                          0/2     ImagePullBackOff   0          13m   10.1.1.150   cks1-ry-master   <none>           <none>
weave-net-q4ft5                          0/2     ErrImagePull       0          13m   10.1.1.35    cks1-ry-node-1   <none>           <none>
```

I checked the network and added egress allow rules, as sketched below.
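For anyone hitting the same, this is roughly the egress rule I added - a minimal sketch using CloudMonkey (`cmk`), assuming it is already configured against the management server; the network UUID is a placeholder and the CIDR (10.1.1.0/24) matches my test network:

```
# List networks to find the UUID of the cluster's isolated network
cmk list networks filter=id,name,cidr

# Allow all outbound traffic from the guest CIDR to the internet
# (10.1.1.0/24 is my test network's guest CIDR - adjust for yours)
cmk create egressfirewallrule networkid=<network-uuid> protocol=all cidrlist=10.1.1.0/24
```

The same rule can also be added from the UI via the network's Egress rules tab.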
With egress allowed, I manually pulled the missing images on the nodes, which fixed that issue:

```
docker pull docker.io/weaveworks/weave-kube:2.7.0
docker pull docker.io/weaveworks/weave-npc:2.7.0
```

After this the cluster came up and I was able to run basic tests using kubectl and use the Kubernetes dashboard via proxy.
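As a final check, this is roughly how I verified the nodes and reached the dashboard - a minimal sketch; the dashboard service name and namespace below are assumptions (depending on the dashboard version deployed, it may live in `kube-system` or `kubernetes-dashboard`):

```
# Confirm both nodes eventually report Ready
sudo ./kubectl get nodes

# Start a local proxy to the cluster API server (listens on 127.0.0.1:8001 by default)
sudo ./kubectl proxy

# Then browse the dashboard through the proxy, e.g.:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```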
