> I am pretty sure that the node had network access to the master: I just
> added the master's SSH key to the node and was able to

I should have been more specific: what matters is connectivity from the api
container to the respective node. This can break especially if you're doing
hacky things with name resolution; the 3.x installer tries to do the right
thing, but it can't account for every environment's setup. (A quick check is
sketched in the second snippet below.)

> How would I check for that if the problem occurs again?

Log onto the host and check the logs for the service. If it's not starting,
you should see a bunch of crash looping, in most cases complaining about not
being able to connect to the api server to request the CSR, but it may also
fail to start because of some misconfiguration of the initial kubelet
config. I think the service is named origin-node, but I don't remember
offhand what we called it.
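For reference, something along these lines should show whether the service
is coming up (assuming the unit really is origin-node; enterprise installs
may call it atomic-openshift-node instead):

    # is the node service running, or crash looping?
    systemctl status origin-node

    # follow its logs to see why it fails to start
    journalctl -u origin-node -f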
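As for the name-resolution point above, a rough connectivity check from the
master (os-node2 is the node name from this thread; 10250 is the default
kubelet port):

    # does the node's name resolve to the address you expect?
    getent hosts os-node2

    # can the kubelet's server port be reached? Even an HTTP 401 here
    # means the connection itself works.
    curl -kv https://os-node2:10250/healthz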
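And since the whole join dance hinges on the CSRs being approved, it is
worth listing them from the master as well; roughly (csr-xxxxx stands in
for a real request name):

    # new nodes show up here as Pending until approved
    oc get csr

    # approve a specific request
    oc adm certificate approve csr-xxxxx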
On Thu, Jun 27, 2019 at 8:41 AM Robert Dahlem <[email protected]> wrote:
>
> Michael,
>
> On 26.06.2019 22:57, Michael Gugino wrote:
>
> > If there are no pending CSRs, then either the kubelet did not start on
> > the node, or the node does not have network access to the master to
> > request a CSR.
> >
> > When the kubelet first starts, it requests a CSR for its client cert.
> > That cert needs to be approved before the node can join the cluster.
> > After the node joins the cluster, it will issue a CSR for its
> > server-side cert. This cert is necessary for connecting to the node
> > for reading logs from pods. This second CSR may be reported as failed
> > if the master is not able to verify that it can reach the node's
> > server port.
>
> Thank you for these explanations.
>
> Meanwhile I reinstalled the node and tried again. This time everything
> went smoothly and I was able to add the node to the cluster.
>
> I am pretty sure that the node had network access to the master: I just
> added the master's SSH key to the node and was able to run
> "ssh root@os-node2 uname -a"
> from the master.
>
> Just for reference: the other possibility you mentioned was that the
> kubelet did not start on the node. How would I check for that if the
> problem occurs again?
>
> Kind regards,
> Robert

--
Michael Gugino
Senior Software Engineer - OpenShift
[email protected]
540-846-0304
