Hello everyone
I am having trouble getting a working Origin 3.10 installation using the
openshift-ansible installer. My install always fails because the control plane
pods are not available. I've checked out the release-3.10 branch of
openshift-ansible and configured the inventory accordingly.
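For reference, this is roughly how I prepared the installer (straight from the
upstream repo):

git clone https://github.com/openshift/openshift-ansible.git
cd openshift-ansible
git checkout release-3.10

The deploy run then fails at this point: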
TASK [openshift_control_plane : Start and enable self-hosting node] **********
changed: [master]

TASK [openshift_control_plane : Get node logs] *******************************
skipping: [master]

TASK [openshift_control_plane : debug] ***************************************
skipping: [master]

TASK [openshift_control_plane : fail] ****************************************
skipping: [master]

TASK [openshift_control_plane : Wait for control plane pods to appear] *******
failed: [master] (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg": {"cmd": "/bin/oc get pod master-etcd-master.vnet.de -o json -n kube-system", "results": [{}], "returncode": 1, "stderr": "The connection to the server master.vnet.de:8443 was refused - did you specify the right host or port?\n", "stdout": ""}}

TASK [openshift_control_plane : Report control plane errors] *****************
fatal: [master]: FAILED! => {"changed": false, "msg": "Control plane pods didn't come up"}
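Running the installer's check by hand on the master gives the same
connection-refused error (this is the exact command from the failed task
above):

[vagrant@master ~]$ /bin/oc get pod master-etcd-master.vnet.de -o json -n kube-system
The connection to the server master.vnet.de:8443 was refused - did you specify the right host or port?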
I am using Vagrant to set up a local domain (vnet.de), which also includes a
dnsmasq node so I have full control over DNS. The following VMs are running,
and both DNS and SSH work as expected:
Hostname         IP               Notes
domain.vnet.de   192.168.60.100
master.vnet.de   192.168.60.150   also runs etcd; DNS also resolves openshift.vnet.de, which is configured as openshift_master_cluster_public_hostname
infra.vnet.de    192.168.60.151   the openshift_master_default_subdomain wildcard points to this node
app1.vnet.de     192.168.60.152
app2.vnet.de     192.168.60.153
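For completeness, a trimmed sketch of the inventory I'm using; the apps
wildcard domain below is a placeholder, and the node group names are the
standard 3.10 ones:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_user=vagrant
ansible_become=true
openshift_deployment_type=origin
openshift_master_cluster_public_hostname=openshift.vnet.de
# placeholder wildcard domain
openshift_master_default_subdomain=apps.vnet.de

[masters]
master.vnet.de

[etcd]
master.vnet.de

[nodes]
master.vnet.de openshift_node_group_name='node-config-master'
infra.vnet.de openshift_node_group_name='node-config-infra'
app1.vnet.de openshift_node_group_name='node-config-compute'
app2.vnet.de openshift_node_group_name='node-config-compute'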
When I connect to the master node I can see that several Docker containers are
up and running:
[vagrant@master ~]$ sudo docker ps
CONTAINER ID   IMAGE                                    COMMAND                  CREATED          STATUS          PORTS   NAMES
9a0844123909   ff5dd2137a4f                             "/bin/sh -c '#!/bi..."   19 minutes ago   Up 19 minutes           k8s_etcd_master-etcd-master.vnet.de_kube-system_a2c858fccd481c334a9af7413728e203_0
41d803023b72   f216d84cdf54                             "/bin/bash -c '#!/..."   19 minutes ago   Up 19 minutes           k8s_controllers_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_0
044c9d12588c   docker.io/openshift/origin-pod:v3.10.0   "/usr/bin/pod"           19 minutes ago   Up 19 minutes           k8s_POD_master-api-master.vnet.de_kube-system_86017803919d833e39cb3d694c249997_0
10a197e394b3   docker.io/openshift/origin-pod:v3.10.0   "/usr/bin/pod"           19 minutes ago   Up 19 minutes           k8s_POD_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_0
20f4f86bdd07   docker.io/openshift/origin-pod:v3.10.0   "/usr/bin/pod"           19 minutes ago   Up 19 minutes           k8s_POD_master-etcd-master.vnet.de_kube-system_a2c858fccd481c334a9af7413728e203_0
However, nothing is listening on port 8443 on the master node, so it's no
wonder the Ansible installer complains. What also strikes me in the docker ps
output above: etcd and controllers each have a running container next to their
pause (POD) container, but for master-api only the pause container is there;
there is no actual api container, which would explain the closed port.
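Happy to gather more data; these are the commands I can run on the master (the
k8s_api name pattern is inferred from the container names above, and I'm
assuming the Origin node unit is named origin-node):

# confirm nothing is listening on the API port
sudo ss -tlnp | grep 8443
# look for an exited or crash-looping api container (docker ps above shows none running)
sudo docker ps -a | grep k8s_api
# if one exists, pull its logs
sudo docker logs $(sudo docker ps -aq --filter name=k8s_api | head -1)
# node/kubelet logs, which should say why master-api never started
sudo journalctl -u origin-node --no-pager | tail -50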
The machines are running plain CentOS 7.5; I ran
openshift-ansible/playbooks/prerequisites.yml first and then
openshift-ansible/playbooks/deploy_cluster.yml.
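Concretely, the two runs looked like this (the inventory path is a
placeholder):

ansible-playbook -i ~/vnet-inventory openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i ~/vnet-inventory openshift-ansible/playbooks/deploy_cluster.yml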
I've double-checked the installation documentation and my Vagrant config, and
everything looks correct to me.
Any ideas/advice?
Regards,
Marc