I've managed to proceed with my deployment: I changed my /etc/ansible/hosts
file and the deployment completed successfully.
In case someone encounters similar issues, below is the /etc/ansible/hosts
file I used (note the matching v3.7.0 values for openshift_release,
openshift_image_tag and openshift_pkg_version, which differ from my earlier
file quoted below):
[OSEv3:children]
masters
etcd
nodes
[OSEv3:vars]
openshift_master_default_subdomain=apps.mydomain.com
ansible_ssh_user=root
openshift_release=v3.7.0
openshift_image_tag=v3.7.0
openshift_pkg_version=-3.7.0
openshift_master_cluster_method=native
openshift_master_cluster_hostname=internal-master-centos.mydomain.com
openshift_master_cluster_public_hostname=master-centos.mydomain.com
openshift_deployment_type=origin
openshift_disable_check=memory_availability,disk_availability,docker_image_availability
containerized=true
openshift_docker_options='--selinux-enabled --insecure-registry 172.30.0.0/16'
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
[masters]
<Master IP Address>
[etcd]
<Master IP Address>
[nodes]
<Master IP Address>
<Slave IP Address> openshift_node_labels="{'region':'infra','zone':'default'}"
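One note: the htpasswd_auth provider above expects /etc/origin/htpasswd to
exist on the master. A rough sketch of the steps I'd pair with this inventory
(assuming httpd-tools is installed for the htpasswd command, and that
openshift-ansible is checked out under /root as in my earlier mail; the user
name "admin" is just an example):

# Create the password file referenced by openshift_master_identity_providers
htpasswd -c /etc/origin/htpasswd admin

# Re-run the playbooks against the corrected inventory
ansible-playbook -i /etc/ansible/hosts /root/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i /etc/ansible/hosts /root/openshift-ansible/playbooks/deploy_cluster.yml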
Thanks,
Anda
From: [email protected]
[mailto:[email protected]] On Behalf Of Anda Nicolae
Sent: Friday, March 2, 2018 12:22 PM
To: [email protected]
Subject: Error response from daemon: No such container: origin-node when
Deploying OpenShift Origin v3.7.0 on CentOS 7
Hi all,
I have a CentOS 7 server and I have created 2 virtual machines (VMs) on this
server. CentOS 7 is also the OS that runs on the VMs.
I am trying to deploy OpenShift Origin v3.7.0 on these 2 VMs (the 1st VM is the
master and the 2nd VM is the node).
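Before running the playbooks, it is worth sanity-checking that Ansible can
reach both VMs; a minimal check, assuming the same inventory path:

ansible -i /etc/ansible/hosts all -m ping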
I have run the following commands:
ansible-playbook -i /etc/ansible/hosts \
    /root/openshift-ansible/playbooks/prerequisites.yml
which successfully finishes, and:
ansible-playbook -vvv -i /etc/ansible/hosts \
    /root/openshift-ansible/playbooks/deploy_cluster.yml
which fails with the following error:
fatal: [<Master IP Address>]: FAILED! => {
    "attempts": 3,
    "changed": false,
    "invocation": {
        "module_args": {
            "daemon_reload": false,
            "enabled": null,
            "masked": null,
            "name": "origin-node",
            "no_block": false,
            "state": "restarted",
            "user": false
        }
    }
}
MSG:
Unable to restart service origin-node: Job for origin-node.service failed
because the control process exited with error code. See "systemctl status
origin-node.service" and "journalctl -xe" for details.
META: ran handlers
to retry, use: --limit @/root/openshift-ansible/playbooks/deploy_cluster.retry
PLAY RECAP ***************************************************************************************
<Master IP Address>    : ok=407  changed=75  unreachable=0  failed=1
<Node IP Address>      : ok=53   changed=6   unreachable=0  failed=0
localhost              : ok=13   changed=0   unreachable=0  failed=0
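The output suggests resuming with --limit; for reference, that would look
roughly like this (same inventory and playbook paths as above):

ansible-playbook -vvv -i /etc/ansible/hosts \
    /root/openshift-ansible/playbooks/deploy_cluster.yml \
    --limit @/root/openshift-ansible/playbooks/deploy_cluster.retry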
From journalctl -xe, I have:
umounthook <error>: c0be9a269c90: Failed to read directory /usr/share/oci-umount/oci-umount.d: No such file or directory
master-centos.mydomain.com origin-node[49477]: Error response from daemon: No such container: origin-node
docker ps shows:
[root@master-centos ~]# docker ps
CONTAINER ID   IMAGE                                    COMMAND                  CREATED          STATUS          PORTS   NAMES
6e36d7f538da   openshift/origin:v3.7.1                  "/usr/bin/openshift s"   29 minutes ago   Up 29 minutes           origin-master-controllers
2c4136affcd5   openshift/origin:v3.7.1                  "/usr/bin/openshift s"   29 minutes ago   Up 29 minutes           origin-master-api
308a746e77f5   registry.fedoraproject.org/latest/etcd   "/usr/bin/etcd"          29 minutes ago   Up 29 minutes           etcd_container
ef9f2c58c8f7   openshift/openvswitch:v3.7.1             "/usr/local/bin/ovs-r"   29 minutes ago   Up 29 minutes
Below is my hosts file:
cat /etc/ansible/hosts
[OSEv3:children]
masters
etcd
nodes
[OSEv3:vars]
openshift_master_default_subdomain=apps.mydomain.com
ansible_ssh_user=root
ansible_become=yes
containerized=true
openshift_release=v3.7
openshift_image_tag=v3.7.1
openshift_pkg_version=-3.7.0
openshift_master_cluster_method=native
openshift_master_cluster_hostname=internal-master-centos.mydomain.com
openshift_master_cluster_public_hostname=master-centos.mydomain.com
deployment_type=origin
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_master_overwrite_named_certificates=true
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
openshift_docker_options='--selinux-enabled --insecure-registry 172.30.0.0/16'
openshift_router_selector='region=infra'
openshift_registry_selector='region=infra'
openshift_master_api_port=443
openshift_master_console_port=443
openshift_disable_check=memory_availability,disk_availability,docker_image_availability
[masters]
<Master IP Address> openshift_hostname=master-centos.mydomain.com
[etcd]
<Master IP Address> openshift_hostname=master-centos.mydomain.com
[nodes]
<Master IP Address> openshift_hostname=master-centos.mydomain.com openshift_schedulable=false
<Node IP Address> openshift_hostname=slave-centos.mydomain.com openshift_node_labels="{'router':'true','registry':'true'}"
Do you have any idea why my deployment fails?
Thanks,
Anda