Hello everyone
I was finally able to resolve the issue with the control plane.
The problem was that the master pod could not connect to the etcd pod
because the hostname always resolved to 127.0.0.1 instead of the local
cluster IP. This was due to the Vagrant box I used, which maps the
machine's hostname to 127.0.0.1 in /etc/hosts.
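For anyone hitting the same thing, a quick way to confirm the symptom
is to check what the hostname actually resolves to. A minimal sketch
(the loopback check is only illustrative):

    import socket

    # Resolve this machine's hostname the same way the control-plane
    # components do when they look up the etcd peer by name.
    hostname = socket.gethostname()
    resolved = socket.gethostbyname(hostname)
    print(f"{hostname} -> {resolved}")

    # On the affected Vagrant box this printed 127.0.0.1, because the
    # box maps the hostname to the loopback address in /etc/hosts, so
    # remote pods could never reach the master under that name.
    if resolved.startswith("127."):
        print("hostname resolves to loopback; peers cannot use this name")
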
Hi Rich!
What I understand from the description of your GitHub issue is that
you're trying to have node names set to their IP addresses specifically
when integrating with OpenStack.
My problem is that I don't want to integrate with OpenStack, but the
openshift_facts.py script from the 3.9 release branch
Are you hitting https://github.com/openshift/openshift-ansible/pull/9598?
On 10/9/18 11:25 AM, Dan Pungă wrote:
Thanks for the reply Scott!
I've used the release branches for both 3.9 and 3.10 of the
openshift-ansible project, yes.
I initially checked the openshift_facts.py script flow in the 3.9
branch; looking now at the 3.10 version, I do see the change you're
pointing to.
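For readers without both branches open, the decision being discussed is
roughly of this shape. This is a hypothetical sketch only: pick_nodename
and the use_ip flag are invented for illustration and are not the actual
openshift_facts.py code.

    import socket

    def pick_nodename(use_ip: bool) -> str:
        # Hypothetical illustration, not the openshift_facts.py source:
        # report either the host's primary IP or its resolved FQDN.
        if use_ip:
            # Connect a UDP socket toward an external address to learn
            # which local interface (and thus IP) the default route
            # uses; connect() on a UDP socket sends no packets.
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                s.connect(("8.8.8.8", 53))
                return s.getsockname()[0]
            finally:
                s.close()
        return socket.getfqdn()

    print(pick_nodename(use_ip=False))
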
On 09.10.2018
We have upgraded from the 3.6 reference architecture to the 3.9 AWS
playbooks in openshift-ansible. It took quite a bit of work to get the
nodes ported into the scaling groups. We have upgraded our masters to
3.9 with the BYO playbooks but have not ported them to use scaling
groups yet. We'll
There are CloudFormation templates as part of the 3.6 reference
architecture, but those are now deprecated. I'm using that template at a
client site and it worked fine (I've adapted it to work with 3.9 by
using a static inventory, as we didn't want to revisit our architecture
from scratch). We did