A little more on this.
On the nodes that are not working, the file
/etc/cni/net.d/80-openshift-network.conf is not present.
This seems to cause errors like this in the origin-node service:
Mar 14 18:21:45 zzz-infra.openstacklocal origin-node[17833]: W0314
18:21:45.711715 17833 cni.go:189] Un
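A quick way to confirm which nodes are missing the file is a small check like this (a sketch: the path is taken from the error above, while the helper name and the idea of running it per node are my assumptions):

```shell
#!/bin/sh
# check_cni PATH: report whether a CNI config file exists on this node.
# The default path is the one reported missing above.
check_cni() {
    if [ -f "$1" ]; then
        echo "present: $1"
    else
        echo "MISSING: $1"
    fi
}

check_cni "${CNI_CONF:-/etc/cni/net.d/80-openshift-network.conf}"
```

Run it on each node (e.g. pushed out with ansible); a MISSING result lines up with the cni.go warnings in the origin-node journal.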
Hello,
It's time for our weekly PaaS SIG sync-up meeting.
Time: 1700 UTC on Wednesdays (date -d "1700 UTC")
Date: Today, Wednesday, 14 March 2018
Where: IRC - Freenode - #centos-devel
For those in the United States, remember that we are using UTC time,
and so the time is an hour later than it was las
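The date hint in the announcement expands like this with GNU coreutils date (BSD/macOS date uses different flags):

```shell
# Convert the meeting time to your local timezone with GNU date:
date -d "1700 UTC"                 # today's 17:00 UTC, shown in local time
date -d "2018-03-14 17:00 UTC"     # the specific meeting date
```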
Hi Alfredo,
I set this by installing origin-docker-excluder with yum. The excluder adds the
line below to /etc/yum.conf. You can then edit that line manually and add 1.13
to the exclude list, so yum will not try to install it.
Of course, I also worked around it by downgrading ‘ansible -
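A minimal sketch of that edit, assuming the excluder has already written the exclude= line to /etc/yum.conf (the function name and the docker*1.13* glob are my assumptions, patterned on the exclude line quoted later in this thread):

```shell
# add_docker_exclude FILE: append docker*1.13* to FILE's exclude= line,
# skipping the edit if the pattern is already present (idempotent).
add_docker_exclude() {
    grep -q 'docker\*1\.13\*' "$1" || \
        sed -i '/^exclude=/ s/$/ docker*1.13*/' "$1"
}

# Usage, as root, after `yum install origin-docker-excluder`:
#   add_docker_exclude /etc/yum.conf
```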
You could edit the
openshift-ansible/playbooks/common/openshift-node/restart.yml and add:
max_fail_percentage: 0
under
serial: "{{ openshift_restart_nodes_serial | default(1) }}"
That, in theory, should make it fail straight away.
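In context, the change would look roughly like this (a sketch: the play name and hosts group are my assumptions; only the serial line is quoted from the playbook):

```yaml
- name: Restart nodes
  hosts: oo_nodes_to_config        # assumed group name
  serial: "{{ openshift_restart_nodes_serial | default(1) }}"
  max_fail_percentage: 0           # abort the play as soon as any host fails
```

With serial 1, a single failed node already exceeds the 0% threshold, so the play stops immediately instead of rolling on to the remaining nodes.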
On Wed, Mar 14, 2018 at 9:46 PM Alan Christie <
achris...@infor
Digging deeper, it seems my problem is related to this issue
https://github.com/openshift/openshift-ansible/issues/6693
From: marc.schle...@sdv-it.de
To: users@lists.openshift.redhat.com
Date: 14.03.2018 11:27
Subject: Re: Re: Re: Check if template service broker is running
Ges
Hi,
I’ve been running the Ansible release-3.7 branch playbook, and occasionally I
get errors restarting nodes. I’m not looking for help on why my nodes are not
restarting, but I am curious as to why the playbook continues when there are
fatal errors that eventually lead to a failure some 30 minut
I am trying another fresh install just now, and my ansible script is
hanging for 15 minutes at
TASK [openshift_service_catalog : wait for api server to be ready]
The same thing happened the last few times I tried.
I made a minor adjustment to the ansible script by adding the following
options:
openshift_e
>
> We’re seeing this same issue on a new install of 3.7 or on an upgrade from
> 3.6 to 3.7. We tried the excluder for docker, but the version in the CentOS
> 3.7 repo only excludes down to 1.14. You can manually add 1.13 to the list.
>
> exclude= docker*1.20* docker*1.19* docker*1.18* docker*1.17*
> docker*1.16*
That's what I get from my console:
Logged into "https://openshift.vnet.de:8443" as "system:admin" using existing
credentials.
You have access to the following projects and can switch between them with
'oc project <projectname>':
* default
kube-public
kube-system
logging
management-inf