I noticed that 3.7 was tagged as released on GitHub today. Was there additional
work needed prior to the formal announcement of the release?
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/list
I had LDAP auth working with Active Directory. I didn't like the id mapping
and decided to change it.
I wiped out the old identities from the system and did a restart of the master
service.
Now I cannot log in. I reverted my change to the id attribute and restarted; I still
cannot log in. No errors an
If you installed with openshift-ansible (git or rpm), by default you will get an
htpasswd-based auth provider.
It will contain no users. You will need to add users from the host command
line, for example:
htpasswd /etc/origin/master/htpasswd <username>
No service needs to be restarted.
You didn’t bother to say how you installed openshift.
You can verify that you have created a user by using the command line: ‘oc get
users’
If that doesn’t provide you the details, then there could be other things wrong
in your original setup.
level didn't provide any insight into the problem.
From: Brigman, Larry
Sent: Thursday, November 30, 2017 1:13 PM
To: 'users@lists.openshift.redhat.com'
Subject: login debug?
I had LDAP auth working with Active Directory. I didn't like the id mapping
and decided to change it.
Is there a good place to get a reasonable description of what the CIDR should
be for external IPs when
you don't have control over the router or DNS?
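For reference, the range the cluster will accept for external IPs is set in the master config. The CIDR below is purely a placeholder — a sketch, assuming a routable range you control:

```yaml
# /etc/origin/master/master-config.yaml (fragment, hypothetical CIDR)
networkConfig:
  # services may request externalIPs only from ranges listed here
  externalIPNetworkCIDRs:
  - 172.29.0.0/24
```

Without control over the router, traffic to that range still has to reach a node somehow, so the CIDR usually needs to cover addresses already routed to the hosts.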
After I have upgraded from 3.6 to 3.7 it looked like things were working.
Yesterday, as I was exploring builds, I got an error where the build container
could not resolve any external hosts.
The troubleshooting guide recommended that restarting the docker daemon would
clear this error which it di
I have been experiencing DNS lookup failures. This is preventing production
deployment of OpenShift.
I see it in two cases: lookup of a remote Docker registry and lookup of an LDAP
service. Neither is local to the server(s) in question; both resolve only
through our internal DNS servers.
The ldap cas
The logging from dnsmasq was insightful. It looks like the lookups favor the
second server in the list.
In my case, the second server was for quick offsite lookups, so it was failing
the local lookups.
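By default dnsmasq is free to query whichever upstream server responds fastest, which matches the behavior described above. Two ways to pin this down in dnsmasq's own config — the zone name and server IP below are placeholders:

```
# /etc/dnsmasq.conf (sketch, hypothetical zone/IP)
# query upstream servers strictly in the order they appear in resolv.conf
strict-order
# or pin internal zones to the internal resolver so the offsite
# server is never asked about them
server=/example.internal/10.0.0.53
```

`strict-order` restores "first server wins" behavior; the `server=/domain/ip` form is usually the safer fix because it keeps internal names off external resolvers entirely.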
From: Brigman, Larry
Sent: Thursday, February 22, 2018 3:57 PM
To: Clayton Coleman
Cc
Looks like CentOS has released an update to Docker. The playbooks want to
install it, but another check says it cannot use anything newer than 1.12.
None of the variables allow overriding this setting when using the rpm packages.
The only way I found to get this working is to
I was trying to figure out how to install 3.9 alpha4
That repository is available but I don't have an example for the inventory to
use.
The repo has this URL:
https://cbs.centos.org/repos/paas7-openshift-origin39-testing/x86_64/os/
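One way to wire that repo into the install is the `openshift_additional_repos` inventory variable. Untested sketch — the repo id/name are made up, and gpgcheck is disabled because the testing repo may not be signed:

```ini
[OSEv3:vars]
# hypothetical entry pointing at the origin39 testing repo
openshift_additional_repos=[{'id': 'origin-39-testing', 'name': 'origin-39-testing', 'baseurl': 'https://cbs.centos.org/repos/paas7-openshift-origin39-testing/x86_64/os/', 'enabled': 1, 'gpgcheck': 0}]
```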
Shouldn’t that last tag have been an -rc0 instead of v3.9.0?
From: users-boun...@lists.openshift.redhat.com
[mailto:users-boun...@lists.openshift.redhat.com] On Behalf Of Clayton Coleman
Sent: Tuesday, March 27, 2018 3:44 PM
To: Troy Dawson
Cc: users ; The CentOS developers mailing list.
; dev
Subj
I configured one of our clusters to use LDAP against our AD.
Here is my line from the inventory (obfuscated), handling both local and LDAP:
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',
'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename':
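For anyone reconstructing a similar line: a full two-provider entry would look roughly like the sketch below. Everything after the htpasswd provider is an illustration with placeholder AD names (`ad.example.com`, the bind DN, the password) rather than the original poster's values:

```ini
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}, {'name': 'ad_ldap', 'login': 'true', 'challenge': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['sAMAccountName'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['sAMAccountName']}, 'bindDN': 'CN=svc-bind,OU=Service,DC=example,DC=com', 'bindPassword': 'changeme', 'insecure': 'true', 'url': 'ldap://ad.example.com:389/DC=example,DC=com?sAMAccountName'}]
```

Note that whichever attribute is listed under `id` becomes the identity key — which is exactly why changing it after users exist orphans the old identity mappings, as described earlier in the thread.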
During a recent install of OpenShift we noticed a new version, 3.7.1. The
install completed.
When we deployed our applications, which use PVCs for data storage, the mounts
would not complete.
They would continually time out.
Here is the error from the events
24m 24m 1 git-2
I can get OpenShift 3.9 running with anything that openshift-ansible installs.
When I go to run my pods, the deployment container is always Pending.
I've tried it with master nodes and with infra nodes in my cluster.
Scheduling Failed 0/3 nodes available
The nodes all report ready.
What is t
false to true for my master nodes to allow the web-console pods to
run on the masters.
todd
Today's Topics:
2. getting pods running on 3.9 masters (Brigman, Larry)
Message: 2
Date: Fri, 4 May 2018 15:58:13 +
From: "Brigman, Larry"
To: use
I'm looking for the proper way to configure OpenShift HA without a load balancer.
The inventory file says it can be done, but nothing I try actually gets
the cluster into a state that allows logins or API responses from
anything other than the first node in the cluster.
Note: It is prompted by this comment
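For context, the relevant inventory knobs for native HA look roughly like this. The hostnames are placeholders, and the key assumption (not stated in the inventory file itself) is that something — round-robin DNS, a keepalived VIP, etc. — must still resolve the cluster hostname to a live master even when no LB is present:

```ini
[OSEv3:vars]
openshift_master_cluster_method=native
# must resolve to the masters for API traffic (hypothetical names)
openshift_master_cluster_hostname=okd-internal.example.com
openshift_master_cluster_public_hostname=okd.example.com
```

If those names only ever resolve to the first master, the symptom described above (only the first node answers) is exactly what you would see.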
Running into an installation failure that seems to be network related, but I
cannot find the root cause.
Running the installation on a single node with an all-in-one config.
The host has two networks.
Primary: 172.16.192.12/24
Secondary: 172.16.193.55/24
Docker is using standard networking: 172.1
The openshift-ansible install pulls from docker.io without any special
configuration required.
If you have registry keys that you previously used for OSE, then remove
those and anything that references the Red Hat registry.
From: users-boun...@lists.openshift.redhat.com
On Behalf Of Subhendu Gh
I'm having a problem with the internal OpenShift certs as they are about to
expire on multiple clusters that I have.
I run the redeploy_certs playbook with the following options:
-e openshift_redeploy_openshift_ca=true -e
openshift_master_bootstrap_auto_approve=true
It redeploys all of the cert
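Before (and after) running the redeploy playbooks, it is worth checking how long a cert actually has left. A minimal sketch using plain openssl — on a master you would point it at e.g. /etc/origin/master/ca.crt; here a throwaway self-signed cert is generated just to demonstrate the commands:

```shell
# generate a throwaway cert valid for 30 days (stand-in for a cluster cert)
mkdir -p /tmp/cert-check-demo
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout /tmp/cert-check-demo/key.pem \
  -out /tmp/cert-check-demo/cert.pem \
  -subj "/CN=demo"
# print the expiry date
openssl x509 -enddate -noout -in /tmp/cert-check-demo/cert.pem
# exit non-zero if the cert expires within 30 days (2592000 seconds)
openssl x509 -checkend 2592000 -noout -in /tmp/cert-check-demo/cert.pem \
  || echo "certificate expires within 30 days"
```

Running the `-checkend` test against each cert under /etc/origin/ is a quick way to confirm whether the redeploy actually rotated everything.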
Why isn't there a newer openshift-ansible package for CentOS 7?
The last one distributed is 3.11.37-1.
The tags in the openshift-ansible git repo seem to indicate that 3.11.232-1 should
be available.
Any clues on who to contact to get the builds started again?
Running the same playbook from the git repo tagged [3.11.223-1]
completes without errors but does not update the
/etc/origin/node/node.kubeconfig file.
Openshift is version 3.11
oc v3.11.0+62803d0-1
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://okd.example.com:8443
o
We are building with some of the images that are provided as example base
images in the OKD examples repo.
These builds are now failing due to changes in Docker's pull rate limits.
Are there any plans to move these base images to another registry?
Yes, the CentOS images, but also those affecting OKD 3.11 installs.
From: Ben Parees
Sent: Monday, November 9, 2020 10:03 AM
To: Brigman, Larry ; Honza Horak
; Gabe Montero
Cc: OpenShift Users List
Subject: Re: Builds with public images failing
External (bpar
Is there a way to use quay.io to do a OKD 3.11 install?
Change the registry URL?
From: Brigman, Larry
Sent: Monday, November 9, 2020 10:13 AM
To: Ben Parees ; Honza Horak ; Gabe
Montero
Cc: OpenShift Users List
Subject: Re: Builds with public images failing
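If the images are mirrored there, the usual knob is `oreg_url` in the inventory. This is only a sketch — I have not verified that every 3.11 component image exists under this quay.io namespace, so treat the URL as an assumption to check first:

```ini
[OSEv3:vars]
# hypothetical: pull origin images from quay.io instead of docker.io
oreg_url=quay.io/openshift/origin-${component}:${version}
```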
I understand that OKD 3.11 is old, but we are still using it in production.
Is there a flag or something during the installation that can be used to allow
the nodes to work correctly with DHCP?
We have found that the IP address is embedded in the certs and when the IP
address of the nodes change d
certs from the OKD provider, if you are
>unable to associate them with all your ips statically.
>I haven't been working with this stuff in a while, but that is the high-level
>solution.
>Respectfully,
>Martes G Wigglesworth
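Since the node IP ends up in the cert's Subject Alternative Names, it is easy to confirm the mismatch after a DHCP lease change. A sketch — on a real node you would inspect /etc/origin/node/server.crt; here a demo cert with an IP SAN is generated to show the output. The `-addext` flag assumes OpenSSL 1.1.1 or newer:

```shell
# create a demo cert embedding an IP SAN (IP/hostname are placeholders)
mkdir -p /tmp/san-demo
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/san-demo/key.pem -out /tmp/san-demo/cert.pem \
  -subj "/CN=node1" \
  -addext "subjectAltName=DNS:node1.example.com,IP:172.16.192.12"
# list the Subject Alternative Names embedded in the cert
openssl x509 -in /tmp/san-demo/cert.pem -noout -ext subjectAltName
```

If the node's current IP is missing from that list, TLS verification against the node will fail until the certs are regenerated, which matches the behavior described above.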
>>- Original Message -