Re: Re-configure openshift cluster using ansible
Thanks. But when I need to change things that are a bit more involved, like different hostnames or certificates, what should I run? An upgrade?

> On 22 Nov 2017, at 2:32, Joel Pearson wrote:
>
> For reference, what you're after is:
>
> openshift_disable_check=disk_availability
>
>> On Wed, Nov 22, 2017 at 5:05 AM Scott Dodson wrote:
>> It really depends on the configuration changes you want to make whether or
>> not you can simply re-run config.yml and get what you're looking for.
>> Things like hostnames that get placed in certs and certain network
>> configuration such as services and cluster CIDR ranges are immutable and
>> cannot be changed via the installer.
>>
>> As far as the health check goes, you should be able to disable any health
>> check by setting the variable that's emitted in the error message.
>>
>>> On Tue, Nov 21, 2017 at 11:25 AM, Alon Zusman wrote:
>>> Hello,
>>> I could not figure out how I can change the inventory file for new
>>> configurations and then re-configure my current cluster.
>>>
>>> Whenever I re-run configure.yml in the byo folder, it checks the minimal
>>> requirements again, and my /var already has less than 40G available
>>> after the installation.
>>>
>>> Thanks.

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
Re: Re-configure openshift cluster using ansible
For reference, what you're after is:

openshift_disable_check=disk_availability

On Wed, Nov 22, 2017 at 5:05 AM Scott Dodson wrote:
> It really depends on the configuration changes you want to make whether or
> not you can simply re-run config.yml and get what you're looking for.
> Things like hostnames that get placed in certs and certain network
> configuration such as services and cluster CIDR ranges are immutable and
> cannot be changed via the installer.
>
> As far as the health check goes, you should be able to disable any health
> check by setting the variable that's emitted in the error message.
>
>> On Tue, Nov 21, 2017 at 11:25 AM, Alon Zusman wrote:
>>
>> Hello,
>> I could not figure out how I can change the inventory file for new
>> configurations and then re-configure my current cluster.
>>
>> Whenever I re-run configure.yml in the byo folder, it checks the minimal
>> requirements again, and my /var already has less than 40G available after
>> the installation.
>>
>> Thanks.
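In an openshift-ansible inventory that variable goes in the `[OSEv3:vars]` section; a minimal sketch (the check name here is the one from this thread; the names for other checks are emitted in the error message of a failed run):

```ini
[OSEv3:vars]
# Skip the disk space health check on re-runs; multiple checks can be
# disabled as a comma-separated list.
openshift_disable_check=disk_availability
```

It can also be passed ad hoc on the command line instead, e.g. `ansible-playbook -i <inventory> playbooks/byo/config.yml -e openshift_disable_check=disk_availability`.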
Re: OpenShift registry behind registry auth issues
After some internal discussion we've decided that we'll fix this by adding
the service IP address to the default NO_PROXY list. Follow the bug for any
further updates.

On Tue, Nov 21, 2017 at 10:29 AM, Scott Dodson wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1511870 is the bug; we haven't
> fixed it yet. We're debating whether or not to switch to using the DNS
> name, though if environment variables evaluate as expected perhaps we
> should just add NO_PROXY=${KUBERNETES_SERVICE_HOST} and then we can
> address whether or not to switch to DNS later.
>
> On Tue, Nov 21, 2017 at 9:45 AM, Ben Parees wrote:
>> On Tue, Nov 21, 2017 at 1:46 AM, Joel Pearson
>> <japear...@agiledigital.com.au> wrote:
>>> Hi,
>>>
>>> I spent most of the day debugging why my OpenShift registry wasn't
>>> working because the cluster lives behind an HTTP proxy. I can see
>>> OpenShift Ansible configured the registry with proxy settings including
>>> no_proxy, but in the error logs I could see that during authentication
>>> it was trying to talk to the master API server at 172.30.0.1, which
>>> wasn't in the no_proxy env setting, so the proxy was trying to resolve
>>> it and failing.
>>
>> I believe this is a known bug in the ansible installer. Hopefully Scott
>> can point to the issue.
>>
>>> So that can be fixed by adding 172.30.0.1 to no_proxy, but it felt a bit
>>> hacky. A DNS name would be better, as they're easier to wildcard in
>>> no_proxy.
>>>
>>> I want to know how the registry knows to use the IP address of the
>>> master API server instead of a DNS name. I couldn't see a reference to
>>> the API server in /etc/registry. Where does it get that from? Is it part
>>> of a docker secret?
>>
>> The Kubernetes API IP is provided in an env var to the registry pod:
>>
>> KUBERNETES_SERVICE_HOST=172.30.0.1
>>
>>> Thanks,
>>>
>>> Joel
>>
>> --
>> Ben Parees | OpenShift
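The reason the bare IP has to be listed explicitly is that NO_PROXY entries are matched literally against the target host. As an illustration only (this sketches typical client matching behavior, not the exact logic of any particular proxy client):

```shell
# in_no_proxy HOST — return 0 if HOST matches an entry in $NO_PROXY.
# Entries are matched either exactly (e.g. an IP like 172.30.0.1) or as a
# suffix (e.g. ".cluster.local" covers any name under that domain).
in_no_proxy() {
  local host=$1
  local IFS=,
  for entry in $NO_PROXY; do
    case "$host" in
      "$entry"|*"$entry") return 0 ;;
    esac
  done
  return 1
}

NO_PROXY=".cluster.local,172.30.0.1"
in_no_proxy 172.30.0.1 && echo "bypasses proxy"                     # exact IP match
in_no_proxy docker-registry.default.svc.cluster.local \
  && echo "bypasses proxy"                                          # wildcarded by suffix
in_no_proxy 172.30.0.2 || echo "goes through proxy"                 # not listed
```

This is also why a DNS name is nicer: one `.cluster.local` suffix entry covers every service, whereas each service IP would need its own entry.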
Re: Re-configure openshift cluster using ansible
It really depends on the configuration changes you want to make whether or
not you can simply re-run config.yml and get what you're looking for. Things
like hostnames that get placed in certs and certain network configuration
such as services and cluster CIDR ranges are immutable and cannot be changed
via the installer.

As far as the health check goes, you should be able to disable any health
check by setting the variable that's emitted in the error message.

On Tue, Nov 21, 2017 at 11:25 AM, Alon Zusman wrote:
> Hello,
> I could not figure out how I can change the inventory file for new
> configurations and then re-configure my current cluster.
>
> Whenever I re-run configure.yml in the byo folder, it checks the minimal
> requirements again, and my /var already has less than 40G available after
> the installation.
>
> Thanks.
Re-configure openshift cluster using ansible
Hello,
I could not figure out how I can change the inventory file for new
configurations and then re-configure my current cluster.

Whenever I re-run configure.yml in the byo folder, it checks the minimal
requirements again, and my /var already has less than 40G available after
the installation.

Thanks.
Re: OpenShift registry behind registry auth issues
https://bugzilla.redhat.com/show_bug.cgi?id=1511870 is the bug; we haven't
fixed it yet. We're debating whether or not to switch to using the DNS name,
though if environment variables evaluate as expected perhaps we should just
add NO_PROXY=${KUBERNETES_SERVICE_HOST} and then we can address whether or
not to switch to DNS later.

On Tue, Nov 21, 2017 at 9:45 AM, Ben Parees wrote:
> On Tue, Nov 21, 2017 at 1:46 AM, Joel Pearson
> <japear...@agiledigital.com.au> wrote:
>> Hi,
>>
>> I spent most of the day debugging why my OpenShift registry wasn't
>> working because the cluster lives behind an HTTP proxy. I can see
>> OpenShift Ansible configured the registry with proxy settings including
>> no_proxy, but in the error logs I could see that during authentication it
>> was trying to talk to the master API server at 172.30.0.1, which wasn't
>> in the no_proxy env setting, so the proxy was trying to resolve it and
>> failing.
>
> I believe this is a known bug in the ansible installer. Hopefully Scott
> can point to the issue.
>
>> So that can be fixed by adding 172.30.0.1 to no_proxy, but it felt a bit
>> hacky. A DNS name would be better, as they're easier to wildcard in
>> no_proxy.
>>
>> I want to know how the registry knows to use the IP address of the master
>> API server instead of a DNS name. I couldn't see a reference to the API
>> server in /etc/registry. Where does it get that from? Is it part of a
>> docker secret?
>
> The Kubernetes API IP is provided in an env var to the registry pod:
>
> KUBERNETES_SERVICE_HOST=172.30.0.1
>
>> Thanks,
>>
>> Joel
>
> --
> Ben Parees | OpenShift
Re: OpenShift registry behind registry auth issues
On Tue, Nov 21, 2017 at 1:46 AM, Joel Pearson wrote:
> Hi,
>
> I spent most of the day debugging why my OpenShift registry wasn't working
> because the cluster lives behind an HTTP proxy. I can see OpenShift
> Ansible configured the registry with proxy settings including no_proxy,
> but in the error logs I could see that during authentication it was trying
> to talk to the master API server at 172.30.0.1, which wasn't in the
> no_proxy env setting, so the proxy was trying to resolve it and failing.

I believe this is a known bug in the ansible installer. Hopefully Scott can
point to the issue.

> So that can be fixed by adding 172.30.0.1 to no_proxy, but it felt a bit
> hacky. A DNS name would be better, as they're easier to wildcard in
> no_proxy.
>
> I want to know how the registry knows to use the IP address of the master
> API server instead of a DNS name. I couldn't see a reference to the API
> server in /etc/registry. Where does it get that from? Is it part of a
> docker secret?

The Kubernetes API IP is provided in an env var to the registry pod:

KUBERNETES_SERVICE_HOST=172.30.0.1

> Thanks,
>
> Joel

--
Ben Parees | OpenShift
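For anyone following along: Kubernetes injects host/port env vars for the `kubernetes` service into every pod, and clients assemble the master URL from them. A minimal sketch, using the values seen in this thread (172.30.0.1 is the first address of the default OpenShift service CIDR; a different cluster may differ):

```shell
# Env vars Kubernetes injects for the "kubernetes" service into every pod:
KUBERNETES_SERVICE_HOST=172.30.0.1
KUBERNETES_SERVICE_PORT=443

# A client such as the registry derives the API endpoint roughly like this:
MASTER_URL="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"
echo "$MASTER_URL"   # -> https://172.30.0.1:443
```

This is why no reference to the API server shows up in the registry's config files: the address arrives via the pod environment, not via a secret.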
GlusterFS Autoprovisioning with Heketi - Permission Problem/Question
Hello,

we are new to OpenShift and are "playing" with it in our lab. We have set up
OpenShift Origin 3.6 with a dedicated 3-node GlusterFS storage cluster
(CentOS based), distributed/replicated. Thanks to the excellent
documentation we were able to set up auto-provisioning according to this
guide:
https://docs.openshift.org/latest/install_config/storage_examples/dedicated_gluster_dynamic_example.html

It works, and the storage is automatically provisioned and can be used in
our pods! But if we use, for instance, the MariaDB (Persistent) template, we
run into permission issues on the mounted GlusterFS volume. Even a
privileged pod does not help. The pod log:

mkdir: cannot create directory '/var/lib/mysql/data/mysql': Permission denied
Fatal error: Can't create database directory '/var/lib/mysql/data/mysql'

When I debug it in a terminal, the permission problems occur as well:

sh-4.2$ ps waux
USER   PID %CPU %MEM   VSZ  RSS TTY  STAT START TIME COMMAND
mysql    1  0.0  0.0  4316  352 ?    Ss   09:47 0:00 sleep 3600
sh-4.2$ ls -la /var/lib/mysql/data/
total 8
drwxr-xr-x. 4 root  root 4096 Nov 17 16:23 .
drwxrwxr-x. 3 mysql root   18 Oct 31 13:14 ..
drwxr-xr-x. 3 root  root 4096 Nov 17 16:23 .trashcan
sh-4.2$ touch test
touch: cannot touch 'test': Permission denied

The pod runs as the mysql user and has no rights to write here. When I set
the permissions manually on the mount, outside the pod, it works, but that
is not a good solution for auto-provisioning... nor is running MariaDB as
root. We have not found any way to set the permissions with heketi or
somewhere in the YAML.

So what would be the correct solution here? Thanks for your help!
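[Editor's note: no reply is preserved in this archive. One common approach for shared-storage volumes owned by root is to grant the pod a supplemental group matching the GID that owns the volume, via the pod security context. A sketch only; the pod name, image, claim name, and the GID 590 are all assumed example values, not taken from this thread:]

```yaml
# Hypothetical sketch: give the pod's processes an extra group so the mysql
# user can write to a group-writable Gluster-backed mount. Requires the
# volume (or brick) to actually be group-owned and group-writable by GID 590.
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-gluster-example
spec:
  securityContext:
    supplementalGroups: [590]   # assumed GID owning the Gluster volume
  containers:
  - name: mariadb
    image: centos/mariadb-101-centos7
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mariadb        # assumed PVC name
```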