Custom certificate and the host associated with masterPublicURL
Hi,

I'm trying to understand, from a technical point of view, the hard requirement around namedCertificates and the host name associated with the masterPublicURL vs. the masterURL. The docs [1] say:

> The namedCertificates section should be configured only for the host name associated with the masterPublicURL and oauthConfig.assetPublicURL settings in the */etc/origin/master/master-config.yaml* file. Using a custom serving certificate for the host name associated with the masterURL will result in TLS errors as infrastructure components will attempt to contact the master API using the internal masterURL host.

However, the above note/requirement doesn't apply to the self-signed certificates generated by the openshift-ansible installer, so one can set the same value for both of the following inventory variables:

openshift_master_cluster_public_hostname => maps to *masterPublicURL*
openshift_master_cluster_hostname => maps to *masterURL*

without any side effects, i.e. no TLS errors.

Is there anything "special" about the self-signed certificates produced by the openshift-ansible installer that avoids these TLS errors? If not, I'd expect the same TLS errors as when the namedCertificates section is present.

Dani

[1] https://docs.openshift.com/container-platform/3.10/install_config/certificate_customization.html#configuring-custom-certificates

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
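For context, the namedCertificates stanza the docs refer to lives under servingInfo in master-config.yaml and looks roughly like this (the host name and file paths below are placeholders, not values from this thread):

```yaml
servingInfo:
  namedCertificates:
  # Serve the custom certificate only for the public host name,
  # i.e. the one in masterPublicURL / oauthConfig.assetPublicURL.
  # Internal components keep using the default serving cert when
  # they contact the master via the masterURL host name.
  - certFile: custom-public.crt
    keyFile: custom-public.key
    names:
    - "master.public.example.com"
```

Because certificate selection is driven by the names list, the documented TLS errors only arise when a name matching the internal masterURL host is added here.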
Re: install cluster logging with a local volume
Hi Ali,

this is indeed a possible way to do it: use emptyDir first and then reconfigure the right volume.

On Wed, 29 Aug 2018 at 14:46, Ali Akil wrote:
> Hi everyone,
>
> I don't really understand why the [documentation](https://docs.okd.io/3.10/install/configuring_inventory_file.html#advanced-install-cluster-logging) states only 3 options to configure persistent storage for cluster logging (Option A: Dynamic with glusterfs, Option B: NFS Host Group, Option C: External NFS Host), while another part of the [documentation](https://docs.okd.io/3.10/install_config/aggregate_logging.html#aggregated-elasticsearch) states:
>
> > Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage
>
> The only remaining option for me is a local volume (hostPath), since I need to preserve the ES data, but the documentation does not state how to configure this option in the inventory file. Should I use emptyDir volumes and then reconfigure the logging after the installation?
>
> I want to deploy openshift-origin 3.9 on centos 7.5 with one ES pod.
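A minimal sketch of the reconfiguration step: create a hostPath-backed PV and a matching PVC, then swap it into the Elasticsearch deploymentconfig. All names, sizes, and the path below are placeholders; the project name depends on the release (e.g. "logging" on origin 3.9):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-local              # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/es-data    # node-local directory holding the ES data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-local
  namespace: logging          # logging project on origin 3.9
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

After creating both objects, the claim can be attached to the ES deploymentconfig with something like `oc set volume dc/<es-dc-name> --add --overwrite -t pvc --claim-name=es-local --name=elasticsearch-storage` (the volume name inside the DC may differ; check it with `oc set volume dc/<es-dc-name> --all` first). Note that hostPath pins the pod's data to one node, so the ES pod must be scheduled to that node.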
install cluster logging with a local volume
Hi everyone,

I don't really understand why the [documentation](https://docs.okd.io/3.10/install/configuring_inventory_file.html#advanced-install-cluster-logging) states only 3 options to configure persistent storage for cluster logging (Option A: Dynamic with glusterfs, Option B: NFS Host Group, Option C: External NFS Host), while another part of the [documentation](https://docs.okd.io/3.10/install_config/aggregate_logging.html#aggregated-elasticsearch) states:

> Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage

The only remaining option for me is a local volume (hostPath), since I need to preserve the ES data, but the documentation does not state how to configure this option in the inventory file. Should I use emptyDir volumes and then reconfigure the logging after the installation?

I want to deploy openshift-origin 3.9 on centos 7.5 with one ES pod.
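If it helps others reading along: a sketch of the inventory fragment for the "emptyDir first, reconfigure later" approach, assuming the standard openshift-ansible logging variables (values here are illustrative, not from this thread):

```ini
[OSEv3:vars]
# Deploy aggregated logging with a single Elasticsearch pod.
openshift_logging_install_logging=true
openshift_logging_es_cluster_size=1
# Leaving the persistent-storage variables (such as
# openshift_logging_es_pvc_size) unset should make Elasticsearch
# fall back to emptyDir storage, which can then be replaced with a
# hostPath-backed PV/PVC after the installation completes.
```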