Re[4]: nginx in front of haproxy ?

2018-01-05 Thread Aleksandar Lazic

Hi Fabio.

-- Original Message --
From: "Fabio Martinelli" <fabio.martinelli.1...@gmail.com>
To: "Aleksandar Lazic" <al...@me2digital.eu>
Sent: 04.01.2018 10:34:03
Subject: Re: Re[2]: nginx in front of haproxy ?


Thanks Joel,
that's correct, in this particular case it is not nginx in front of our 
3 haproxy instances but nginx in front of our 3 web consoles; I got 
confused because in our nginx we have other rules pointing to the 3 
haproxy instances, for instance to handle the 'metrics.hosting.wfp.org' case



Thanks Aleksandar,
my inventory sets:

openshift_master_default_subdomain=hosting.wfp.org
openshift_master_cluster_public_hostname={{openshift_master_default_subdomain}}

maybe I should have been more explicit, as you advise, by directly setting:
openshift_master_cluster_public_hostname=hosting.wfp.org

I would do

`openshift_master_cluster_public_hostname=master.{{openshift_master_default_subdomain}}`

The IP for `master.hosting.wfp.org` should be a VIP.

The domain alone is not enough; you also need an IP, e.g. 10.11.40.99, if 
you have not set up a wildcard DNS entry for this domain.
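To illustrate why the two inventory settings differ, here is a minimal toy sketch of how the `{{var}}` interpolation resolves (real Ansible uses full Jinja2 templating; this expander is only for illustration):

```python
import re

def expand(template: str, inventory: dict) -> str:
    """Resolve {{var}} references against inventory variables.
    Toy sketch only; Ansible's Jinja2 templating is far more capable."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: inventory[m.group(1)], template)

inventory = {"openshift_master_default_subdomain": "hosting.wfp.org"}

# The suggested explicit setting resolves to a distinct master hostname:
print(expand("master.{{openshift_master_default_subdomain}}", inventory))
# -> master.hosting.wfp.org

# whereas the original setting made the console hostname identical to the
# application wildcard subdomain:
print(expand("{{openshift_master_default_subdomain}}", inventory))
# -> hosting.wfp.org
```

With the original setting the console and the routed applications share one hostname, which is exactly the kind of overlap that causes confusion between nginx, the routers, and the console.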


https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example#L304-L311

anyway I'm afraid to run Ansible again because of the 2 GlusterFS 
clusters we run, 1 for general data, 1 for the internal registry; 
installing GlusterFS was the hardest part for us. Is there maybe a way 
to skip the GlusterFS part without modifying the inventory file?

Well, I don't know.
How about showing us your inventory file with the sensitive data removed?


best regards,
Fabio


Best regards
Aleks

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re[2]: nginx in front of haproxy ?

2018-01-03 Thread Aleksandar Lazic

Hi.

@Fabio: When you use the advanced setup you should set 
`openshift_master_cluster_public_hostname` to `hosting.wfp.org` and rerun 
the install playbook.


I suggest taking a look at the now very detailed documentation.

https://docs.openshift.org/latest/install_config/install/advanced_install.html#configuring-cluster-variables
https://docs.openshift.org/latest/install_config/install/advanced_install.html#running-the-advanced-installation-system-container

It's long, but worth reading.

-- Original Message --
From: "Joel Pearson" <japear...@agiledigital.com.au>
To: "Fabio Martinelli" <fabio.martinelli.1...@gmail.com>
Cc: users@lists.openshift.redhat.com
Sent: 03.01.2018 20:59:59
Subject: Re: nginx in front of haproxy ?

It's also worth mentioning that the console is not haproxy. That is the 
router, which runs on the infrastructure nodes. The console/API server 
runs something else.

The 'something else' is the OpenShift master server or the API servers, 
depending on the setup.


Regards
Aleks



On Wed, 3 Jan 2018 at 1:46 am, Fabio Martinelli 
<fabio.martinelli.1...@gmail.com> wrote:
It was actually necessary to rewrite the master-config.yaml in this other 
way, basically removing all the :8443 strings from the 'public' fields, 
i.e. making them implicitly appear as :443

[snipp]
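The rewrite described above is mechanical enough to script. A hedged sketch that drops `:8443` only from the public-facing URL fields of a master-config.yaml (the field names are taken from the config dumps in this thread; back up the file before trying anything like this):

```python
import re

# Public-facing URL fields, as seen in the master-config.yaml in this thread.
PUBLIC_FIELDS = ("masterPublicURL", "publicURL", "assetPublicURL")

def strip_public_port(config_text: str) -> str:
    """Remove :8443 from the values of the 'public' URL fields only,
    leaving internal addresses like bindAddress 0.0.0.0:8443 untouched."""
    pattern = re.compile(
        rf"^(\s*(?:{'|'.join(PUBLIC_FIELDS)}):\s+https://[^\s:/]+):8443",
        re.MULTILINE,
    )
    return pattern.sub(r"\1", config_text)

sample = (
    "masterPublicURL: https://hosting.wfp.org:8443\n"
    "assetConfig:\n"
    "  publicURL: https://hosting.wfp.org:8443/console/\n"
    "servingInfo:\n"
    "  bindAddress: 0.0.0.0:8443\n"
)
print(strip_public_port(sample))
```

The anchored field-name alternation is what keeps `servingInfo.bindAddress` intact: the masters keep listening on 8443 while the advertised public URLs become plain :443.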



the strange PHP error message was due to another service listening on 
port 8443 on the same host where nginx is running!
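A port clash like that can be detected before pointing nginx at a backend. A small sketch, using a hypothetical host and port, that reports whether something is already listening:

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful TCP connect, an errno otherwise
        return s.connect_ex((host, port)) == 0

# e.g. check whether another service already holds 8443 on the nginx host:
if port_in_use("127.0.0.1", 8443):
    print("port 8443 is taken - expect surprises like the PHP error above")
```

`ss -tlnp` (or `netstat -tlnp`) on the host answers the same question and also names the owning process.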





Building on this post https://github.com/openshift/origin/issues/17456 
our nginx setup is now:


upstream openshift-cluster-webconsole {
    ip_hash;
    server wfpromshap21.global.wfp.org:8443;
    server wfpromshap22.global.wfp.org:8443;
    server wfpromshap23.global.wfp.org:8443;
}

server {
    listen      10.11.40.99:80;
    server_name hosting.wfp.org;
    return 301 https://$server_name$request_uri;
}

server {
    listen      10.11.40.99:443;
    server_name hosting.wfp.org;

    access_log /var/log/nginx/hosting-console-access.log;
    #access_log off;
    error_log  /var/log/nginx/hosting-console-error.log  crit;

    include /data/nginx/includes.d/ssl-wfp.conf;
    include /data/nginx/includes.d/error.conf;
    include /data/nginx/includes.d/proxy.conf;

    proxy_set_header Host $host;

    location / {
        proxy_pass https://openshift-cluster-webconsole;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

and it seems to work, nicely masking the 3 web consoles.
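For what it's worth, the `ip_hash` directive in the upstream block above is what keeps a given client pinned to one console backend across requests. A rough Python sketch of the idea (nginx's actual ip_hash uses the leading octets of the client address, not this hash; the example IP is made up):

```python
import hashlib

BACKENDS = [
    "wfpromshap21.global.wfp.org:8443",
    "wfpromshap22.global.wfp.org:8443",
    "wfpromshap23.global.wfp.org:8443",
]

def pick_backend(client_ip: str) -> str:
    """Deterministically map a client IP to one backend, ip_hash style."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

# The same client always lands on the same console node:
print(pick_backend("10.11.40.7"))
```

Session stickiness matters here because the web console login state lives per master; without it a round-robin upstream could bounce a browser between consoles mid-session.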


Re: nginx in front of haproxy ?

2018-01-03 Thread Joel Pearson
It's also worth mentioning that the console is not haproxy. That is the
router, which runs on the infrastructure nodes. The console/API server runs
something else.
On Wed, 3 Jan 2018 at 1:46 am, Fabio Martinelli <
fabio.martinelli.1...@gmail.com> wrote:

> It was actually necessary to rewrite the master-config.yaml in this other
> way, basically removing all the :8443 strings from the 'public' fields, i.e.
> making them implicitly appear as :443
>
>
> [snipp]

Re: nginx in front of haproxy ?

2018-01-02 Thread Fabio Martinelli
It was actually necessary to rewrite the master-config.yaml in this other way,
basically removing all the :8443 strings from the 'public' fields, i.e.
making them implicitly appear as :443

admissionConfig:
  pluginConfig:
BuildDefaults:
  configuration:
apiVersion: v1
env: []
kind: BuildDefaultsConfig
resources:
  limits: {}
  requests: {}
BuildOverrides:
  configuration:
apiVersion: v1
kind: BuildOverridesConfig
PodPreset:
  configuration:
apiVersion: v1
disable: false
kind: DefaultAdmissionConfig
openshift.io/ImagePolicy:
  configuration:
apiVersion: v1
executionRules:
- matchImageAnnotations:
  - key: images.openshift.io/deny-execution
value: 'true'
  name: execution-denied
  onResources:
  - resource: pods
  - resource: builds
  reject: true
  skipOnResolutionFailure: true
kind: ImagePolicyConfig
aggregatorConfig:
  proxyClientInfo:
certFile: aggregator-front-proxy.crt
keyFile: aggregator-front-proxy.key
apiLevels:
- v1
apiVersion: v1
assetConfig:
  extensionScripts:
  - /etc/origin/master/openshift-ansible-catalog-console.js
  logoutURL: ""
  masterPublicURL: https://hosting.wfp.org<
  metricsPublicURL: https://metrics.hosting.wfp.org/hawkular/metrics
  publicURL: https://hosting.wfp.org/console/<
  servingInfo:
bindAddress: 0.0.0.0:8443
bindNetwork: tcp4
certFile: master.server.crt
clientCA: ""
keyFile: master.server.key
maxRequestsInFlight: 0
requestTimeoutSeconds: 0
authConfig:
  requestHeader:
clientCA: front-proxy-ca.crt
clientCommonNames:
- aggregator-front-proxy
extraHeaderPrefixes:
- X-Remote-Extra-
groupHeaders:
- X-Remote-Group
usernameHeaders:
- X-Remote-User
controllerConfig:
  election:
lockName: openshift-master-controllers
  serviceServingCert:
signer:
  certFile: service-signer.crt
  keyFile: service-signer.key
controllers: '*'
corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
- (?i)//10\.11\.41\.85(:|\z)
- (?i)//kubernetes\.default(:|\z)
- (?i)//kubernetes\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes(:|\z)
- (?i)//openshift\.default(:|\z)
- (?i)//hosting\.wfp\.org(:|\z)
- (?i)//openshift\.default\.svc(:|\z)
- (?i)//172\.30\.0\.1(:|\z)
- (?i)//wfpromshap21\.global\.wfp\.org(:|\z)
- (?i)//openshift\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes\.default\.svc(:|\z)
- (?i)//openshift(:|\z)
dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://wfpromshap21.global.wfp.org:2379
  - https://wfpromshap22.global.wfp.org:2379
  - https://wfpromshap23.global.wfp.org:2379
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
kind: MasterConfig
kubeletClientInfo:
  ca: ca-bundle.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  apiServerArguments:
runtime-config:
- apis/settings.k8s.io/v1alpha1=true
storage-backend:
- etcd3
storage-media-type:
- application/vnd.kubernetes.protobuf
  controllerArguments:
  masterCount: 3
  masterIP: 10.11.41.85
  podEvictionTimeout:
  proxyClientInfo:
certFile: master.proxy-client.crt
keyFile: master.proxy-client.key
  schedulerArguments:
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: ""
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []
masterClients:
  externalKubernetesClientConnectionOverrides:
acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
burst: 400
contentType: application/vnd.kubernetes.protobuf
qps: 200
  externalKubernetesKubeConfig: ""
  openshiftLoopbackClientConnectionOverrides:
acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
burst: 600
contentType: application/vnd.kubernetes.protobuf
qps: 300
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://hosting.wfp.org<
networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14
  clusterNetworks:
  - cidr: 10.128.0.0/14
hostSubnetLength: 9
  externalIPNetworkCIDRs:
  - 0.0.0.0/0
  hostSubnetLength: 9
  networkPluginName: redhat/openshift-ovs-multitenant
  serviceNetworkCIDR: 172.30.0.0/16
oauthConfig:
  assetPublicURL: https://hosting.wfp.org/console/
  grantConfig:
method: auto
  identityProviders:
  - challenge: true
login: true
mappingMethod: claim
name: htpasswd_auth
provider:
  apiVersion: v1
  file: /etc/origin/master/htpasswd
  kind: 

nginx in front of haproxy ?

2018-01-02 Thread Fabio Martinelli
Hello

as our load balancer I have to set up nginx 1.13.8, configured in HA on 2
nodes via Keepalived, in front of our containerized 3-master Origin 3.7
installation;

seemingly on the 3 masters the master-config.yaml was configured fine by
the Ansible run:

admissionConfig:
  pluginConfig:
BuildDefaults:
  configuration:
apiVersion: v1
env: []
kind: BuildDefaultsConfig
resources:
  limits: {}
  requests: {}
BuildOverrides:
  configuration:
apiVersion: v1
kind: BuildOverridesConfig
PodPreset:
  configuration:
apiVersion: v1
disable: false
kind: DefaultAdmissionConfig
openshift.io/ImagePolicy:
  configuration:
apiVersion: v1
executionRules:
- matchImageAnnotations:
  - key: images.openshift.io/deny-execution
value: 'true'
  name: execution-denied
  onResources:
  - resource: pods
  - resource: builds
  reject: true
  skipOnResolutionFailure: true
kind: ImagePolicyConfig
aggregatorConfig:
  proxyClientInfo:
certFile: aggregator-front-proxy.crt
keyFile: aggregator-front-proxy.key
apiLevels:
- v1
apiVersion: v1
assetConfig:
  extensionScripts:
  - /etc/origin/master/openshift-ansible-catalog-console.js
  logoutURL: ""
  masterPublicURL: https://hosting.wfp.org:8443<
  metricsPublicURL: https://metrics.hosting.wfp.org/hawkular/metrics
  publicURL: https://hosting.wfp.org:8443/console/ <
  servingInfo:
bindAddress: 0.0.0.0:8443
bindNetwork: tcp4
certFile: master.server.crt
clientCA: ""
keyFile: master.server.key
maxRequestsInFlight: 0
requestTimeoutSeconds: 0
authConfig:
  requestHeader:
clientCA: front-proxy-ca.crt
clientCommonNames:
- aggregator-front-proxy
extraHeaderPrefixes:
- X-Remote-Extra-
groupHeaders:
- X-Remote-Group
usernameHeaders:
- X-Remote-User
controllerConfig:
  election:
lockName: openshift-master-controllers
  serviceServingCert:
signer:
  certFile: service-signer.crt
  keyFile: service-signer.key
controllers: '*'
corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
- (?i)//10\.11\.41\.85(:|\z)
- (?i)//kubernetes\.default(:|\z)
- (?i)//kubernetes\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes(:|\z)
- (?i)//openshift\.default(:|\z)
- (?i)//hosting\.wfp\.org(:|\z)
- (?i)//openshift\.default\.svc(:|\z)
- (?i)//172\.30\.0\.1(:|\z)
- (?i)//wfpromshap21\.global\.wfp\.org(:|\z)
- (?i)//openshift\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes\.default\.svc(:|\z)
- (?i)//openshift(:|\z)
dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://wfpromshap21.global.wfp.org:2379
  - https://wfpromshap22.global.wfp.org:2379
  - https://wfpromshap23.global.wfp.org:2379
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
kind: MasterConfig
kubeletClientInfo:
  ca: ca-bundle.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  apiServerArguments:
runtime-config:
- apis/settings.k8s.io/v1alpha1=true
storage-backend:
- etcd3
storage-media-type:
- application/vnd.kubernetes.protobuf
  controllerArguments:
  masterCount: 3
  masterIP: 10.11.41.85
  podEvictionTimeout:
  proxyClientInfo:
certFile: master.proxy-client.crt
keyFile: master.proxy-client.key
  schedulerArguments:
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: ""
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []
masterClients:
  externalKubernetesClientConnectionOverrides:
acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
burst: 400
contentType: application/vnd.kubernetes.protobuf
qps: 200
  externalKubernetesKubeConfig: ""
  openshiftLoopbackClientConnectionOverrides:
acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
burst: 600
contentType: application/vnd.kubernetes.protobuf
qps: 300
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://hosting.wfp.org:8443<
networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14
  clusterNetworks:
  - cidr: 10.128.0.0/14
hostSubnetLength: 9
  externalIPNetworkCIDRs:
  - 0.0.0.0/0
  hostSubnetLength: 9
  networkPluginName: redhat/openshift-ovs-multitenant
  serviceNetworkCIDR: 172.30.0.0/16
oauthConfig:
  assetPublicURL: https://hosting.wfp.org:8443/console/<
  grantConfig:
method: auto
  identityProviders:
  - challenge: true
login: true
mappingMethod: claim
name: htpasswd_auth