Re[2]: nginx in front of haproxy ?

2018-01-03 Thread Aleksandar Lazic

Hi.

@Fabio: When you use the advanced setup you should set 
`openshift_master_cluster_public_hostname` to `hosting.wfp.org` and rerun 
the install playbook.
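
For reference, a minimal sketch of what that could look like in the Ansible 
inventory, and how to rerun the installer (the hostname comes from this thread; 
the playbook path follows the 3.7-era openshift-ansible layout and may differ 
in your checkout):

[OSEv3:vars]
# public hostname users reach through nginx for the console/API
openshift_master_cluster_public_hostname=hosting.wfp.org

# then rerun the installer against your inventory
ansible-playbook -i /path/to/inventory playbooks/byo/config.yml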


I suggest taking a look at the now very detailed documentation.

https://docs.openshift.org/latest/install_config/install/advanced_install.html#configuring-cluster-variables
https://docs.openshift.org/latest/install_config/install/advanced_install.html#running-the-advanced-installation-system-container

It's long, but worth reading.

-- Original Message --
From: "Joel Pearson" 
To: "Fabio Martinelli" 
Cc: users@lists.openshift.redhat.com
Sent: 03.01.2018 20:59:59
Subject: Re: nginx in front of haproxy ?

It’s also worth mentioning that the console is not haproxy. That is the 
router, which runs on the infrastructure nodes. The console/API server 
runs something else.
The 'something else' is the OpenShift master server or the API servers, 
depending on the setup.


Regards
Aleks



On Wed, 3 Jan 2018 at 1:46 am, Fabio Martinelli wrote:
It was actually necessary to rewrite the master-config.yaml in this other 
way, basically removing all the :8443 strings from the 'public' fields, 
i.e. letting them implicitly appear as :443.
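
For illustration, an abbreviated excerpt of the 'public' fields in question 
(taken from the full master-config.yaml quoted later in this thread; note that 
the internal servingInfo keeps listening on 8443):

assetConfig:
  masterPublicURL: https://hosting.wfp.org
  publicURL: https://hosting.wfp.org/console/
  servingInfo:
    bindAddress: 0.0.0.0:8443
masterPublicURL: https://hosting.wfp.org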

[snipp]



The strange PHP error message was due to another service listening on 
port 8443 on the same host where nginx is running!
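
A quick way to spot such a conflict (a generic sketch, run on the nginx host):

# show what is already bound to port 8443
ss -tlnp | grep ':8443'
# or on older systems
netstat -tlnp | grep ':8443'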





Following this post https://github.com/openshift/origin/issues/17456 
our nginx setup now looks like this:


upstream openshift-cluster-webconsole {
    ip_hash;
    server wfpromshap21.global.wfp.org:8443;
    server wfpromshap22.global.wfp.org:8443;
    server wfpromshap23.global.wfp.org:8443;
}

server {
    listen      10.11.40.99:80;
    server_name hosting.wfp.org;
    return 301 https://$server_name$request_uri;
}

server {
    listen      10.11.40.99:443;
    server_name hosting.wfp.org;

    access_log /var/log/nginx/hosting-console-access.log;
    #access_log off;
    error_log  /var/log/nginx/hosting-console-error.log  crit;

    include /data/nginx/includes.d/ssl-wfp.conf;
    include /data/nginx/includes.d/error.conf;
    include /data/nginx/includes.d/proxy.conf;

    proxy_set_header Host $host;

    location / {
        proxy_pass https://openshift-cluster-webconsole;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
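
A quick sanity check of the proxy, assuming DNS and certificates are already 
in place, could look like this (hostname taken from the config above):

# should return the 301 redirect from nginx
curl -I http://hosting.wfp.org/
# should reach the web console through the upstream masters
curl -kI https://hosting.wfp.org/console/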

And it seems to work, nicely masking the 3 Web Consoles.


Re: nginx in front of haproxy ?

2018-01-03 Thread Joel Pearson
It’s also worth mentioning that the console is not haproxy. That is the
router, which runs on the infrastructure nodes. The console/API server runs
something else.
On Wed, 3 Jan 2018 at 1:46 am, Fabio Martinelli <fabio.martinelli.1...@gmail.com> wrote:

> It was actually necessary to rewrite the master-config.yaml in this other
> way, basically removing all the :8443 strings from the 'public' fields, i.e.
> letting them implicitly appear as :443.
>
> admissionConfig:
>   pluginConfig:
>     BuildDefaults:
>       configuration:
>         apiVersion: v1
>         env: []
>         kind: BuildDefaultsConfig
>         resources:
>           limits: {}
>           requests: {}
>     BuildOverrides:
>       configuration:
>         apiVersion: v1
>         kind: BuildOverridesConfig
>     PodPreset:
>       configuration:
>         apiVersion: v1
>         disable: false
>         kind: DefaultAdmissionConfig
>     openshift.io/ImagePolicy:
>       configuration:
>         apiVersion: v1
>         executionRules:
>         - matchImageAnnotations:
>           - key: images.openshift.io/deny-execution
>             value: 'true'
>           name: execution-denied
>           onResources:
>           - resource: pods
>           - resource: builds
>           reject: true
>           skipOnResolutionFailure: true
>         kind: ImagePolicyConfig
> aggregatorConfig:
>   proxyClientInfo:
>     certFile: aggregator-front-proxy.crt
>     keyFile: aggregator-front-proxy.key
> apiLevels:
> - v1
> apiVersion: v1
> assetConfig:
>   extensionScripts:
>   - /etc/origin/master/openshift-ansible-catalog-console.js
>   logoutURL: ""
>   masterPublicURL: https://hosting.wfp.org
>   metricsPublicURL: https://metrics.hosting.wfp.org/hawkular/metrics
>   publicURL: https://hosting.wfp.org/console/
>   servingInfo:
>     bindAddress: 0.0.0.0:8443
>     bindNetwork: tcp4
>     certFile: master.server.crt
>     clientCA: ""
>     keyFile: master.server.key
>     maxRequestsInFlight: 0
>     requestTimeoutSeconds: 0
> authConfig:
>   requestHeader:
>     clientCA: front-proxy-ca.crt
>     clientCommonNames:
>     - aggregator-front-proxy
>     extraHeaderPrefixes:
>     - X-Remote-Extra-
>     groupHeaders:
>     - X-Remote-Group
>     usernameHeaders:
>     - X-Remote-User
> controllerConfig:
>   election:
>     lockName: openshift-master-controllers
>   serviceServingCert:
>     signer:
>       certFile: service-signer.crt
>       keyFile: service-signer.key
> controllers: '*'
> corsAllowedOrigins:
> - (?i)//127\.0\.0\.1(:|\z)
> - (?i)//localhost(:|\z)
> - (?i)//10\.11\.41\.85(:|\z)
> - (?i)//kubernetes\.default(:|\z)
> - (?i)//kubernetes\.default\.svc\.cluster\.local(:|\z)
> - (?i)//kubernetes(:|\z)
> - (?i)//openshift\.default(:|\z)
> - (?i)//hosting\.wfp\.org(:|\z)
> - (?i)//openshift\.default\.svc(:|\z)
> - (?i)//172\.30\.0\.1(:|\z)
> - (?i)//wfpromshap21\.global\.wfp\.org(:|\z)
> - (?i)//openshift\.default\.svc\.cluster\.local(:|\z)
> - (?i)//kubernetes\.default\.svc(:|\z)
> - (?i)//openshift(:|\z)
> dnsConfig:
>   bindAddress: 0.0.0.0:8053
>   bindNetwork: tcp4
> etcdClientInfo:
>   ca: master.etcd-ca.crt
>   certFile: master.etcd-client.crt
>   keyFile: master.etcd-client.key
>   urls:
>   - https://wfpromshap21.global.wfp.org:2379
>   - https://wfpromshap22.global.wfp.org:2379
>   - https://wfpromshap23.global.wfp.org:2379
> etcdStorageConfig:
>   kubernetesStoragePrefix: kubernetes.io
>   kubernetesStorageVersion: v1
>   openShiftStoragePrefix: openshift.io
>   openShiftStorageVersion: v1
> imageConfig:
>   format: openshift/origin-${component}:${version}
>   latest: false
> kind: MasterConfig
> kubeletClientInfo:
>   ca: ca-bundle.crt
>   certFile: master.kubelet-client.crt
>   keyFile: master.kubelet-client.key
>   port: 10250
> kubernetesMasterConfig:
>   apiServerArguments:
>     runtime-config:
>     - apis/settings.k8s.io/v1alpha1=true
>     storage-backend:
>     - etcd3
>     storage-media-type:
>     - application/vnd.kubernetes.protobuf
>   controllerArguments:
>   masterCount: 3
>   masterIP: 10.11.41.85
>   podEvictionTimeout:
>   proxyClientInfo:
>     certFile: master.proxy-client.crt
>     keyFile: master.proxy-client.key
>   schedulerArguments:
>   schedulerConfigFile: /etc/origin/master/scheduler.json
>   servicesNodePortRange: ""
>   servicesSubnet: 172.30.0.0/16
>   staticNodeNames: []
> masterClients:
>   externalKubernetesClientConnectionOverrides:
>     acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
>     burst: 400
>     contentType: application/vnd.kubernetes.protobuf
>     qps: 200
>   externalKubernetesKubeConfig: ""
>   openshiftLoopbackClientConnectionOverrides:
>     acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
>     burst: 600
>     contentType: application/vnd.kubernetes.protobuf
>     qps: 300
>   openshiftLoopbackKubeConfig: openshift-master.kubeconfig
> masterPublicURL: https://hosting.wfp.org
> networkConfig:
> 

Re: openvswitch?

2018-01-03 Thread Tim Dudgeon
Looks like this problem has fixed itself over the last couple of weeks 
(I just updated openshift-ansible on the release-3.7 branch).

That package dependency error is no longer happening.
It now seems possible to deploy a minimal 3.7 distribution using the 
Ansible installer.

I have no idea what the source of the problem was or what has changed.


On 22/12/17 10:09, Tim Dudgeon wrote:


I tried disabling the package checks but this just pushes the failure 
down the line:


  1. Hosts:   host-10-0-0-10, host-10-0-0-12, host-10-0-0-13, host-10-0-0-6, host-10-0-0-9
     Play:    Configure nodes
     Task:    Install sdn-ovs package
     Message: Error: Package: origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64 (centos-openshift-origin37)
              Requires: openvswitch >= 2.6.1

Something seems broken with the package dependencies?

This happens when trying to install v3.7 using openshift-ansible from 
branch release-3.7.

openshift_deployment_type=origin
openshift_release=v3.7
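
For reference, skipping those health checks is normally done with an extra 
inventory variable along these lines (a sketch; the check names are the ones 
reported in the health-check output quoted below):

openshift_disable_check=package_availability,package_version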


On 21/12/17 16:48, Tim Dudgeon wrote:


Yes, but is this error a result of broken dependencies in the RPMs?
There's no mention of needing to install openvswitch as part of the 
prerequisites mentioned here:
https://docs.openshift.org/latest/install_config/install/host_preparation.html 




On 20/12/17 20:27, Joel Pearson wrote:
It’s in the paas repo 
http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
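On CentOS 7 that repo is usually enabled through the PaaS SIG release 
package, roughly like this (a sketch; package name as published by the 
CentOS PaaS SIG for Origin 3.7):

# pulls in the repo definition for the CentOS PaaS SIG Origin 3.7 packages
yum install -y centos-release-openshift-origin37
# openvswitch should then resolve from that repo
yum install -y openvswitch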
On Thu, 21 Dec 2017 at 1:09 am, Tim Dudgeon wrote:


I just started hitting this error when using the Ansible installer
(installing v3.7 from openshift-ansible on branch release-3.7).

1. Hosts:   host-10-0-0-10, host-10-0-0-13, host-10-0-0-7, host-10-0-0-8, host-10-0-0-9
   Play:    OpenShift Health Checks
   Task:    Run health checks (install) - EL
   Message: One or more checks failed
   Details: check "package_availability":
              Could not perform a yum update.
              Errors from dependency resolution:
                origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64 requires openvswitch >= 2.6.1
              You should resolve these issues before proceeding with an install.
              You may need to remove or downgrade packages or enable/disable yum repositories.

            check "package_version":
              Not all of the required packages are available at their requested version
                openvswitch:['2.6', '2.7', '2.8']
              Please check your subscriptions and enabled repositories.

This was not happening before. Where does openvswitch come from? Can't 
find it in the standard rpm repos.

Tim



Re: Help using ImageStreams, DCs and ImagePullSecrets templates with a GitLab private registry (v3.6)

2018-01-03 Thread Maciej Szulik
Have a look at [1], which should explain how to connect the ImageStream with
the secret. Additionally, there's [2], which explains problems when auth is
delegated to a different URI.

Maciej


[1]
https://docs.openshift.org/latest/dev_guide/managing_images.html#private-registries
[2] https://github.com/openshift/origin/issues/9584
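
To make that concrete, a rough sketch of the objects involved, using the 
'myproject'/'myimage' placeholders from the question (treat the exact flags 
and fields as assumptions to verify against [1]):

# docker-registry secret in the project; the image import controller can use
# credentials from secrets of this type in the namespace
oc create secret docker-registry gitlab-myproject \
  --docker-server=registry.gitlab.com \
  --docker-username=<gitlab-user> \
  --docker-password=<gitlab-token>

# let the default service account use it when pulling pod images
oc secrets link default gitlab-myproject --for=pull

# an ImageStream that tracks the external image and re-imports it on a schedule
apiVersion: v1
kind: ImageStream
metadata:
  name: myimage
spec:
  tags:
  - name: stable
    from:
      kind: DockerImage
      name: registry.gitlab.com/myproject/myimage:stable
    importPolicy:
      scheduled: true

The DeploymentConfig can then reference the ImageStreamTag (myimage:stable) 
with an ImageChange trigger, so a fresh import rolls out a new deployment.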

On Wed, Jan 3, 2018 at 10:34 AM, Alan Christie <achris...@informaticsmatters.com> wrote:

> Hi all,
>
> I’m successfully using DeploymentConfig (DC) and ImagePullSecret (IPS)
> templates with OpenShift Origin v3.6 to spin up my application from a
> container image hosted on a private GitLab registry. But I want the
> deployment to re-deploy when the GitLab image changes, and to do this I
> believe I need to employ an ImageStream.
>
> I’m comfortable with each of these objects and have successfully used
> ImageStreams and DCs with public DockerHub images (that was easy because
> there are so many examples). But I’m stuck trying to pull an image using an
> ImageStream from a private GitLab-hosted docker registry.
>
> The IPS seems to belong to the DC, so how do I get my ImageStream to use
> it? My initial attempts have not been successful. All I get, after a number
> of attempts at this, is the following error on the ImageStream console...
>
> Internal error occurred: Get https://registry.gitlab.com/
> v2/myproject/myimage/manifests/latest: denied: access forbidden.
> Timestamp: 2017-12-28T14:27:12Z Error count: 2.
>
> Where “myproject” and “myimage” are my GitLab project and image names.
>
> My working DC/IPS combo looks something like this…
>
> […]
> imagePullSecrets:
> - name: gitlab-myproject
> containers:
> - image: registry.gitlab.com/myproject/myimage:stable
>   name: myimage
> […]
>
> But what would my DC/IPS/ImageStream objects look like?
>
> Thanks in advance.
>
> Alan Christie.
>
>


Help using ImageStreams, DCs and ImagePullSecrets templates with a GitLab private registry (v3.6)

2018-01-03 Thread Alan Christie
Hi all,

I’m successfully using DeploymentConfig (DC) and ImagePullSecret (IPS) 
templates with OpenShift Origin v3.6 to spin up my application from a container 
image hosted on a private GitLab registry. But I want the deployment to 
re-deploy when the GitLab image changes, and to do this I believe I need to 
employ an ImageStream.

I’m comfortable with each of these objects and have successfully used 
ImageStreams and DCs with public DockerHub images (that was easy because there 
are so many examples). But I’m stuck trying to pull an image using an 
ImageStream from a private GitLab-hosted docker registry.

The IPS seems to belong to the DC, so how do I get my ImageStream to use it? My 
initial attempts have not been successful. All I get, after a number of 
attempts at this, is the following error on the ImageStream console...

Internal error occurred: Get 
https://registry.gitlab.com/v2/myproject/myimage/manifests/latest: denied: 
access forbidden. Timestamp: 2017-12-28T14:27:12Z Error count: 2.

Where “myproject” and “myimage” are my GitLab project and image names.

My working DC/IPS combo looks something like this…

[…]
imagePullSecrets:
- name: gitlab-myproject
containers:
- image: registry.gitlab.com/myproject/myimage:stable
  name: myimage
[…]

But what would my DC/IPS/ImageStream objects look like?

Thanks in advance.

Alan Christie.

