OpenShift Origin on AWS

2018-10-01 Thread Peter Heitman
I've created a CloudFormation Stack for simple lab-test deployments of
OpenShift Origin on AWS. Now I'd like to understand what would be best for
production deployments of OpenShift Origin on AWS. In particular I'd like
to create the corresponding CloudFormation Stack.

I've seen the Install Guide page on Configuring for AWS and I've looked
through the Red Hat QuickStart Guide for OpenShift Enterprise but am still
missing information. For example, the Red Hat QuickStart Guide creates 3
masters, 3 etcd servers and some number of compute nodes. Where are the
routers (infra nodes) located? On the masters or on the etcd servers? How
are the ELBs configured to work with those deployed routers? What if some
of the traffic you are routing is not http/https? What is required to
support that?
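
To make the non-HTTP question concrete, below is a sketch of the kind of
answer I'm imagining: a classic ELB in plain TCP pass-through mode in front
of the infra nodes where the routers run. All subnet, security group and
instance IDs are placeholders, and the 5671/30671 listener is only a
hypothetical example of a non-HTTP service exposed through a NodePort:

Resources:
  RouterELB:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      Scheme: internet-facing
      CrossZone: true
      Subnets:                      # placeholder public subnet IDs, one per AZ
        - subnet-aaaa1111
        - subnet-bbbb2222
      SecurityGroups:
        - sg-cccc3333               # placeholder: allows 80/443 and the extra TCP port
      Instances:                    # placeholder IDs of the infra nodes running the routers
        - i-0123456789abcdef0
        - i-0fedcba9876543210
      Listeners:
        - LoadBalancerPort: "80"    # TCP pass-through so the router handles HTTP/TLS itself
          InstancePort: "80"
          Protocol: TCP
        - LoadBalancerPort: "443"
          InstancePort: "443"
          Protocol: TCP
        - LoadBalancerPort: "5671"  # hypothetical non-HTTP service
          InstancePort: "30671"     # placeholder NodePort on the infra nodes
          Protocol: TCP
      HealthCheck:
        Target: "TCP:443"
        Interval: "10"
        Timeout: "5"
        HealthyThreshold: "3"
        UnhealthyThreshold: "2"

If that is roughly what production deployments do (or if something else is
recommended), a worked example would be ideal.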

I've seen the simple CloudFormation stack
(https://sysdig.com/blog/deploy-openshift-aws/) but haven't found anything
comparable that is closer to production-ready (and that likely builds on the
AWS VPC QuickStart, https://aws.amazon.com/quickstart/architecture/vpc/).

Does anyone have any prior work that they could share or point me to?

Thanks in advance,

Peter Heitman


Re: 3.10 openshift-ansible install is failing - cni not configured

2018-09-12 Thread Peter Heitman
Regarding how I've made progress, I forgot to mention that I also had to set

oreg_url=/openshift/origin-${component}:${version}

otherwise the ansible script still tried to pull
openshift/origin-node:v3.10 from docker.io instead of from my local registry.
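
For completeness, the relevant inventory variables ended up looking roughly
like the sketch below; registry.example.com:5000 is a placeholder for the
local registry host that prefixes oreg_url above, and the insecure-registries
line is only needed if the registry isn't served with a trusted certificate:

[OSEv3:vars]
# local registry that mirrors the openshift/origin-* images (hostname is a placeholder)
system_images_registry=registry.example.com:5000
oreg_url=registry.example.com:5000/openshift/origin-${component}:${version}
openshift_docker_insecure_registries=registry.example.com:5000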


On Wed, Sep 12, 2018 at 3:47 PM Alexander Bartilla <
alexander.barti...@cloudwerkstatt.com> wrote:

> Hi,
>
> This seems to be a general problem with the missing image. There is an
> open issue on github.
>
> https://github.com/openshift/origin/issues/20676
>
> Sounds really strange, especially since this issue was opened around 27
> days ago...
>
> I'm going to try to replicate the issue as soon as I get some spare time
> on my hands.
>
> Is there anyone who experienced similar issues with the 3.10 install?
>
>
> On Wed 12. Sep 2018 at 21:37, Peter Heitman  wrote:
>
>> Thanks to Alexander, I found out that a major part of my problem is that
>> my nodes have a poor internet connection and pulling images from
>> docker.io is either slow or docker.io reports that the manifest is not
>> found. Pulling the images locally, pushing them to a local registry and
>> changing system_images_registry to my local registry helped a lot.
>>
>> However, it seems to consistently fail the first time I run
>> deploy_cluster.yml (the control plane pods do not come up completely - they
>> come up, become ready and then are deleted and started over again in a
>> cycle every 5 seconds or so). If I run deploy_cluster.yml again (without
>> changing anything) the deploy seems to go better the second time.
>>
>> I am unable to enable metrics. First, the ansible installer seems to want
>> to get the metrics images with the tag v3.10.0, which doesn't exist. I tried
>> pulling them down, tagging latest as v3.10.0 and pushing them to my local
>> registry, but the image for openshift/origin-metrics-schema-installer
>> doesn't seem to exist with any tag.
>>
>> Anyway, thanks again, Alexander - this is significant progress even though
>> I'm definitely not ready to move off of 3.9.0 yet.
>>
>> On Tue, Sep 11, 2018 at 1:42 PM Peter Heitman  wrote:
>>
>>> Thanks for the reply. I was pinning the release only because I was
>>> updating a working inventory from 3.9 and forgot that I had pinned that
>>> release to avoid upgrading to 3.10. I've updated the inventory to set
>>> openshift_release="3.10" and commented out openshift_image_tag and
>>> openshift_pkg_version so that the ansible scripts will derive the correct
>>> values. I have re-run the installer using a fresh version of the master and
>>> minion VMs (CentOS 7.5 with docker installed). I get the same error. The
>>> output of systemctl status origin-node on the master is:
>>>
>>> ● origin-node.service - OpenShift Node
>>>Loaded: loaded (/etc/systemd/system/origin-node.service; enabled;
>>> vendor preset: disabled)
>>>Active: active (running) since Tue 2018-09-11 10:31:51 PDT; 3min 29s
>>> ago
>>>  Docs: https://github.com/openshift/origin
>>>  Main PID: 21183 (hyperkube)
>>>CGroup: /system.slice/origin-node.service
>>>└─21183 /usr/bin/hyperkube kubelet --v=2 --address=0.0.0.0
>>> --allow-privileged=true --anonymous-auth=true
>>> --authentication-token-webhook=true
>>> --authentication-token-webhook-cache-ttl=5m --authorization-mode=Webhook
>>> --authorization-webhook-cache-authorized-ttl=5m
>>> --authorization-webhook-cache-unauthorized-ttl=5m
>>> --bootstrap-kubeconfig=/etc/origin/node/bootstrap.kubeconfig
>>> --cadvisor-port=0 --cert-dir=/etc/origin/node/certificates
>>> --cgroup-driver=systemd --client-ca-file=/etc/origin/node/client-ca.crt
>>> --cluster-dns=10.93.233.126 --cluster-domain=cluster.local
>>> --container-runtime-endpoint=/var/run/dockershim.sock --containerized=false
>>> --enable-controller-attach-detach=true
>>> --experimental-dockershim-root-directory=/var/lib/dockershim
>>> --fail-swap-on=false
>>> --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true
>>> --file-check-frequency=0s --healthz-bind-address= --healthz-port=0
>>> --host-ipc-sources=api --host-ipc-sources=file --host-network-sources=api
>>> --host-network-sources=file --host-pid-sources=api --host-pid-sources=file
>>> --hostname-override= --http-check-frequency=0s
>>> --image-service-endpoint=/var/run/dockershim.sock
>>> --iptables-masquerade-bit=0 --kubeconfig=/etc/origin/node/node.kubeconfig
>>> --max-pods=250 --network-plu

Re: 3.10 openshift-ansible install is failing - cni not configured

2018-09-12 Thread Peter Heitman
Thanks to Alexander, I found out that a major part of my problem is that my
nodes have a poor internet connection and pulling images from docker.io is
either slow or docker.io reports that the manifest is not found. Pulling
the images locally, pushing them to a local registry and changing
system_images_registry to my local registry helped a lot.

However, it seems to consistently fail the first time I run
deploy_cluster.yml (the control plane pods do not come up completely - they
come up, become ready and then are deleted and started over again in a
cycle every 5 seconds or so). If I run deploy_cluster.yml again (without
changing anything) the deploy seems to go better the second time.

I am unable to enable metrics. First, the ansible installer seems to want
to get the metrics images with the tag v3.10.0, which doesn't exist. I tried
pulling them down, tagging latest as v3.10.0 and pushing them to my local
registry, but the image for openshift/origin-metrics-schema-installer
doesn't seem to exist with any tag.
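
For reference, the re-tag attempt looked roughly like the sketch below for
each metrics image (registry.example.com:5000 is a placeholder for my local
registry, and origin-metrics-cassandra is just one example):

docker pull docker.io/openshift/origin-metrics-cassandra:latest
docker tag docker.io/openshift/origin-metrics-cassandra:latest \
    registry.example.com:5000/openshift/origin-metrics-cassandra:v3.10.0
docker push registry.example.com:5000/openshift/origin-metrics-cassandra:v3.10.0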

Anyway, thanks again, Alexander - this is significant progress even though
I'm definitely not ready to move off of 3.9.0 yet.

On Tue, Sep 11, 2018 at 1:42 PM Peter Heitman  wrote:

> Thanks for the reply. I was pinning the release only because I was
> updating a working inventory from 3.9 and forgot that I had pinned that
> release to avoid upgrading to 3.10. I've updated the inventory to set
> openshift_release="3.10" and commented out openshift_image_tag and
> openshift_pkg_version so that the ansible scripts will derive the correct
> values. I have re-run the installer using a fresh version of the master and
> minion VMs (CentOS 7.5 with docker installed). I get the same error. The
> output of systemctl status origin-node on the master is:
>
> ● origin-node.service - OpenShift Node
>Loaded: loaded (/etc/systemd/system/origin-node.service; enabled;
> vendor preset: disabled)
>Active: active (running) since Tue 2018-09-11 10:31:51 PDT; 3min 29s ago
>  Docs: https://github.com/openshift/origin
>  Main PID: 21183 (hyperkube)
>CGroup: /system.slice/origin-node.service
>└─21183 /usr/bin/hyperkube kubelet --v=2 --address=0.0.0.0
> --allow-privileged=true --anonymous-auth=true
> --authentication-token-webhook=true
> --authentication-token-webhook-cache-ttl=5m --authorization-mode=Webhook
> --authorization-webhook-cache-authorized-ttl=5m
> --authorization-webhook-cache-unauthorized-ttl=5m
> --bootstrap-kubeconfig=/etc/origin/node/bootstrap.kubeconfig
> --cadvisor-port=0 --cert-dir=/etc/origin/node/certificates
> --cgroup-driver=systemd --client-ca-file=/etc/origin/node/client-ca.crt
> --cluster-dns=10.93.233.126 --cluster-domain=cluster.local
> --container-runtime-endpoint=/var/run/dockershim.sock --containerized=false
> --enable-controller-attach-detach=true
> --experimental-dockershim-root-directory=/var/lib/dockershim
> --fail-swap-on=false
> --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true
> --file-check-frequency=0s --healthz-bind-address= --healthz-port=0
> --host-ipc-sources=api --host-ipc-sources=file --host-network-sources=api
> --host-network-sources=file --host-pid-sources=api --host-pid-sources=file
> --hostname-override= --http-check-frequency=0s
> --image-service-endpoint=/var/run/dockershim.sock
> --iptables-masquerade-bit=0 --kubeconfig=/etc/origin/node/node.kubeconfig
> --max-pods=250 --network-plugin=cni --node-ip= --pod-infra-container-image=
> docker.io/openshift/origin-pod:v3.10.0
> --pod-manifest-path=/etc/origin/node/pods --port=10250 --read-only-port=0
> --register-node=true --root-dir=/var/lib/origin/openshift.local.volumes
> --rotate-certificates=true --tls-cert-file=
> --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
> --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
> --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
> --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
> --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
> --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
> --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
> --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
> --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
> --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
> --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
> --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
> --tls-cipher-suites=TLS_RSA_WITH_AES_128_GCM_SHA256
> --tls-cipher-suites=TLS_RSA_WITH_AES_256_GCM_SHA384
> --tls-cipher-suites=TLS_RSA_WITH_AES_128_CBC_SHA
> --tls-cipher-suites=TLS_RSA_WITH_AES_256_CBC_SHA
> --tls-min-version=VersionTLS12 --tls-private-key-file=
>
> Sep 11 10:35:17 ph67-dev-psh-oso310-master origin-node[21183]: E0911
&

Re: 3.10 openshift-ansible install is failing - cni not configured

2018-09-11 Thread Peter Heitman
getsockopt: connection refused
Sep 11 10:35:18 ph67-dev-psh-oso310-master origin-node[21183]: E0911
10:35:18.669169   21183 reflector.go:205]
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47:
Failed to list *v1.Pod: Get
https://ph67-dev-psh-oso310-master.pdx.hcl.com:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dph67-dev-psh-oso310-master=500=0:
dial tcp 10.93.233.126:8443: getsockopt: connection refused
Sep 11 10:35:18 ph67-dev-psh-oso310-master origin-node[21183]: E0911
10:35:18.670127   21183 reflector.go:205]
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461:
Failed to list *v1.Node: Get
https://ph67-dev-psh-oso310-master.pdx.hcl.com:8443/api/v1/nodes?fieldSelector=metadata.name%3Dph67-dev-psh-oso310-master=500=0:
dial tcp 10.93.233.126:8443: getsockopt: connection refused
Sep 11 10:35:19 ph67-dev-psh-oso310-master origin-node[21183]: E0911
10:35:19.669734   21183 reflector.go:205]
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452:
Failed to list *v1.Service: Get
https://ph67-dev-psh-oso310-master.pdx.hcl.com:8443/api/v1/services?limit=500=0:
dial tcp 10.93.233.126:8443: getsockopt: connection refused
Sep 11 10:35:19 ph67-dev-psh-oso310-master origin-node[21183]: E0911
10:35:19.670769   21183 reflector.go:205]
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47:
Failed to list *v1.Pod: Get
https://ph67-dev-psh-oso310-master.pdx.hcl.com:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dph67-dev-psh-oso310-master=500=0:
dial tcp 10.93.233.126:8443: getsockopt: connection refused
Sep 11 10:35:19 ph67-dev-psh-oso310-master origin-node[21183]: E0911
10:35:19.671644   21183 reflector.go:205]
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461:
Failed to list *v1.Node: Get
https://ph67-dev-psh-oso310-master.pdx.hcl.com:8443/api/v1/nodes?fieldSelector=metadata.name%3Dph67-dev-psh-oso310-master=500=0:
dial tcp 10.93.233.126:8443: getsockopt: connection refused


On Tue, Sep 11, 2018 at 10:41 AM Alexander Bartilla <
alexander.barti...@cloudwerkstatt.com> wrote:

> Hi Peter,
>
> Is there a reason behind pinning the release, image_tag and pkg_version
> variables to this release version? I would recommend you use just 3.10,
> this will ensure that you get the latest version of Openshift installed
>
> Futhermore I found several bugreports with this issue:
>
> https://github.com/openshift/openshift-ansible/issues/7967
> https://bugzilla.redhat.com/show_bug.cgi?id=1568583
> https://bugzilla.redhat.com/show_bug.cgi?id=1568450#c7
>
> Some more logs from the node would help to troubleshoot the problem.
>
> Best regards,
> Alexander
>
> On Tue, Sep 11, 2018 at 3:50 PM, Peter Heitman  wrote:
>
>> I am attempting to use the openshift-ansible installer for 3.10 to deploy
>> OpenShift on 1 master and 3 minions. I am using the same inventory I have
>> been using for 3.9 with the changes shown below. I'm consistently hitting a
>> problem with the control plane pods not appearing. Looking into it, it
>> seems that the CNI plugin is not being configured properly. From systemctl
>> status origin-node, I see the following:
>>
>> E0911 06:19:25.821170   18922 kubelet.go:2143] Container runtime network
>> not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker:
>> network plugin is not ready: cni config uninitialized
>>
>> Is there something I need to add to my 3.10 inventory to address this?
>> Are there other workarounds?
>>
>> - openshift_release=v3.9.0
>> + openshift_release=v3.10.0
>>
>> - openshift_image_tag=v3.9.0
>> + openshift_image_tag=v3.10.0
>> - openshift_pkg_version=-3.9.0
>> + openshift_pkg_version=-3.10.0
>>
>> - openshift_metrics_image_version=v3.9
>> + openshift_metrics_image_version=v3.10
>>
>> - [masters]
>> -  openshift_node_labels="{'node-role.kubernetes.io/master':
>> 'true', 'node-role.kubernetes.io/infra': 'true'}"
>> openshift_schedulable=true
>>
>> + [masters]
>> + 
>>
>> + [masters:vars]
>> + #openshift_node_group_name="node-config-master"
>> + openshift_node_group_name="node-config-master-infra"
>> + openshift_schedulable=true
>>
>> - [compute-nodes]
>> -  openshift_node_labels="{'node-role.kubernetes.io/compute':
>> 'true'}" openshift_schedulable=true
>> -  openshift_node_labels="{'node-role.kubernetes.io/compute':
>> 'true'}" openshift_schedulable=true
>> -  openshift_node_labels="{'node-role.kubernetes.io/compute':
>> 'true'}" openshift_schedulable=true
>>
>> + [compute-nodes]
>> + 
>> + 
>> + 
>>
>> + 

3.10 openshift-ansible install is failing - cni not configured

2018-09-11 Thread Peter Heitman
I am attempting to use the openshift-ansible installer for 3.10 to deploy
OpenShift on 1 master and 3 minions. I am using the same inventory I have
been using for 3.9 with the changes shown below. I'm consistently hitting a
problem with the control plane pods not appearing. Looking into it, it
seems that the CNI plugin is not being configured properly. From systemctl
status origin-node, I see the following:

E0911 06:19:25.821170   18922 kubelet.go:2143] Container runtime network
not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker:
network plugin is not ready: cni config uninitialized

Is there something I need to add to my 3.10 inventory to address this? Are
there other workarounds?

- openshift_release=v3.9.0
+ openshift_release=v3.10.0

- openshift_image_tag=v3.9.0
+ openshift_image_tag=v3.10.0
- openshift_pkg_version=-3.9.0
+ openshift_pkg_version=-3.10.0

- openshift_metrics_image_version=v3.9
+ openshift_metrics_image_version=v3.10

- [masters]
-  openshift_node_labels="{'node-role.kubernetes.io/master':
'true', 'node-role.kubernetes.io/infra': 'true'}" openshift_schedulable=true

+ [masters]
+ 

+ [masters:vars]
+ #openshift_node_group_name="node-config-master"
+ openshift_node_group_name="node-config-master-infra"
+ openshift_schedulable=true

- [compute-nodes]
-  openshift_node_labels="{'node-role.kubernetes.io/compute':
'true'}" openshift_schedulable=true
-  openshift_node_labels="{'node-role.kubernetes.io/compute':
'true'}" openshift_schedulable=true
-  openshift_node_labels="{'node-role.kubernetes.io/compute':
'true'}" openshift_schedulable=true

+ [compute-nodes]
+ 
+ 
+ 

+ [compute-nodes:vars]
+ openshift_node_group_name="node-config-compute"
+ openshift_schedulable=true
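
For anyone hitting the same symptom, the checks I ran on an affected node
look roughly like the sketch below. The file and namespace names are my
assumptions for an RPM-based origin 3.10 install, where the SDN pod is what
writes the CNI configuration:

# the CNI config is written by the SDN pod; if it never starts, this stays empty
ls /etc/cni/net.d/
# expected once things are healthy: 80-openshift-network.conf

# in 3.10 the control plane runs as static pods and the SDN as a daemonset
oc get pods -n kube-system -o wide
oc get pods -n openshift-sdn -o wide

# the node log usually shows why the kubelet cannot reach the API server
journalctl -u origin-node --since "10 minutes ago"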


Restricting access to some Routes

2018-08-30 Thread Peter Heitman
In my deployment there are 5 routes - two of them are from OpenShift
(docker-registry and registry-console) and three of them are specific to my
application. Of the 5, 4 are administrative and shouldn't be accessible to
just anyone on the Internet. One of my application's routes, however, is
required to be accessible to anyone on the Internet.

My question is, what is the best practice to achieve this restriction? Is
there a way to set IP address or subnet restrictions on a route? Do I need
to set up separate nodes and separate routers so that I can use a firewall
to restrict access to the 4 administrative routes while leaving the
Internet-facing service open? Any suggestions?
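
One mechanism that may fit (assuming the default HAProxy router) is the
per-route whitelist annotation, which rejects traffic from any source outside
the listed networks; a sketch, with placeholder names and CIDRs:

apiVersion: v1
kind: Route
metadata:
  name: registry-console
  annotations:
    # only these source networks may reach this route; the router drops everything else
    haproxy.router.openshift.io/ip_whitelist: "10.0.0.0/8 192.168.100.0/24"
spec:
  host: registry-console.apps.example.com
  to:
    kind: Service
    name: registry-console

The heavier alternative would be dedicated routers on separate, firewalled
infra nodes, but as far as I know the annotation keeps everything on a single
router.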

Peter


Re: Ansible/Origin 3.9 deployment now fails because "package(s) are available at a version that is higher than requested"

2018-08-20 Thread Peter Heitman
I agree with you. I've hit this same error when previous versions were
released. I'm not sure why defining the version we want to install (and
then using that version of the openshift-ansible git checkout) isn't
sufficient. As for installing the repo, I do this before I run the
prerequisites playbook, i.e. ansible all -i  -m yum -a
"name=centos-release-openshift-origin39 state=present" --become. That
seems to resolve the issue.

On Mon, Aug 20, 2018 at 10:10 AM Alan Christie <
achris...@informaticsmatters.com> wrote:

> Thanks Peter.
>
> Interestingly it looks like it’s Origin’s own “prerequisites.yml” playbook
> that’s adding the repo that’s causing problems. My instances don’t have
> this repo until I run that playbook.
>
> Why do I have to remove something that’s being added by the prerequisite
> playbook? Especially as my inventory explicitly states
> "openshift_release=v3.9”?
>
> If the answer is “do not run prerequisites.yml” what’s the point of it?
>
> I still wonder why this specific issue is actually an error. Shouldn't it
> be installing the specific version anyway? Shouldn't the error occur if
> there is no 3.9 package, not if there's a 3.10 package?
>
> Incidentally, I’m using the ansible code from "openshift-ansible-3.9.40-1”.
>
> Alan Christie
> achris...@informaticsmatters.com
>
>
>
> On 18 Aug 2018, at 13:36, Peter Heitman  wrote:
>
> See the recent thread "How to avoid upgrading to 3.10". The bottom line is
> to install the 3.9 specific repo. For CentOS that is
> centos-release-openshift-origin39
>
> On Sat, Aug 18, 2018, 2:44 AM Alan Christie <
> achris...@informaticsmatters.com> wrote:
>
>> Hi,
>>
>> I’ve been deploying new clusters of Origin v3.9 using the official
>> Ansible playbook approach for a few weeks now, using what appear to be
>> perfectly reasonable base images on OpenStack and AWS. Then, this week,
>> with no other changes having been made, the deployment fails with this
>> message: -
>>
>> One or more checks failed
>>  check "package_version":
>>Some required package(s) are available at a version
>>that is higher than requested
>>  origin-3.10.0
>>  origin-node-3.10.0
>>  origin-master-3.10.0
>>This will prevent installing the version you requested.
>>Please check your enabled repositories or adjust
>> openshift_release.
>>
>> I can avoid the error, and deploy what appears to be a perfectly
>> functional 3.9, if I add *package_version* to *openshift_disable_check*
>> in the inventory for the deployment. But this is not the right way to deal with
>> this sort of error.
>>
>> Q1) How does one correctly address this error?
>>
>> Q2) Out of interest … why is this specific issue an error? I’ve
>> instructed the playbook to install v3.9. I don't care if there is a 3.10
>> release available - I do care if there is not a 3.9. Shouldn’t the error
>> occur if there is no 3.9 package, not if there’s a 3.10 package?
>>
>> Alan Christie
>> Informatics Matters Ltd.
>>


Re: Ansible/Origin 3.9 deployment now fails because "package(s) are available at a version that is higher than requested"

2018-08-18 Thread Peter Heitman
See the recent thread "How to avoid upgrading to 3.10". The bottom line is
to install the 3.9 specific repo. For CentOS that is
centos-release-openshift-origin39
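
Concretely, that can be done with an ad-hoc ansible run against all hosts
before the prerequisites playbook (the inventory path is a placeholder):

ansible all -i inventory/hosts -m yum \
  -a "name=centos-release-openshift-origin39 state=present" --become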

On Sat, Aug 18, 2018, 2:44 AM Alan Christie <
achris...@informaticsmatters.com> wrote:

> Hi,
>
> I’ve been deploying new clusters of Origin v3.9 using the official Ansible
> playbook approach for a few weeks now, using what appear to be perfectly
> reasonable base images on OpenStack and AWS. Then, this week, with no other
> changes having been made, the deployment fails with this message: -
>
> One or more checks failed
>  check "package_version":
>Some required package(s) are available at a version
>that is higher than requested
>  origin-3.10.0
>  origin-node-3.10.0
>  origin-master-3.10.0
>This will prevent installing the version you requested.
>Please check your enabled repositories or adjust
> openshift_release.
>
> I can avoid the error, and deploy what appears to be a perfectly
> functional 3.9, if I add *package_version* to *openshift_disable_check*
> in the inventory for the deployment. But this is not the right way to deal with
> this sort of error.
>
> Q1) How does one correctly address this error?
>
> Q2) Out of interest … why is this specific issue an error? I’ve instructed
> the playbook to install v3.9. I don't care if there is a 3.10 release
> available - I do care if there is not a 3.9. Shouldn’t the error occur if
> there is no 3.9 package, not if there’s a 3.10 package?
>
> Alan Christie
> Informatics Matters Ltd.
>


How to avoid upgrading to 3.10?

2018-08-14 Thread Peter Heitman
I use ansible to deploy OpenShift. All of my current deployments are 3.9
and I'd like to stay on 3.9 until we can do enough testing on 3.10 to be
comfortable upgrading.

Can someone point me to any documentation on how to avoid the forced
upgrade to 3.10 when I deploy a new instance of OpenShift? I currently
check out release-3.9 of the ansible scripts:

git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
git checkout release-3.9

My inventory has the variables

openshift_release=v3.9
openshift_pkg_version=-3.9.0

and yet I get the error below. How do I stay on 3.9?

Failure summary:


  1. Hosts:ph-dev-pshtest-master.pdx.hcl.com,
ph-dev-pshtest-minion1.pdx.hcl.com,
ph-dev-pshtest-minion2.pdx.hcl.com, ph-dev-pshtest-minion3.pdx.hcl.com
 Play: OpenShift Health Checks
 Task: Run health checks (install) - EL
 Message:  One or more checks failed
 Details:  check "package_version":
   Some required package(s) are available at a version
   that is higher than requested
 origin-3.10.0
 origin-node-3.10.0
 origin-master-3.10.0
   This will prevent installing the version you requested.
   Please check your enabled repositories or adjust
openshift_release.
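
A quick way to see which repository is offering the 3.10 packages on each
host is a plain yum query (a sketch; nothing OpenShift-specific is assumed):

yum repolist enabled | grep -i openshift
yum --showduplicates list origin origin-node origin-master

The fix that eventually worked for me, as discussed in the "Ansible/Origin
3.9 deployment now fails" thread above, was to install the 3.9-specific repo
package, centos-release-openshift-origin39, before running the playbooks.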


Re: Running OpenShift with geo-redundancy?

2018-08-08 Thread Peter Heitman
Thanks for the reply. I have been thinking about that option and will
explore it further.

On Wed, Aug 8, 2018 at 3:41 PM  wrote:

> I asked the product support not too long ago - the official red hat
> answer was: run two clusters with external load balancing. Especially if
> latency is more than a few ms
>
> Greetings
> Klaas
>
>
>
> On 06.08.2018 14:32, Peter Heitman wrote:
> > Does anyone have any experience running OpenShift with geo-redundancy?
> > I'm guessing that when deployed on a platform like AWS, spanning
> > multiple availability zones is sufficient (is it?), but when
> > deploying in our own datacenter we would need geo-redundancy to
> > guarantee the availability of our service. Is that possible? What are
> > the issues?
> >


Running OpenShift with geo-redundancy?

2018-08-06 Thread Peter Heitman
Does anyone have any experience running OpenShift with geo-redundancy? I'm
guessing that when deployed on a platform like AWS, spanning multiple
availability zones is sufficient (is it?), but when deploying in
our own datacenter we would need geo-redundancy to guarantee the
availability of our service. Is that possible? What are the issues?


enabling unsafe sysctls in OSO

2018-07-26 Thread Peter Heitman
I need to set some sysctls in a couple of my DeploymentConfigs for their
pods. I have followed
https://docs.openshift.com/container-platform/3.9/admin_guide/sysctls.html but
when I deploy the pods and exec into the container, the sysctls are not
set. My dc file contains:

objects:
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: cl
    annotations:
      security.alpha.kubernetes.io/sysctls: "net.ipv4.ip_local_port_range=9000 65500"
      security.alpha.kubernetes.io/unsafe-sysctls: "net.core.rmem_default=4194304,net.core.rmem_max=16777216,net.core.wmem_default=262144,net.core.wmem_max=16777216,net.ipv4.tcp_rmem=4096 87380 16777216,net.ipv4.tcp_wmem=4096 65536 16777216"
  spec:

and I have updated the node-config.yaml file with

kubeletArguments:
  node-labels:
  - role=app
  experimental-allowed-unsafe-sysctls:
  - "net.core.*,net.ipv4.tcp_rmem,net.ipv4.tcp_wmem"

On the minions I have the same values set for those sysctls at the host
level.
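
Two things I would double-check (a sketch; the pod name cl-1-abcde is a
placeholder). First, whether the annotations actually land on the pod object:
as far as I understand, the kubelet reads these annotations from the pod
itself, so for a DeploymentConfig they need to sit under
spec.template.metadata.annotations rather than under the DC's own metadata.
Second, whether the values are applied inside the container:

# do the sysctl annotations show up on the running pod?
oc get pod cl-1-abcde -o yaml | grep -A 2 sysctls

# are the values actually in effect inside the container?
oc exec cl-1-abcde -- cat /proc/sys/net/core/rmem_max /proc/sys/net/ipv4/tcp_rmem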

Any ideas on why this is successfully creating the pods but not setting the
sysctls? Has anyone gotten this to work?

Peter