Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Marc Boorshtein
How do I label a master? When I create PVCs it switches between 1c and 1a. Looking on the master I see: Creating volume for PVC "wtf3"; chose zone="us-east-1c" from zones=["us-east-1a" "us-east-1c"]. Where did us-east-1c come from? On Fri, Jan 5, 2018 at 11:07 PM Hemant Kumar
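The zone list in that log line is typically built from every availability zone that contains an instance tagged as part of the cluster, so a single stray tagged instance in us-east-1c would be enough to pull volumes there. A quick way to check, assuming the AWS CLI is configured and the cluster uses the KubernetesCluster tag, is to list the tagged instances together with their zones:

    # List every instance carrying the cluster tag, with its availability zone
    aws ec2 describe-instances \
        --filters "Name=tag-key,Values=KubernetesCluster" \
        --query "Reservations[].Instances[].[InstanceId,Placement.AvailabilityZone]" \
        --output table

Any instance reported in us-east-1c here would explain where that zone comes from.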

Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Hemant Kumar
Both nodes and masters. The tag information is picked up from the master itself (where the controller-manager is running), and OpenShift then uses the same value to find all nodes in the cluster. On Fri, Jan 5, 2018 at 10:26 PM, Marc Boorshtein wrote: > node and masters? or just
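One way to verify that the master and the nodes agree on the tag value, again assuming the AWS CLI is available, is to dump the KubernetesCluster tag for every resource that carries it:

    # Show the tag value per instance; all cluster members should match
    aws ec2 describe-tags \
        --filters "Name=key,Values=KubernetesCluster" \
        --query "Tags[].[ResourceId,Value]" \
        --output table

A mismatched or missing value on the master, where the controller-manager reads it, would produce exactly the behaviour described above.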

Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Marc Boorshtein
Nodes and masters? Or just nodes? (It sounded like just nodes from the docs.) On Fri, Jan 5, 2018 at 9:16 PM Hemant Kumar wrote: > Make sure that you configure ALL instances in the cluster with tag > "KubernetesCluster": "value". The value of the tag for key >

Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Hemant Kumar
Make sure that you configure ALL instances in the cluster with the tag "KubernetesCluster": "value". The value of the tag for key "KubernetesCluster" should be the same for all instances in the cluster. You can choose any string you want for the value. You will probably have to restart openshift
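As a sketch, assuming instance IDs i-master1, i-node1, and i-node2 (hypothetical) and an arbitrary cluster name of "origin36", the tag can be applied to all instances in one AWS CLI call:

    # Apply the same KubernetesCluster tag to every instance in the cluster
    aws ec2 create-tags \
        --resources i-master1 i-node1 i-node2 \
        --tags Key=KubernetesCluster,Value=origin36

After tagging, the OpenShift services (at minimum the master controllers) would likely need a restart to pick up the change, as the truncated reply above suggests.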

OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Marc Boorshtein
Hello, I have a brand new Origin 3.6 running on AWS. The master and all nodes are in us-east-1a, but whenever OpenShift has AWS create a new volume, it puts it in us-east-1c. Then no one can access it, and all my pods go into a permanent Pending state because of NoVolumeZoneConflict. Looking at
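Independent of the tagging fix discussed in the replies above, one workaround is to pin dynamic provisioning to the zone where the nodes actually live. A minimal sketch, assuming gp2 volumes and that PVCs can be pointed at a custom StorageClass:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gp2-us-east-1a
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
      # Pin provisioning to the zone the nodes run in
      zone: us-east-1a

PVCs requesting this class would then only ever get volumes in us-east-1a, at the cost of losing zone spreading.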

Re[4]: nginx in front of haproxy ?

2018-01-05 Thread Aleksandar Lazic
Hi Fabio. -- Original Message -- From: "Fabio Martinelli" To: "Aleksandar Lazic" Sent: 04.01.2018 10:34:03 Subject: Re: Re[2]: nginx in front of haproxy ? Thanks Joel, that's correct, in this particular case it is not nginx in

Re: Deployment to OpenStack

2018-01-05 Thread Joel Pearson
Hi Tim, The DNS still needs to be there because the master server uses those hostnames to communicate with the nodes in the cluster. For example, I discovered that when you look at the logs or the terminal in the UI, the master API server opens a connection to the node in question via the DNS
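A quick way to confirm this requirement, assuming a node hostname of node1.openstacklocal (hypothetical here) and the default kubelet port of 10250, is to check from the master that the name resolves and the port is reachable:

    # Resolve the node's hostname the way the master would
    getent hosts node1.openstacklocal
    # Check that the kubelet port the API server proxies to is reachable
    nc -z -w 5 node1.openstacklocal 10250

If either step fails, features that proxy through the API server to the kubelet, such as logs and the terminal, will break.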

Re: Deployment to OpenStack

2018-01-05 Thread Tim Dudgeon
OK, so I tried setting `openstack_use_bastion: True`. Servers were provisioned OK. Public IP addresses were applied only to the infra and DNS nodes (not the master). But the inventory/hosts file that gets auto-generated by this process still contains the "public" hostnames that can't be reached,
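If the generated inventory can't be regenerated with reachable names, one possible workaround, assuming the openshift-ansible host variables openshift_hostname and openshift_public_hostname behave as documented for 3.6, is to override the names per host in the inventory:

    [masters]
    master-0 openshift_hostname=master-0.internal.example.com openshift_public_hostname=master-0.example.com

    [nodes]
    master-0 openshift_hostname=master-0.internal.example.com
    infra-0  openshift_hostname=infra-0.internal.example.com

The example.com names here are placeholders; the point is that openshift_hostname must be a name every cluster member can resolve and reach, while openshift_public_hostname is what external clients use.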