Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Marc Boorshtein
How do I label a master? When I create PVCs the zone switches between
us-east-1c and us-east-1a. Looking on the master I see:

Creating volume for PVC "wtf3"; chose zone="us-east-1c" from
zones=["us-east-1a" "us-east-1c"]

Where did us-east-1c come from???
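(The zone list in that log line is generally built from the zones of the EC2
instances the cloud provider associates with the cluster, which is why the
KubernetesCluster tagging discussed below matters. A quick cross-check, as a
sketch only, assuming the default zone label applied by the AWS cloud provider
in OpenShift 3.6 / Kubernetes 1.6 and a configured oc client:

# show the zone each node is registered in
oc get nodes -L failure-domain.beta.kubernetes.io/zone

If an immediate workaround is needed while the tags are sorted out, a
StorageClass can pin dynamic provisioning to a single zone; again a sketch,
not something taken from this thread:

oc create -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-us-east-1a
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1a
EOF
)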

On Fri, Jan 5, 2018 at 11:07 PM Hemant Kumar  wrote:

> Both nodes and masters. The tag information is picked up from the master
> itself (where the controller-manager is running), and OpenShift then uses the
> same value to find all nodes in the cluster.
>
>
>
>
> On Fri, Jan 5, 2018 at 10:26 PM, Marc Boorshtein 
> wrote:
>
>> Nodes and masters, or just nodes? (It sounded like just nodes from the docs.)
>>
>> On Fri, Jan 5, 2018 at 9:16 PM Hemant Kumar  wrote:
>>
>>> Make sure that you configure ALL instances in the cluster with the tag
>>> "KubernetesCluster": "value". The value of the tag with key
>>> "KubernetesCluster" should be the same for all instances in the cluster. You
>>> can choose any string you want for the value.
>>>
>>> At the very minimum, you will probably have to restart the OpenShift
>>> controller-manager after the change.
>>>
>>>
>>>
>>> On Fri, Jan 5, 2018 at 8:21 PM, Marc Boorshtein 
>>> wrote:
>>>
 Hello,

I have a brand new Origin 3.6 cluster running on AWS. The master and all nodes
are in us-east-1a, but whenever AWS creates a new volume it ends up in
us-east-1c, so nothing can attach it and my pods go into a permanent Pending
state because of NoVolumeZoneConflict. Looking at aws.conf, it states
us-east-1a. What am I missing?

 Thanks



Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Hemant Kumar
Both nodes and masters. The tag information is picked up from the master
itself (where the controller-manager is running), and OpenShift then uses the
same value to find all nodes in the cluster.
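A quick way to see which EC2 instances the cloud provider will treat as part
of the cluster, and which zones they sit in; this is a sketch that assumes a
configured aws CLI, and the tag value column shows whatever value you chose:

aws ec2 describe-instances \
  --filters "Name=tag-key,Values=KubernetesCluster" \
  --query 'Reservations[].Instances[].[InstanceId,Placement.AvailabilityZone,Tags[?Key==`KubernetesCluster`]|[0].Value]' \
  --output table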




On Fri, Jan 5, 2018 at 10:26 PM, Marc Boorshtein 
wrote:

> Nodes and masters, or just nodes? (It sounded like just nodes from the docs.)
>
> On Fri, Jan 5, 2018 at 9:16 PM Hemant Kumar  wrote:
>
>> Make sure that you configure ALL instances in the cluster with the tag
>> "KubernetesCluster": "value". The value of the tag with key
>> "KubernetesCluster" should be the same for all instances in the cluster. You
>> can choose any string you want for the value.
>>
>> At the very minimum, you will probably have to restart the OpenShift
>> controller-manager after the change.
>>
>>
>>
>> On Fri, Jan 5, 2018 at 8:21 PM, Marc Boorshtein 
>> wrote:
>>
>>> Hello,
>>>
>>> I have a brand new Origin 3.6 cluster running on AWS. The master and all nodes
>>> are in us-east-1a, but whenever AWS creates a new volume it ends up in
>>> us-east-1c, so nothing can attach it and my pods go into a permanent Pending
>>> state because of NoVolumeZoneConflict. Looking at aws.conf, it states
>>> us-east-1a. What am I missing?
>>>
>>> Thanks
>>>


Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Marc Boorshtein
Nodes and masters, or just nodes? (It sounded like just nodes from the docs.)

On Fri, Jan 5, 2018 at 9:16 PM Hemant Kumar  wrote:

> Make sure that you configure ALL instances in the cluster with the tag
> "KubernetesCluster": "value". The value of the tag with key
> "KubernetesCluster" should be the same for all instances in the cluster. You
> can choose any string you want for the value.
>
> At the very minimum, you will probably have to restart the OpenShift
> controller-manager after the change.
>
>
>
> On Fri, Jan 5, 2018 at 8:21 PM, Marc Boorshtein 
> wrote:
>
>> Hello,
>>
>> I have a brand new Origin 3.6 cluster running on AWS. The master and all nodes
>> are in us-east-1a, but whenever AWS creates a new volume it ends up in
>> us-east-1c, so nothing can attach it and my pods go into a permanent Pending
>> state because of NoVolumeZoneConflict. Looking at aws.conf, it states
>> us-east-1a. What am I missing?
>>
>> Thanks
>>


Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Hemant Kumar
Make sure that you configure ALL instances in the cluster with the tag
"KubernetesCluster": "value". The value of the tag with key
"KubernetesCluster" should be the same for all instances in the cluster. You
can choose any string you want for the value.

At the very minimum, you will probably have to restart the OpenShift
controller-manager after the change.
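For reference, a sketch of what that can look like with the aws CLI; the
instance IDs and tag value below are placeholders, not values from this
thread:

# apply the same cluster tag to every master and node instance
aws ec2 create-tags \
  --resources i-0aaaaaaaaaaaaaaaa i-0bbbbbbbbbbbbbbbb \
  --tags Key=KubernetesCluster,Value=my-cluster

# then restart the controllers so the AWS cloud provider re-reads the tags
systemctl restart origin-master-controllers   # or origin-master on a single-master install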



On Fri, Jan 5, 2018 at 8:21 PM, Marc Boorshtein 
wrote:

> Hello,
>
> I have a brand new Origin 3.6 cluster running on AWS. The master and all nodes
> are in us-east-1a, but whenever AWS creates a new volume it ends up in
> us-east-1c, so nothing can attach it and my pods go into a permanent Pending
> state because of NoVolumeZoneConflict. Looking at aws.conf, it states
> us-east-1a. What am I missing?
>
> Thanks
>


OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Marc Boorshtein
Hello,

I have a brand new Origin 3.6 cluster running on AWS. The master and all nodes
are in us-east-1a, but whenever AWS creates a new volume it ends up in
us-east-1c, so nothing can attach it and my pods go into a permanent Pending
state because of NoVolumeZoneConflict. Looking at aws.conf, it states
us-east-1a. What am I missing?

Thanks


Re[4]: nginx in front of haproxy ?

2018-01-05 Thread Aleksandar Lazic

Hi Fabio.

-- Original message --
From: "Fabio Martinelli" 
To: "Aleksandar Lazic" 
Sent: 04.01.2018 10:34:03
Subject: Re: Re[2]: nginx in front of haproxy ?


Thanks Joel,
that's correct; in this particular case it is not nginx in front of our
3 haproxy instances but nginx in front of our 3 Web Consoles. I got confused
because in our nginx we have other rules pointing to the 3 haproxy instances,
for instance to handle the 'metrics.hosting.wfp.org' case.



Thanks Aleksandar,
my inventory sets:

openshift_master_default_subdomain=hosting.wfp.org
openshift_master_cluster_public_hostname={{openshift_master_default_subdomain}}

Maybe I should have been more explicit, as you advise, by directly setting:
openshift_master_cluster_public_hostname=hosting.wfp.org

I would do

`openshift_master_cluster_public_hostname=master.{{openshift_master_default_subdomain}}`

The IP for `master.hosting.wfp.org` should be a VIP.

The domain alone is not enough; you need an IP, e.g. 10.11.40.99, if you
have not set up a wildcard DNS entry for this domain.


https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example#L304-L311
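A quick sanity check of both records, as a sketch using the names from this
thread (10.11.40.99 is only the example VIP, and 'myapp' stands for any
application hostname under the wildcard subdomain):

# the master public hostname should resolve to the VIP in front of the masters
dig +short master.hosting.wfp.org

# any name under the default subdomain should resolve to the routers (or the nginx in front of them)
dig +short myapp.hosting.wfp.org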

Anyway, I'm afraid to run Ansible again because of the 2 GlusterFS clusters
we run, 1 for general data and 1 for the internal registry.
Installing GlusterFS was the hardest part for us; is there maybe a way
to skip the GlusterFS part without modifying the inventory file?

Well, I don't know.
How about showing us your inventory file with the sensitive data removed?


best regards,
Fabio


Best regards
Aleks



Re: Deployment to OpenStack

2018-01-05 Thread Joel Pearson
Hi Tim,

The DNS still needs to be there because the master server uses those
hostnames to communicate with the nodes in the cluster. For example, I
discovered that when you look at the logs or the terminal in the UI, the
master API server opens a connection to the node in question via its DNS
name. In my current setup I’m using a non-internet-resolvable DNS name,
openshift.local or something like that. Then I maintain a real DNS domain
to point to the master API server and the infra node.

So, if you can still view logs in the UI, then I’d say the DNS is working
OK.

I’ll have to try this non-floating-IP mode in the future, but it might be a
month or so away.
On Fri, 5 Jan 2018 at 9:30 pm, Tim Dudgeon  wrote:

> OK, so I tried setting `openstack_use_bastion: True`. Servers were
> provisioned OK. Public IP addresses were only applied to the infra and dns
> nodes (not master).
>
> But the inventory/hosts file that gets auto-generated by this process
> still contains the "public" hostnames that can't be reached, even if put
> into DNS. Also, I expected to see a bastion node, but none was created.
>
> I find the docs for this a bit baffling. Is there anyone on this list who
> was involved with creating this who can help get this straight?
> On 04/01/18 23:13, Joel Pearson wrote:
>
> Hi Tim,
>
> Yes, I only discovered what the bastion setting did by looking at the heat
> template, as I was going to try and remove the need for the bastion by
> myself.
>
> I found this line in the heat template:
>
> https://github.com/openshift/openshift-ansible-contrib/blob/master/roles/openstack-stack/templates/heat_stack.yaml.j2#L75
>
> I don't know what provider_network does. But you might want to grep around
> the repo chasing down those settings to see if it suits your purposes. It
> seems a bit undocumented.
>
> In regards to creating private floating ip's, this is what we did for our
> on-premise openstack, because we wanted to have floating ip's that allowed
> other computers outside the openstack network to be able connect to
> individual servers.
>
> I don't know what sort of privileges you need to run this command, so it
> might not work for you.
>
> openstack network create --external --provider-physical-network flat \
>   --provider-network-type flat public
> openstack subnet create --network public \
>   --allocation-pool start=10.2.100.1,end=10.2.100.254 \
>   --dns-nameserver 10.2.0.1 --gateway 10.2.0.1 \
>   --subnet-range 10.2.0.0/16 public
>
> Instead of public, you could call it something else.
>
> So the end result of that command was that when openshift ansible asked
> for a floating ip, we'd get an IP address in the range of 10.2.100.1-254.
>
> Hope it helps.
>
> Thanks,
>
> Joel
>
> On Fri, Jan 5, 2018 at 8:18 AM Tim Dudgeon  wrote:
>
>> Joel,
>> Thanks for that.
>> I had seen this but didn't really understand what it meant.
>> Having read through it again I still don't!
>> I'll give it a try tomorrow and see what happens.
>>
>> As for the warning about scaling up/down then yes, that is a big concern.
>> That's the whole point of getting automation in place.
>> So if anyone can shed any light on this then please do so!
>>
>> Could you explain more about 'an alternative is to create a floating ip
>> range that uses private non-routable ip addresses'?
>>
>>
>> On 04/01/18 20:17, Joel Pearson wrote:
>>
>> I had exactly the same concern and I discovered that inside the heat
>> template there is a bastion mode, which once enabled it doesn’t use
>> floating ip’s any more.
>>
>> Have a look at
>> https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/advanced-configuration.md
>>
>> I think you want openstack_use_bastion: True but I am yet to test it out
>> so I’d recommend checking the heat template to see if it does what I think
>> it does.
>>
>> At the bottom of that advanced page it mentions that in bastion mode
>> scale up doesn’t work for some reason, so I don’t know if that matters for
>> you.
>>
>> Otherwise an alternative is to create a floating ip range that uses
>> private non-routable ip addresses. That’s what we’re using in our
>> on-premise OpenStack. But only because we hadn’t discovered the bastion
>> mode at the time.
>>
>> Hope that helps.
>> On Fri, 5 Jan 2018 at 4:10 am, Tim Dudgeon  wrote:
>>
>>> I hope this is the right place to ask questions about the
>>> openshift/openshift-ansible-contrib GitHub repo, and specifically the
>>> playbooks for installing OpenShift on OpenStack:
>>>
>>> https://github.com/openshift/openshift-ansible-contrib/tree/master/playbooks/provisioning/openstack
>>> If not then please redirect me.
>>>
>>> By following the instructions in that link I successfully ran a basic
>>> deployment that involved provisioning the OpenStack servers and then
>>> deploying OpenShift using the byo config.yaml playbook. But in doing so
>>> it's immediately obvious that this approach is not really viable, as
>>> public IP addresses are assigned to every node.

Re: Deployment to OpenStack

2018-01-05 Thread Tim Dudgeon
OK, so I tried setting `openstack_use_bastion: True`. Servers were 
provisioned OK. Public IP addresses were only applied to the infra and 
dns nodes (not master).


But the inventory/hosts file that gets auto-generated by this process 
still contains the "public" hostnames that can't be reached, even if put 
into DNS. Also, I expected to see a bastion node, but none was created.


I find the docs for this a bit baffling. Is there anyone on this list 
who was involved with creating this who can help get this straight?


On 04/01/18 23:13, Joel Pearson wrote:

Hi Tim,

Yes, I only discovered what the bastion setting did by looking at the
heat template, as I was going to try and remove the need for the 
bastion by myself.


I found this line in the heat template:
https://github.com/openshift/openshift-ansible-contrib/blob/master/roles/openstack-stack/templates/heat_stack.yaml.j2#L75

I don't know what provider_network does. But you might want to grep 
around the repo chasing down those settings to see if it suits your 
purposes. It seems a bit undocumented.


In regards to creating private floating ip's, this is what we did for 
our on-premise openstack, because we wanted to have floating ip's that 
allowed other computers outside the openstack network to be able 
connect to individual servers.


I don't know what sort of privileges you need to run this command, so 
it might not work for you.


openstack network create --external --provider-physical-network flat \
  --provider-network-type flat public
openstack subnet create --network public \
  --allocation-pool start=10.2.100.1,end=10.2.100.254 \
  --dns-nameserver 10.2.0.1 --gateway 10.2.0.1 \
  --subnet-range 10.2.0.0/16 public


Instead of public, you could call it something else.

So the end result of that command was that when openshift ansible 
asked for a floating ip, we'd get an IP address in the range of 
10.2.100.1-254.
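One way to verify the pool behaves as expected after creating it; a sketch
that assumes the network was named 'public' as in the commands above:

# allocate a test floating IP from the new pool; it should come back in the 10.2.100.1-254 range
openstack floating ip create public
openstack floating ip list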


Hope it helps.

Thanks,

Joel

On Fri, Jan 5, 2018 at 8:18 AM Tim Dudgeon wrote:


Joel,

Thanks for that.
I had seen this but didn't really understand what it meant.
Having read through it again I still don't!
I'll give it a try tomorrow and see what happens.

As for the warning about scaling up/down then yes, that is a big
concern. That's the whole point of getting automation in place.
So if anyone can shed any light on this then please do so!

Could you explain more about 'an alternative is to create a
floating ip range that uses private non-routable ip addresses'?


On 04/01/18 20:17, Joel Pearson wrote:

I had exactly the same concern and I discovered that inside the
heat template there is a bastion mode, which once enabled it
doesn’t use floating ip’s any more.

Have a look at

https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/advanced-configuration.md

I think you want openstack_use_bastion: True but I am yet to test
it out so I’d recommend checking the heat template to see if it
does what I think it does.

At the bottom of that advanced page it mentions that in bastion
mode scale up doesn’t work for some reason, so I don’t know if
that matters for you.

Otherwise an alternative is to create a floating ip range that
uses private non-routable ip addresses. That’s what we’re using
in our on-premise OpenStack. But only because we hadn’t
discovered the bastion mode at the time.

Hope that helps.
On Fri, 5 Jan 2018 at 4:10 am, Tim Dudgeon wrote:

I hope this is the right place to ask questions about the
openshift/openshift-ansible-contrib GitHub repo, and specifically the
playbooks for installing OpenShift on OpenStack:

https://github.com/openshift/openshift-ansible-contrib/tree/master/playbooks/provisioning/openstack
If not then please redirect me.

By following the instructions in that link I successfully ran a basic
deployment that involved provisioning the OpenStack servers and then
deploying OpenShift using the byo config.yaml playbook. But in doing so
it's immediately obvious that this approach is not really viable, as
public IP addresses are assigned to every node. It should only be
necessary to have public IP addresses for the master and the
infrastructure node hosting the router.

My expectation is that the best way to handle this would be to:

1. provision the basic openstack networking environment plus a bastion
node from outside the openstack environment
2. from that bastion node provision the nodes that will form the
OpenShift cluster and deploy OpenShift to those.

Are there any examples along those lines?