Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-05-05 Thread Jan Provaznik

On 04/28/2014 10:05 PM, Jay Dobies wrote:

We may want to consider making use of Heat outputs for this.


This was my first thought as well. stack-show returns a JSON document
that would be easy enough to parse through instead of having it in two
places.


Rather than assuming hard coding, create an output on the overcloud
template that is something like 'keystone_endpoint'. It would look
something like this:

Outputs:
  keystone_endpoint:
    Fn::Join:
      - ''
      - - "http://"
        - {Fn::GetAtt: [ haproxy_node, first_ip ]} # fn select and yada
        - ":"
        - {Ref: KeystoneEndpointPort} # that's a parameter
        - "/v2.0"


These are then made available via heatclient as stack.outputs in
'stack-show'.

That way as we evolve new stacks that have different ways of controlling
the endpoints (LBaaS anybody?) we won't have to change os-cloud-config
for each one.



The output endpoint list would be quite long; it would have to contain 
the full list of all possible services (even if a service is not 
included in an image), plus an SSL URI for each port.


It might be better to get the haproxy ports from template params (which 
should be available as stack.params) and define only the virtual IP in 
stack.outputs, then build the endpoint URI in os-cloud-config. I'm not 
sure if we would have to change os-cloud-config for LBaaS or not. My 
first thought was that the VIP and port are the only bits which should 
vary, so the resulting URI should be the same in both cases.
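
For illustration, a rough sketch of what that could look like in
os-cloud-config with python-heatclient. The output/parameter names
('controller_virtual_ip', 'GlanceProxyPort') and the endpoint/token
values are made-up placeholders, not anything defined in the current
templates:

from heatclient.client import Client

HEAT_ENDPOINT = 'http://192.0.2.1:8004/v1/TENANT_ID'  # placeholder
AUTH_TOKEN = 'ADMIN_TOKEN'                            # placeholder

heat = Client('1', endpoint=HEAT_ENDPOINT, token=AUTH_TOKEN)
stack = heat.stacks.get('overcloud')

# the single output: the HAProxy virtual IP
vip = next(o['output_value'] for o in stack.outputs
           if o['output_key'] == 'controller_virtual_ip')

# the ports come from the template parameters (stack.parameters)
glance_port = stack.parameters['GlanceProxyPort']
glance_endpoint = 'http://%s:%s/' % (vip, glance_port)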




2) do the Keystone setup from inside the Overcloud:
Extend the keystone element: the steps done in the init-keystone script
would be done in keystone's os-refresh-config script. This script would
have to be called on only one of the nodes in the cluster and only once
(though we already do a similar check for other services -
mysql/rabbitmq, so I don't think this is a problem). Then this script
can easily get the list of haproxy ports from the heat metadata. This
looks like the more attractive option to me - it eliminates an extra
post-create config step.


Things that can be done from outside the cloud, should be done from
outside the cloud. This helps encourage the separation of concerns and
also makes it simpler to reason about which code is driving the cloud
versus code that is creating the cloud.



Related to the Keystone setup is also the plan around keys/cert setup
described here:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html

But I think this plan would remain the same no matter which of the options
above is used.


What do you think?

Jan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-05-05 Thread Clint Byrum
Excerpts from Jan Provaznik's message of 2014-05-05 01:10:56 -0700:
 On 04/28/2014 10:05 PM, Jay Dobies wrote:
  We may want to consider making use of Heat outputs for this.
 
  This was my first thought as well. stack-show returns a JSON document
  that would be easy enough to parse through instead of having it in two
  places.
 
  Rather than assuming hard coding, create an output on the overcloud
  template that is something like 'keystone_endpoint'. It would look
  something like this:
 
 Outputs:
   keystone_endpoint:
     Fn::Join:
       - ''
       - - "http://"
         - {Fn::GetAtt: [ haproxy_node, first_ip ]} # fn select and yada
         - ":"
         - {Ref: KeystoneEndpointPort} # that's a parameter
         - "/v2.0"
 
 
  These are then made available via heatclient as stack.outputs in
  'stack-show'.
 
  That way as we evolve new stacks that have different ways of controlling
  the endpoints (LBaaS anybody?) we won't have to change os-cloud-config
  for each one.
 
 
 The output endpoint list would be quite long; it would have to contain
 the full list of all possible services (even if a service is not
 included in an image), plus an SSL URI for each port.
 
 It might be better to get the haproxy ports from template params (which
 should be available as stack.params) and define only the virtual IP in
 stack.outputs, then build the endpoint URI in os-cloud-config. I'm not
 sure if we would have to change os-cloud-config for LBaaS or not. My
 first thought was that the VIP and port are the only bits which should
 vary, so the resulting URI should be the same in both cases.
 

+1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-05-05 Thread Robert Collins
On 6 May 2014 06:13, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Jan Provaznik's message of 2014-05-05 01:10:56 -0700:
 On 04/28/2014 10:05 PM, Jay Dobies wrote:
  We may want to consider making use of Heat outputs for this.
..
 The output endpoint list would be quite long; it would have to contain
 the full list of all possible services (even if a service is not
 included in an image), plus an SSL URI for each port.

 It might be better to get the haproxy ports from template params (which
 should be available as stack.params) and define only the virtual IP in
 stack.outputs, then build the endpoint URI in os-cloud-config. I'm not
 sure if we would have to change os-cloud-config for LBaaS or not. My
 first thought was that the VIP and port are the only bits which should
 vary, so the resulting URI should be the same in both cases.


 +1

I think outputs are good here, but indeed we should not be exposing
the control plane innards: that's what the virtual IP is for - let's
export that out of Heat, along with the port #, and that's it.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-04-28 Thread Jay Dobies

We may want to consider making use of Heat outputs for this.


This was my first thought as well. stack-show returns a JSON document 
that would be easy enough to parse through instead of having it in two 
places.



Rather than assuming hard coding, create an output on the overcloud
template that is something like 'keystone_endpoint'. It would look
something like this:

Outputs:
  keystone_endpoint:
    Fn::Join:
      - ''
      - - "http://"
        - {Fn::GetAtt: [ haproxy_node, first_ip ]} # fn select and yada
        - ":"
        - {Ref: KeystoneEndpointPort} # that's a parameter
        - "/v2.0"


These are then made available via heatclient as stack.outputs in
'stack-show'.

That way as we evolve new stacks that have different ways of controlling
the endpoints (LBaaS anybody?) we won't have to change os-cloud-config
for each one.



2) do the Keystone setup from inside the Overcloud:
Extend the keystone element: the steps done in the init-keystone script
would be done in keystone's os-refresh-config script. This script would
have to be called on only one of the nodes in the cluster and only once
(though we already do a similar check for other services -
mysql/rabbitmq, so I don't think this is a problem). Then this script
can easily get the list of haproxy ports from the heat metadata. This
looks like the more attractive option to me - it eliminates an extra
post-create config step.


Things that can be done from outside the cloud, should be done from
outside the cloud. This helps encourage the separation of concerns and
also makes it simpler to reason about which code is driving the cloud
versus code that is creating the cloud.



Related to the Keystone setup is also the plan around keys/cert setup
described here:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html
But I think this plan would remain the same no matter which of the options
above is used.


What do you think?

Jan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-04-25 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Hello,

I am somewhat hesitant to bring up the stunnel topic in this thread, but it 
needs to be considered: an endpoint naming solution and a certificate 
creation/distribution solution need to cover both the haproxy and stunnel 
requirements, because there are many similarities. I am currently looking at a 
DevTest deployment that includes stunnel on one node and am trying to figure 
out how to modify all of the configuration files in OpenStack that reference 
the Keystone IP address and the hard-coded ports 5000 and 35357 to make use 
of the SSL-hardened stunnel proxy.

Regards,

Mark


-Original Message-
From: Jan Provazník [mailto:jprov...@redhat.com] 
Sent: Friday, April 25, 2014 6:31 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

Hello,
one of the missing bits for running multiple control nodes in the Overcloud is 
setting up endpoints in Keystone to point to HAProxy, which will listen on a 
virtual IP and non-standard ports.

HAProxy ports are defined in the heat template, e.g.:

  haproxy:
    nodes:
      - name: control1
        ip: 192.0.2.5
      - name: control2
        ip: 192.0.2.6
    services:
      - name: glance_api_cluster
        proxy_ip: 192.0.2.254 (=virtual ip)
        proxy_port: 9293
        port: 9292


means that Glance's Keystone endpoint should be set to:
http://192.0.2.254:9293/
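
For illustration, a minimal python-keystoneclient (v2.0 API) sketch of that
registration; the admin token/endpoint values and the 'regionOne' region name
are placeholders, and service lookup/error handling is omitted:

from keystoneclient.v2_0 import client as ksclient

ADMIN_TOKEN = 'ADMIN_TOKEN'  # placeholder bootstrap token
keystone = ksclient.Client(token=ADMIN_TOKEN,
                           endpoint='http://192.0.2.254:35357/v2.0')

# create the service record, then point its endpoint at the HAProxy VIP/port
glance = keystone.services.create(name='glance', service_type='image',
                                  description='Glance Image Service')
keystone.endpoints.create(region='regionOne',
                          service_id=glance.id,
                          publicurl='http://192.0.2.254:9293/',
                          adminurl='http://192.0.2.254:9293/',
                          internalurl='http://192.0.2.254:9293/')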

ATM the Keystone setup is done from devtest_overcloud.sh when the Overcloud stack 
creation successfully completes. I wonder which of the following options for 
setting up endpoints in HA mode is preferred by the community:
1) leave it in the post-stack-create phase and extend the init-keystone script. 
But then how do we deal with the list of non-standard ports (proxy_port in the 
example above)?
   1a) consider these non-standard ports as static and just hardcode them 
(similar to what we do with SSL ports already). But the ports would then be 
hardcoded in two places (the heat template and this script). If a user changes 
them in the heat template, he has to change them in the init-keystone script too.
   1b) the init-keystone script would fetch the list of ports from the heat stack 
description (if that's possible?)

The long-term plan seems to be to rewrite the Keystone setup into os-cloud-config:
https://blueprints.launchpad.net/tripleo/+spec/tripleo-keystone-cloud-config
So an alternative to extending the init-keystone script would be to implement it 
as part of that blueprint; either way, the concept of keeping the Keystone setup 
in the post-stack-create phase remains.


2) do the Keystone setup from inside the Overcloud:
Extend the keystone element: the steps done in the init-keystone script would be 
done in keystone's os-refresh-config script. This script would have to be called 
on only one of the nodes in the cluster and only once (though we already do a 
similar check for other services - mysql/rabbitmq, so I don't think this is a 
problem). Then this script can easily get the list of haproxy ports from the heat 
metadata. This looks like the more attractive option to me - it eliminates an 
extra post-create config step.
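
For illustration, a very rough sketch of what such an in-instance script could
do, assuming the collected Heat metadata has been written out as JSON on the
node and contains the haproxy section shown above; the file path is left to the
caller, since where os-collect-config stores it depends on the setup:

import json
import sys

# the metadata file path is passed in by the caller (e.g. an
# os-refresh-config script); no path is hardcoded here on purpose
with open(sys.argv[1]) as f:
    metadata = json.load(f)

for service in metadata.get('haproxy', {}).get('services', []):
    endpoint = 'http://%s:%s/' % (service['proxy_ip'], service['proxy_port'])
    print('%s %s' % (service['name'], endpoint))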

Related to the Keystone setup is also the plan around keys/cert setup described 
here:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html
But I think this plan would remain the same no matter which of the options above 
is used.


What do you think?

Jan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-04-25 Thread Clint Byrum
Excerpts from Jan Provazník's message of 2014-04-25 06:30:31 -0700:
 Hello,
 one of the missing bits for running multiple control nodes in the Overcloud is 
 setting up endpoints in Keystone to point to HAProxy, which will listen 
 on a virtual IP and non-standard ports.
 
 HAProxy ports are defined in the heat template, e.g.:
 
  haproxy:
    nodes:
      - name: control1
        ip: 192.0.2.5
      - name: control2
        ip: 192.0.2.6
    services:
      - name: glance_api_cluster
        proxy_ip: 192.0.2.254 (=virtual ip)
        proxy_port: 9293
        port: 9292
 
 
 means that Glance's Keystone endpoint should be set to:
 http://192.0.2.254:9293/
 
 ATM the Keystone setup is done from devtest_overcloud.sh when the Overcloud 
 stack creation successfully completes. I wonder which of the following 
 options for setting up endpoints in HA mode is preferred by the community:
 1) leave it in the post-stack-create phase and extend the init-keystone script. 
 But then how do we deal with the list of non-standard ports (proxy_port in the 
 example above)?
    1a) consider these non-standard ports as static and just hardcode 
 them (similar to what we do with SSL ports already). But the ports would then be 
 hardcoded in two places (the heat template and this script). If a user changes 
 them in the heat template, he has to change them in the init-keystone script too.
    1b) the init-keystone script would fetch the list of ports from the heat stack 
 description (if that's possible?)
 
 The long-term plan seems to be to rewrite the Keystone setup into os-cloud-config:
 https://blueprints.launchpad.net/tripleo/+spec/tripleo-keystone-cloud-config
 So an alternative to extending the init-keystone script would be to implement it 
 as part of that blueprint; either way, the concept of keeping the Keystone setup 
 in the post-stack-create phase remains.
 

We may want to consider making use of Heat outputs for this.

Rather than assuming hard coding, create an output on the overcloud
template that is something like 'keystone_endpoint'. It would look
something like this:

Outputs:
  keystone_endpoint:
    Fn::Join:
      - ''
      - - "http://"
        - {Fn::GetAtt: [ haproxy_node, first_ip ]} # fn select and yada
        - ":"
        - {Ref: KeystoneEndpointPort} # that's a parameter
        - "/v2.0"


These are then made available via heatclient as stack.outputs in
'stack-show'.

That way as we evolve new stacks that have different ways of controlling
the endpoints (LBaaS anybody?) we won't have to change os-cloud-config
for each one.
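
For illustration, a minimal python-heatclient sketch of how os-cloud-config
could consume such an output without knowing how the template computes it; the
HEAT_ENDPOINT and AUTH_TOKEN values are placeholders:

from heatclient.client import Client

HEAT_ENDPOINT = 'http://192.0.2.1:8004/v1/TENANT_ID'  # placeholder
AUTH_TOKEN = 'ADMIN_TOKEN'                            # placeholder

heat = Client('1', endpoint=HEAT_ENDPOINT, token=AUTH_TOKEN)
outputs = heat.stacks.get('overcloud').outputs  # same data as 'heat stack-show'
keystone_endpoint = next(o['output_value'] for o in outputs
                         if o['output_key'] == 'keystone_endpoint')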

 
 2) do the Keystone setup from inside the Overcloud:
 Extend the keystone element: the steps done in the init-keystone script would be 
 done in keystone's os-refresh-config script. This script would have to be called 
 on only one of the nodes in the cluster and only once (though we already do a 
 similar check for other services - mysql/rabbitmq, so I don't 
 think this is a problem). Then this script can easily get the list of 
 haproxy ports from the heat metadata. This looks like the more attractive option 
 to me - it eliminates an extra post-create config step.

Things that can be done from outside the cloud, should be done from
outside the cloud. This helps encourage the separation of concerns and
also makes it simpler to reason about which code is driving the cloud
versus code that is creating the cloud.

 
 Related to the Keystone setup is also the plan around keys/cert setup 
 described here:
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html
 But I think this plan would remain the same no matter which of the options 
 above is used.
 
 
 What do you think?
 
 Jan
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev