Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-26 Thread Robert Collins
On 26 May 2014 17:20, Gregory Haynes  wrote:

> One other, separate issue with letting external SSL pass through to your
> backends has to do with security: Your app servers (or in our case
> control nodes) generally have a larger attack surface and are more
> distributed than your load balancers (or an SSL endpoint placed in front
> of them). Additionally, compromise of an external-facing SSL cert is far
> worse than an internal-only SSL cert which could be made backend-server
> specific.
>
> I agree that re-encryption is not useful with our current setup, though:
> It would occur on a control node which removes the security benefits (I
> still wanted to make sure this point is made :)).

We should capture that nuance in the spec, and in the (related)
multiple-hypervisors-for-deployments spec where I pointed out similar
security concerns earlier today.

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-25 Thread Gregory Haynes
Excerpts from Robert Collins's message of 2014-05-25 23:12:26 +:
> On 23 May 2014 04:57, Gregory Haynes  wrote:
> >>
> >> Eventually we may need to scale traffic beyond one HAProxy, at which
> >> point we'll need to bring something altogether more sophisticated in -
> >> let's design that when we need it.
> >> Sooner than that we're likely going to need to scale load beyond one
> >> control plane server at which point the HAProxy VIP either needs to be
> >> distributed (so active-active load receiving) or we need to go
> >> user -> haproxy (VIP) -> SSL endpoint (on any control plane node) ->
> >> localhost bound service.
> >
> > Putting SSL termination behind HAProxy seems odd. Typically your load
> > balancer wants to be able to grok the traffic sent through it, which is
> 
> Not really :). There is a sophistication curve, yes, but generally
> load balancers don't need to understand the traffic *except* when the
> application servers they are sending to have locality-of-reference
> performance benefits from clustered requests (e.g. all requests from
> user A on server Z will hit a local cache of user metadata as long as
> they are within 5 seconds). Other than that, load balancers care about
> modelling server load to decide where to send traffic.
> 
> SSL is a particularly interesting thing because you know that all
> requests from that connection are from one user - it's end to end,
> whereas HTTP can be multiplexed by intermediaries. This means that
> while you don't know that 'all user A's requests' are on the one
> socket, you do know that all requests on that socket are from user A.
> 
> So for our stock - and thus probably most common - API clients we have
> the following characteristics:
>  - single threaded clients
>  - one socket (rather than N)
> 
> Combine these with SSL and clearly whatever efficiency we *can* get
> from locality of reference, we will get just by taking SSL and
> backending it to one backend. That backend might itself be haproxy
> managing local load across local processes but there is no reason to
> expose the protocol earlier.

This is a good point and I agree that performance-wise there is not an
issue here.

> 
> > not possible in this setup. For an environment where sending unencrypted
> > traffic across the internal network is not allowed I agree with Mark's
> > suggestion of re-encrypting for internal traffic, but IMO it should
> > still pass through the load balancer unencrypted. Basically:
> > User -> External SSL Terminate -> LB -> SSL encrypt -> control plane
> 
> I think this is wasted CPU cycles given the characteristics of the
> APIs we're balancing. We have four protocols that need VIP usage AIUI:
> 

One other, separate issue with letting external SSL pass through to your
backends has to do with security: Your app servers (or in our case
control nodes) generally have a larger attack surface and are more
distributed than your load balancers (or an SSL endpoint placed in front
of them). Additionally, compromise of an external-facing SSL cert is far
worse than an internal-only SSL cert which could be made backend-server
specific.

I agree that re-encryption is not useful with our current setup, though:
It would occur on a control node which removes the security benefits (I
still wanted to make sure this point is made :)).

TL;DR - +1 on the 'User -> haproxy -> ssl endpoint -> app' design.

Thanks,
Greg

-- 
Gregory Haynes
g...@greghaynes.net



Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-25 Thread Robert Collins
On 23 May 2014 04:57, Gregory Haynes  wrote:
>>
>> Eventually we may need to scale traffic beyond one HAProxy, at which
>> point we'll need to bring something altogether more sophisticated in -
> >> let's design that when we need it.
>> Sooner than that we're likely going to need to scale load beyond one
>> control plane server at which point the HAProxy VIP either needs to be
>> distributed (so active-active load receiving) or we need to go
>> user -> haproxy (VIP) -> SSL endpoint (on any control plane node) ->
>> localhost bound service.
>
> Putting SSL termination behind HAProxy seems odd. Typically your load
> balancer wants to be able to grok the traffic sent through it, which is

Not really :). There is a sophistication curve, yes, but generally
load balancers don't need to understand the traffic *except* when the
application servers they are sending to have locality-of-reference
performance benefits from clustered requests (e.g. all requests from
user A on server Z will hit a local cache of user metadata as long as
they are within 5 seconds). Other than that, load balancers care about
modelling server load to decide where to send traffic.

SSL is a particularly interesting thing because you know that all
requests from that connection are from one user - it's end to end,
whereas HTTP can be multiplexed by intermediaries. This means that
while you don't know that 'all user A's requests' are on the one
socket, you do know that all requests on that socket are from user A.

So for our stock - and thus probably most common - API clients we have
the following characteristics:
 - single threaded clients
 - one socket (rather than N)

Combine these with SSL and clearly whatever efficiency we *can* get
from locality of reference, we will get just by taking SSL and
backending it to one backend. That backend might itself be haproxy
managing local load across local processes but there is no reason to
expose the protocol earlier.

> not possible in this setup. For an environment where sending unencrypted
> traffic across the internal network is not allowed I agree with Mark's
> suggestion of re-encrypting for internal traffic, but IMO it should
> still pass through the load balancer unencrypted. Basically:
> User -> External SSL Terminate -> LB -> SSL encrypt -> control plane

I think this is wasted CPU cycles given the characteristics of the
APIs we're balancing. We have four protocols that need VIP usage AIUI:

HTTP API
HTTP Data (Swift only atm)
AMQP
MySQL

For HTTP API see my analysis above. For HTTP Data unwrapping and
re-wrapping is expensive and must be balanced against expected
benefits: what request characteristic would we be
pinning/balancing/biasing on for Swift?

For AMQP and MySQL we'll be in tunnel mode anyway, so there is no
alternative but SSL to the backend machine and unwrap there.
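
For illustration, a minimal sketch of what tunnel mode could look like in a
haproxy config (the addresses, ports and server names here are invented,
not from the thread):

```
# haproxy.cfg fragment: TCP ("tunnel") mode for MySQL. haproxy never
# inspects the stream; any SSL is negotiated end to end with the backend.
listen mysql
    mode tcp
    bind 192.0.2.22:3306
    balance leastconn
    server controller0 192.0.2.20:3306 check
    server controller1 192.0.2.21:3306 check
```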

> This is a bit overkill given our current state, but I think for now it's
> important we terminate external SSL earlier on: See ML thread linked
> above for reasoning.

If I read this correctly, you're arguing yourself back to the "User ->
haproxy (VIP) -> SSL endpoint (on any control plane node) -> localhost
bound service" I mentioned?

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-22 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
That depends on your security requirements.  If HAProxy is proxying requests to 
multiple servers and you terminate the SSL at HAProxy, then you will be sending 
the request unencrypted from one server to another. I am not at all opposed to 
adding the capabilities to configure HAProxy to terminate and even re-encrypt 
requests for those who have a different set of security requirements. Looks 
like I will need both the stunnel server and client.

Mark

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: Wednesday, May 21, 2014 3:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Haproxy configuration options

On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
 wrote:
> We are considering the following connection chain:
>
> -> HAProxy      ->  stunnel        ->  OS services bound to 127.0.0.1
>    Virtual IP       server IP          localhost 127.0.0.1
>    secure           SSL terminate      unsecure

Interestingly, and separately, HAProxy can do SSL termination now, so we might 
want to consider just using HAProxy for that.

> In this chain none of the ports need to be changed. One of the major issues I 
> have come across is the hard coding of the Keystone ports in the OpenStack 
> service's configuration files. With the above connection scheme none of the 
> ports need to change.

But we do need to have HAProxy not wildcard bind, as Greg points out, and to 
make OS services bind to 127.0.0.1 as Jan pointed out.

I suspect we need to put this through the specs process (which ops teams are 
starting to watch) to ensure we get enough input.

I'd love to see:
 - SSL by default
 - A setup we can document in the ops guide / HA openstack install guide - e.g. 
we don't need to be doing it a third different way (or we can update the 
existing docs if what we converge on is better).
 - Only SSL enabled endpoints accessible from outside the machine (so python 
processes bound to localhost as a security feature).

Eventually we may need to scale traffic beyond one HAProxy, at which point 
we'll need to bring something altogether more sophisticated in - let's design 
that when we need it.
Sooner than that we're likely going to need to scale load beyond one control 
plane server at which point the HAProxy VIP either needs to be distributed (so 
active-active load receiving) or we need to go user -> haproxy (VIP) -> SSL 
endpoint (on any control plane node) -> localhost bound service.

HTH,
Rob



Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-22 Thread Gregory Haynes
On Thu, May 22, 2014, at 08:51 AM, Miller, Mark M (EB SW Cloud - R&D -
Corvallis) wrote:
> 
> HAProxy SSL termination is not a viable option when HAProxy is used to
> proxy traffic between servers. If HAProxy terminates the SSL it will then
> proxy the traffic unencrypted to any other server across a network.
> However, since SSL termination and SSL re-encryption are now features of
> the current HAProxy development releases, I would vote to add these
> features in addition to stunnel.

Relevant ML thread from a few months ago:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031229.html

> 
> Mark 
> 
> From: Dmitriy Shulyak [dshul...@mirantis.com]
> Sent: Thursday, May 22, 2014 8:35 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [TripleO] Haproxy configuration options
> 
> Created spec https://review.openstack.org/#/c/94907/
> 
> I think it is WIP still, but would be nice to hear some comments/opinions
> 
> 
> On Thu, May 22, 2014 at 1:59 AM, Robert Collins
> mailto:robe...@robertcollins.net>> wrote:
> On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
> mailto:mark.m.mil...@hp.com>> wrote:
> > We are considering the following connection chain:
> >
> > -> HAProxy      ->  stunnel        ->  OS services bound to 127.0.0.1
> >    Virtual IP       server IP          localhost 127.0.0.1
> >    secure           SSL terminate      unsecure
> 
> Interestingly, and separately, HAProxy can do SSL termination now, so
> we might want to consider just using HAProxy for that.

This would be a nice next step, but in the long term I can see users
wanting SSL termination and load balancing separated due to:
A) Different scaling requirements
B) Access control to machines with SSL certs

> 
> > In this chain none of the ports need to be changed. One of the major issues I 
> > have come across is the hard coding of the Keystone ports in the OpenStack 
> > service's configuration files. With the above connection scheme none of the 
> > ports need to change.
> 
> But we do need to have HAProxy not wildcard bind, as Greg points out,
> and to make OS services bind to 127.0.0.1 as Jan pointed out.
> 
> I suspect we need to put this through the specs process (which ops
> teams are starting to watch) to ensure we get enough input.
> 
> I'd love to see:
>  - SSL by default
>  - A setup we can document in the ops guide / HA openstack install
> guide - e.g. we don't need to be doing it a third different way (or we
> can update the existing docs if what we converge on is better).
>  - Only SSL enabled endpoints accessible from outside the machine (so
> python processes bound to localhost as a security feature).

+1

> 
> Eventually we may need to scale traffic beyond one HAProxy, at which
> point we'll need to bring something altogether more sophisticated in -
> let's design that when we need it.
> Sooner than that we're likely going to need to scale load beyond one
> control plane server at which point the HAProxy VIP either needs to be
> distributed (so active-active load receiving) or we need to go
> user -> haproxy (VIP) -> SSL endpoint (on any control plane node) ->
> localhost bound service.

Putting SSL termination behind HAProxy seems odd. Typically your load
balancer wants to be able to grok the traffic sent through it, which is
not possible in this setup. For an environment where sending unencrypted
traffic across the internal network is not allowed I agree with Mark's
suggestion of re-encrypting for internal traffic, but IMO it should
still pass through the load balancer unencrypted. Basically:
User -> External SSL Terminate -> LB -> SSL encrypt -> control plane

This is a bit overkill given our current state, but I think for now it's
important we terminate external SSL earlier on: See ML thread linked
above for reasoning.

> 
> HTH,
> Rob

Thanks,
Greg



Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-22 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)

HAProxy SSL termination is not a viable option when HAProxy is used to proxy 
traffic between servers. If HAProxy terminates the SSL it will then proxy the 
traffic unencrypted to any other server across a network. However, since SSL 
termination and SSL re-encryption are now features of the current HAProxy 
development releases, I would vote to add these features in addition to stunnel.
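
As a rough sketch of what terminate-and-re-encrypt could look like with the
HAProxy 1.5-dev `ssl` keywords (addresses, ports and cert paths below are
illustrative, not from the thread):

```
# haproxy.cfg fragment (HAProxy 1.5-dev): terminate the external SSL
# on the frontend, then re-encrypt toward the backends so traffic
# between servers never crosses the network in the clear.
frontend public
    bind 192.0.2.22:443 ssl crt /etc/haproxy/public.pem
    default_backend apis

backend apis
    # the 'ssl' server keyword makes haproxy open SSL connections
    # to the backends
    server controller0 192.0.2.20:13000 ssl
    server controller1 192.0.2.21:13000 ssl
```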

Mark 

From: Dmitriy Shulyak [dshul...@mirantis.com]
Sent: Thursday, May 22, 2014 8:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Haproxy configuration options

Created spec https://review.openstack.org/#/c/94907/

I think it is WIP still, but would be nice to hear some comments/opinions


On Thu, May 22, 2014 at 1:59 AM, Robert Collins 
mailto:robe...@robertcollins.net>> wrote:
On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
mailto:mark.m.mil...@hp.com>> wrote:
> We are considering the following connection chain:
>
> -> HAProxy      ->  stunnel        ->  OS services bound to 127.0.0.1
>    Virtual IP       server IP          localhost 127.0.0.1
>    secure           SSL terminate      unsecure

Interestingly, and separately, HAProxy can do SSL termination now, so
we might want to consider just using HAProxy for that.

> In this chain none of the ports need to be changed. One of the major issues I 
> have come across is the hard coding of the Keystone ports in the OpenStack 
> service's configuration files. With the above connection scheme none of the 
> ports need to change.

But we do need to have HAProxy not wildcard bind, as Greg points out,
and to make OS services bind to 127.0.0.1 as Jan pointed out.

I suspect we need to put this through the specs process (which ops
teams are starting to watch) to ensure we get enough input.

I'd love to see:
 - SSL by default
 - A setup we can document in the ops guide / HA openstack install
guide - e.g. we don't need to be doing it a third different way (or we
can update the existing docs if what we converge on is better).
 - Only SSL enabled endpoints accessible from outside the machine (so
python processes bound to localhost as a security feature).

Eventually we may need to scale traffic beyond one HAProxy, at which
point we'll need to bring something altogether more sophisticated in -
let's design that when we need it.
Sooner than that we're likely going to need to scale load beyond one
control plane server at which point the HAProxy VIP either needs to be
distributed (so active-active load receiving) or we need to go
user -> haproxy (VIP) -> SSL endpoint (on any control plane node) ->
localhost bound service.

HTH,
Rob



Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-22 Thread Dmitriy Shulyak
Created spec https://review.openstack.org/#/c/94907/

I think it is WIP still, but would be nice to hear some comments/opinions


On Thu, May 22, 2014 at 1:59 AM, Robert Collins
wrote:

> On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
>  wrote:
> > We are considering the following connection chain:
> >
> > -> HAProxy      ->  stunnel        ->  OS services bound to 127.0.0.1
> >    Virtual IP       server IP          localhost 127.0.0.1
> >    secure           SSL terminate      unsecure
>
> Interestingly, and separately, HAProxy can do SSL termination now, so
> we might want to consider just using HAProxy for that.
>
> > In this chain none of the ports need to be changed. One of the major issues
> I have come across is the hard coding of the Keystone ports in the
> OpenStack service's configuration files. With the above connection scheme
> none of the ports need to change.
>
> But we do need to have HAProxy not wildcard bind, as Greg points out,
> and to make OS services bind to 127.0.0.1 as Jan pointed out.
>
> I suspect we need to put this through the specs process (which ops
> teams are starting to watch) to ensure we get enough input.
>
> I'd love to see:
>  - SSL by default
>  - A setup we can document in the ops guide / HA openstack install
> guide - e.g. we don't need to be doing it a third different way (or we
> can update the existing docs if what we converge on is better).
>  - Only SSL enabled endpoints accessible from outside the machine (so
> python processes bound to localhost as a security feature).
>
> Eventually we may need to scale traffic beyond one HAProxy, at which
> point we'll need to bring something altogether more sophisticated in -
> let's design that when we need it.
> Sooner than that we're likely going to need to scale load beyond one
> control plane server at which point the HAProxy VIP either needs to be
> distributed (so active-active load receiving) or we need to go
> user -> haproxy (VIP) -> SSL endpoint (on any control plane node) ->
> localhost bound service.
>
> HTH,
> Rob
>


Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-21 Thread Robert Collins
On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
 wrote:
> We are considering the following connection chain:
>
> -> HAProxy      ->  stunnel        ->  OS services bound to 127.0.0.1
>    Virtual IP       server IP          localhost 127.0.0.1
>    secure           SSL terminate      unsecure

Interestingly, and separately, HAProxy can do SSL termination now, so
we might want to consider just using HAProxy for that.
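
For reference, a minimal sketch of native termination as introduced in the
HAProxy 1.5-dev releases (the VIP, port and cert path here are invented
for illustration, not from the thread):

```
# haproxy.cfg fragment: terminate SSL on the VIP, then proxy plain
# HTTP to a service bound to 127.0.0.1 on the same host
frontend keystone-public
    bind 192.0.2.22:13000 ssl crt /etc/haproxy/overcloud.pem
    default_backend keystone

backend keystone
    server keystone-local 127.0.0.1:5000 check
```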

> In this chain none of the ports need to be changed. One of the major issues I 
> have come across is the hard coding of the Keystone ports in the OpenStack 
> service's configuration files. With the above connection scheme none of the 
> ports need to change.

But we do need to have HAProxy not wildcard bind, as Greg points out,
and to make OS services bind to 127.0.0.1 as Jan pointed out.

I suspect we need to put this through the specs process (which ops
teams are starting to watch) to ensure we get enough input.

I'd love to see:
 - SSL by default
 - A setup we can document in the ops guide / HA openstack install
guide - e.g. we don't need to be doing it a third different way (or we
can update the existing docs if what we converge on is better).
 - Only SSL enabled endpoints accessible from outside the machine (so
python processes bound to localhost as a security feature).

Eventually we may need to scale traffic beyond one HAProxy, at which
point we'll need to bring something altogether more sophisticated in -
let's design that when we need it.
Sooner than that we're likely going to need to scale load beyond one
control plane server at which point the HAProxy VIP either needs to be
distributed (so active-active load receiving) or we need to go
user -> haproxy (VIP) -> SSL endpoint (on any control plane node) ->
localhost bound service.
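
That chain could be sketched roughly as follows, with haproxy passing the
SSL stream through untouched in TCP mode and stunnel terminating it on each
control plane node (all names, addresses and ports below are illustrative
assumptions, not from the thread):

```
# haproxy.cfg fragment on the VIP holder: TCP mode, SSL passes through
frontend public-ssl
    mode tcp
    bind 192.0.2.22:13000
    default_backend ssl-endpoints

backend ssl-endpoints
    mode tcp
    server controller0 192.0.2.20:13000 check
    server controller1 192.0.2.21:13000 check
```

```
; stunnel.conf fragment on each control plane node: terminate SSL
; and hand off to the localhost-bound service
cert = /etc/stunnel/overcloud.pem

[keystone]
accept = 192.0.2.20:13000
connect = 127.0.0.1:5000
```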

HTH,
Rob



Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-17 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
We are considering the following connection chain:

   -> HAProxy      ->  stunnel        ->  OS services bound to 127.0.0.1
      Virtual IP       server IP          localhost 127.0.0.1
      secure           SSL terminate      unsecure

In this chain none of the ports need to be changed. One of the major issues I have 
come across is the hard coding of the Keystone ports in the OpenStack service's 
configuration files. With the above connection scheme none of the ports need to 
change.

Mark

> -Original Message-
> From: Gregory Haynes [mailto:g...@greghaynes.net]
> Sent: Friday, May 16, 2014 9:25 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [TripleO] Haproxy configuration options
> 
> On Fri, May 16, 2014, at 02:52 AM, Jan Provazník wrote:
> > On 05/12/2014 10:35 AM, Dmitriy Shulyak wrote:
> > > Adding haproxy (or keepalived with lvs for load balancing) will
> > > require binding haproxy and openstack services on different sockets.
> > > Afaik there are 3 approaches that tripleo could go with.
> > >
> > > Consider configuration with 2 controllers:
> > >
> > > haproxy:
> > >   nodes:
> > >     - name: controller0
> > >       ip: 192.0.2.20
> > >     - name: controller1
> > >       ip: 192.0.2.21
> > >
> > > 1. Binding haproxy on virtual ip and standard ports
> > >
> > > haproxy:
> > >   services:
> > >     - name: horizon
> > >       proxy_ip: 192.0.2.22 (virtual ip)
> > >       port: 80
> > >       proxy_port: 80
> > >     - name: neutron
> > >       proxy_ip: 192.0.2.22 (virtual ip)
> > >       proxy_port: 9696
> > >       port: 9696
> > >
> > > Pros: - No additional modifications in elements is required
> >
> > Actually openstack services elements have to be updated to bind to
> > local address only.
> 
> Another big issue with this setup is dealing with changes to interfaces on 
> the
> box. We would have to detect when a new interface is added, and have the
> app-specific logic to make the application aware of the new interface
> (typically just editing the app's config file isn't enough for this). Note 
> that this
> is not an issue when an app binds to 0.0.0.0.
> 
> >
> > > HA-Proxy version 1.4.24 2013/06/17 What was the reason this approach
> > > was dropped?
> >
> > IIRC the major reason was that having 2 services on same port (but
> > different interface) would be too confusing for anyone who is not
> > aware of this fact.
> >
> > >
> > > 2. Haproxy listening on standard ports, services on non-standard
> > >
> > > haproxy:
> > >   services:
> > >     - name: horizon
> > >       proxy_ip: 192.0.2.22 (virtual ip)
> > >       port: 8080
> > >       proxy_port: 80
> > >     - name: neutron
> > >       proxy_ip: 192.0.2.22 (virtual ip)
> > >       proxy_port: 9696
> > >       port: 9797
> > >
> > > Pros: - No changes will be required to init-keystone part of
> > > workflow - Proxied services will be accessible on accustomed ports
> >
> > Bear in mind that we already use non-standard SSL ports for public
> > endpoints. Also extra work will be required for setting up stunnels
> > (element openstack-ssl).
> >
> > > - No changes to configs where services ports need to be hardcoded,
> > > for example in nova.conf https://review.openstack.org/#/c/92550/
> > >
> > > Cons: - Config files should be changed to add possibility of ports
> > > configuration
> >
> > Another con is updating selinux and firewall rules for each node.
> >
> 
> Also Iptables rules. On the plus side, these ports should *really* be
> configurable anyway.
> 
> > >
> > > 3. haproxy on non-standard ports, with services on standard
> > >
> > > haproxy:
> > >   services:
> > >     - name: horizon
> > >       proxy_ip: 192.0.2.22 (virtual ip)
> > >       port: 8080
> > >       proxy_port: 80
> > >     - name: neutron
> > >       proxy_ip: 192.0.2.22 (virtual ip)
> > >       proxy_port: 9797
> > >       port: 9696
> > >
> > > Notice that i changed only port for neutron, main endpoint for
> > > horizon should listen on default http or https ports.
> > >
> >
> > Agree that having the 2 service ports switched in opposite directions is
> > sub-optimal.
> 
> +1 on this solution being the least preferred - we shouldn't be pushing
> the extra configuration work onto our clients.
> 
> >
> > > Basically it is the opposite of approach 2. I would prefer to go with 2,
> > > since it requires only minor refactoring.
> > >
> > > Thoughts?
> > >
> >
> > Options 2 and 3 seem quite equal based on pros vs cons. Maybe we
> > should reconsider option 1?
> >
> > Jan
> >
> 



Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-16 Thread Gregory Haynes
On Fri, May 16, 2014, at 02:52 AM, Jan Provazník wrote:
> On 05/12/2014 10:35 AM, Dmitriy Shulyak wrote:
> > Adding haproxy (or keepalived with lvs for load balancing) will
> > require binding haproxy and openstack services on different sockets.
> > Afaik there are 3 approaches that tripleo could go with.
> >
> > Consider configuration with 2 controllers:
> >
> > haproxy:
> >   nodes:
> >     - name: controller0
> >       ip: 192.0.2.20
> >     - name: controller1
> >       ip: 192.0.2.21
> >
> > 1. Binding haproxy on virtual ip and standard ports
> >
> > haproxy:
> >   services:
> >     - name: horizon
> >       proxy_ip: 192.0.2.22 (virtual ip)
> >       port: 80
> >       proxy_port: 80
> >     - name: neutron
> >       proxy_ip: 192.0.2.22 (virtual ip)
> >       proxy_port: 9696
> >       port: 9696
> >
> > Pros: - No additional modifications in elements is required
> 
> Actually openstack services elements have to be updated to bind to local
> address only.

Another big issue with this setup is dealing with changes to interfaces
on the box. We would have to detect when a new interface is added, and
have the app-specific logic to make the application aware of the new
interface (typically just editing the app's config file isn't enough for
this). Note that this is not an issue when an app binds to 0.0.0.0.

> 
> > HA-Proxy version 1.4.24 2013/06/17 What was the reason this approach
> >  was dropped?
> 
> IIRC the major reason was that having 2 services on same port (but
> different interface) would be too confusing for anyone who is not aware
> of this fact.
> 
> >
> > 2. Haproxy listening on standard ports, services on non-standard
> >
> > haproxy:
> >   services:
> >     - name: horizon
> >       proxy_ip: 192.0.2.22 (virtual ip)
> >       port: 8080
> >       proxy_port: 80
> >     - name: neutron
> >       proxy_ip: 192.0.2.22 (virtual ip)
> >       proxy_port: 9696
> >       port: 9797
> >
> > Pros: - No changes will be required to init-keystone part of
> > workflow - Proxied services will be accessible on accustomed ports
> 
> Bear in mind that we already use non-standard SSL ports for public
> endpoints. Also extra work will be required for setting up stunnels
> (element openstack-ssl).
> 
> > - No changes to configs where services ports need to be hardcoded,
> > for example in nova.conf https://review.openstack.org/#/c/92550/
> >
> > Cons: - Config files should be changed to add possibility of ports
> > configuration
> 
> Another con is updating selinux and firewall rules for each node.
> 

Also Iptables rules. On the plus side, these ports should *really* be
configurable anyway.

> >
> > 3. haproxy on non-standard ports, with services on standard
> >
> > haproxy:
> >   services:
> >     - name: horizon
> >       proxy_ip: 192.0.2.22 (virtual ip)
> >       port: 8080
> >       proxy_port: 80
> >     - name: neutron
> >       proxy_ip: 192.0.2.22 (virtual ip)
> >       proxy_port: 9797
> >       port: 9696
> >
> > Notice that i changed only port for neutron, main endpoint for
> > horizon should listen on default http or https ports.
> >
> 
> Agree that having the 2 service ports switched in opposite directions is 
> sub-optimal.

+1 on this solution being the least preferred - we shouldn't be pushing
the extra configuration work onto our clients.

> 
> > Basically it is the opposite of approach 2. I would prefer to go with 2,
> > since it requires only minor refactoring.
> >
> > Thoughts?
> >
> 
> Options 2 and 3 seem quite equal based on pros vs cons. Maybe we should 
> reconsider option 1?
> 
> Jan
> 



Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-16 Thread Dmitriy Shulyak
>
>
>> HA-Proxy version 1.4.24 2013/06/17 What was the reason this approach
>>  was dropped?
>>
>
> IIRC the major reason was that having 2 services on same port (but
> different interface) would be too confusing for anyone who is not aware
> of this fact.
>
>
Major part of the documentation for haproxy with a vip setup is done with
duplicated ports.
From my experience, lb solutions have been made with the load balancer sitting
on VIRTUAL_IP:STANDARD_PORT and/or PUBLIC_VIRTUAL_IP:STANDARD_PORT.

Maybe this is not so big an issue? It would be much easier to start with such a
deployment configuration.

Dmitry


Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-16 Thread Jan Provazník

On 05/12/2014 10:35 AM, Dmitriy Shulyak wrote:

> Adding haproxy (or keepalived with lvs for load balancing) will require
> binding haproxy and openstack services on different sockets.
> AFAIK there are three approaches that TripleO could go with.
>
> Consider configuration with 2 controllers:
>
> haproxy:
> nodes:
> -   name: controller0
> ip: 192.0.2.20
> -   name: controller1
> ip: 192.0.2.21
>
> 1. Binding haproxy on virtual ip and standard ports
>
> haproxy:
> services:
> -   name: horizon
> proxy_ip: 192.0.2.22 (virtual ip)
> port: 80
> proxy_port: 80
> -   name: neutron
> proxy_ip: 192.0.2.22 (virtual ip)
> proxy_port: 9696
> port: 9696
>
> Pros:
> - No additional modifications in elements are required


Actually the openstack service elements have to be updated to bind to the
local address only.
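As a sketch of what such an update could look like for one service (the address comes from the example above; whether the element would template it exactly this way is an assumption):

```
# nova.conf fragment: bind the compute API to the controller's own
# address instead of 0.0.0.0, leaving the VIP free for haproxy
[DEFAULT]
osapi_compute_listen = 192.0.2.20
osapi_compute_listen_port = 8774
```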


> HA-Proxy version 1.4.24 2013/06/17
> What was the reason this approach was dropped?


IIRC the major reason was that having 2 services on the same port (but
different interfaces) would be too confusing for anyone who is not aware
of this fact.



> 2. Haproxy listening on standard ports, services on non-standard
>
> haproxy:
> services:
> -   name: horizon
> proxy_ip: 192.0.2.22 (virtual ip)
> port: 8080
> proxy_port: 80
> -   name: neutron
> proxy_ip: 192.0.2.22 (virtual ip)
> proxy_port: 9696
> port: 9797
>
> Pros:
> - No changes will be required to init-keystone part of workflow
> - Proxied services will be accessible on accustomed ports


Bear in mind that we already use non-standard SSL ports for the public
endpoints. Also, extra work will be required for setting up stunnels
(element openstack-ssl).
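For illustration, a minimal stunnel fragment of the kind the openstack-ssl element would have to manage (the ports and service name here are assumptions, not the element's actual layout):

```
; stunnel: terminate SSL on the public non-standard port and forward
; plaintext to the local keystone API
[keystone]
accept  = 192.0.2.22:13000
cert    = /etc/stunnel/stunnel.pem
connect = 127.0.0.1:5000
```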


> - No changes to configs where service ports need to be hardcoded,
> for example in nova.conf https://review.openstack.org/#/c/92550/
>
> Cons:
> - Config files should be changed to add possibility of ports
> configuration


Another con is updating SELinux and firewall rules on each node.



> 3. haproxy on non-standard ports, with services on standard
>
> haproxy:
> services:
> -   name: horizon
> proxy_ip: 192.0.2.22 (virtual ip)
> port: 8080
> proxy_port: 80
> -   name: neutron
> proxy_ip: 192.0.2.22 (virtual ip)
> proxy_port: 9797
> port: 9696
>
> Notice that I changed only the port for neutron; the main endpoint for
> horizon should listen on the default HTTP or HTTPS ports.



Agree that having the two ports swapped one way for one service and the
other way for another is sub-optimal.



> Basically it is the opposite of approach 2. I would prefer to go with 2,
> because it requires only minor refactoring.
>
> Thoughts?



Options 2 and 3 seem quite equal based on pros vs cons. Maybe we should 
reconsider option 1?


Jan



[openstack-dev] [TripleO] Haproxy configuration options

2014-05-12 Thread Dmitriy Shulyak
Adding haproxy (or keepalived with lvs for load balancing) will require
binding haproxy and openstack services on different sockets.
AFAIK there are three approaches that TripleO could go with.

Consider configuration with 2 controllers:

haproxy:
nodes:
-   name: controller0
ip: 192.0.2.20
-   name: controller1
ip: 192.0.2.21

1. Binding haproxy on virtual ip and standard ports

haproxy:
services:
-   name: horizon
proxy_ip: 192.0.2.22 (virtual ip)
port: 80
proxy_port: 80
-   name: neutron
proxy_ip: 192.0.2.22 (virtual ip)
proxy_port: 9696
port: 9696

Pros:
- No additional modifications in elements are required

HA-Proxy version 1.4.24 2013/06/17
What was the reason this approach was dropped?
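To make option 1 concrete, a minimal haproxy.cfg sketch (illustrative only; it assumes the services bind to the controllers' own addresses, so the VIP bind on the same port does not conflict):

```
# option 1: haproxy owns the VIP on the standard port; backends are
# the controllers on the same port
frontend neutron
    bind 192.0.2.22:9696
    default_backend neutron-api

backend neutron-api
    server controller0 192.0.2.20:9696 check
    server controller1 192.0.2.21:9696 check
```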

2. Haproxy listening on standard ports, services on non-standard

haproxy:
services:
-   name: horizon
proxy_ip: 192.0.2.22 (virtual ip)
port: 8080
proxy_port: 80
-   name: neutron
proxy_ip: 192.0.2.22 (virtual ip)
proxy_port: 9696
port: 9797

Pros:
- No changes will be required to init-keystone part of workflow
- Proxied services will be accessible on accustomed ports
- No changes to configs where services ports need to be hardcoded, for
example in nova.conf https://review.openstack.org/#/c/92550/

Cons:
- Config files need to be changed to make the service ports configurable
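For comparison, the same kind of haproxy.cfg sketch for option 2 (again illustrative; the 8080 backend port comes from the example above):

```
# option 2: haproxy on the standard port, services moved to
# non-standard ports on each controller
frontend horizon
    bind 192.0.2.22:80
    default_backend horizon-backend

backend horizon-backend
    server controller0 192.0.2.20:8080 check
    server controller1 192.0.2.21:8080 check
```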

3. haproxy on non-standard ports, with services on standard

haproxy:
services:
-   name: horizon
proxy_ip: 192.0.2.22 (virtual ip)
port: 8080
proxy_port: 80
-   name: neutron
proxy_ip: 192.0.2.22 (virtual ip)
proxy_port: 9797
port: 9696

Notice that I changed only the port for neutron; the main endpoint for
horizon should listen on the default HTTP or HTTPS ports.

Basically it is the opposite of approach 2. I would prefer to go with 2,
because it requires only minor refactoring.

Thoughts?