Re: [openstack-dev] [Octavia] Responsibilities for controller drivers

2014-09-15 Thread Stephen Balukoff
Hi Brandon!

My responses in-line:

On Fri, Sep 12, 2014 at 11:27 AM, Brandon Logan brandon.lo...@rackspace.com wrote:

 In IRC the topic came up about supporting many-to-many load balancers to
 amphorae.  I believe a consensus was reached that allowing only one-to-many
 load balancers to amphorae would be the first step forward, to be
 re-evaluated later, since colocation and apolocation will need to work
 (which brings up another topic, defining what it actually means to be
 colocated: on the same amphora, on the same amphora host, on the same
 cell/cluster, on the same data center/availability zone. That should be
 something we discuss later, but not right now).

 I am fine with that decision, but Doug brought up a good point that
 this could very well just be a decision for the controller driver and
 Octavia shouldn't mandate this for all drivers.  So I think we need to
 clearly define what decisions are the responsibility of the controller
 driver versus what decisions are mandated by Octavia's constructs.


In my mind, the only thing dictated by the controller to the driver here
would be things related to colocation / apolocation. So in order to fully
have that discussion, we first need a conversation about what these things
actually mean in the context of Octavia, and/or to get specific
requirements from operators.  The reference driver (i.e. the haproxy
amphora) will of course have to follow a given behavior here as well, and
there's the possibility that even if we don't dictate behavior one way or
the other, operators and users may come to expect the behavior of the
reference driver to become the de facto requirements.



 Items I can come up with off the top of my head:

 1) LB:Amphora - M:N vs 1:N


My opinion:  For simplicity, the first revision should be 1:N, but leave
open the possibility of M:N at a later date, depending on what people
require. That is to say, we'll only do 1:N at first so we can have simpler
scheduling algorithms for now, but let's not paint ourselves into a corner
in other portions of the code by assuming there will only ever be one LB on
an amphora.
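
To make the scheduling argument concrete, here's a rough sketch; the
function and the compute_api object are made-up names for illustration,
not anything that exists in the codebase:

    # Purely illustrative sketch -- schedule_amphorae_for_lb and compute_api
    # are hypothetical, not existing Octavia code.  With a strict 1:N
    # LB:Amphora mapping, "scheduling" a new LB is just booting fresh
    # amphorae for it; no capacity tracking across existing amphorae needed.
    def schedule_amphorae_for_lb(lb_id, count, compute_api):
        """Boot `count` dedicated amphorae for a single load balancer."""
        amphorae = []
        for i in range(count):
            # Each amphora serves exactly one LB, so placement only has to
            # consider anti-affinity among these instances, not which
            # existing amphorae have spare capacity.
            server = compute_api.boot_amphora(name="amp-%s-%d" % (lb_id, i))
            amphorae.append(server)
        return amphorae

With M:N we'd instead need capacity tracking and bin-packing across
existing amphorae, which is exactly the complexity I'd like to defer.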


 2) VIPs:LB - M:N vs 1:N


So, I would revise that to be N:1 or 1:1. I don't think we'll ever want to
support a case where multiple LBs share the same VIP. (Multiple amphorae
per VIP, yes... but not multiple LBs per VIP. LBs are logical constructs
that also provide for good separation of concerns, particularly around
security.)

The most solid use case for N:1 that I've heard is the IPv6 use case, where
a user wants to expose the exact same services over IPv4 and IPv6, and
therefore it makes sense to be able to have multiple VIPs per load
balancer. (In fact, I'm not aware of other use cases here that hold any
water.) Having said this, we're quite a ways from IPv6 being ready for use
in the underlying networking infrastructure.  So...  again, I would say
let's go with 1:1 for now to make things simple for scheduling, but not
paint ourselves into a corner here architecturally in other areas of the
code by assuming there will only ever be one VIP per LB.
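
To illustrate the "don't paint ourselves into a corner" part, a minimal
sketch (hypothetical classes, not our actual data model) of keeping the
VIP relationship list-valued while still enforcing 1:1 for now:

    # Illustrative only.  The VIP relationship stays list-valued even while
    # we enforce one VIP per LB, so adding an IPv6 VIP later doesn't ripple
    # through every caller that touches VIPs.
    class Vip(object):
        def __init__(self, ip_address, ip_version):
            self.ip_address = ip_address
            self.ip_version = ip_version      # 4 or 6

    class LoadBalancer(object):
        MAX_VIPS = 1                          # temporary 1:1 restriction

        def __init__(self, lb_id):
            self.id = lb_id
            self.vips = []                    # N:1-ready container

        def add_vip(self, vip):
            # The restriction lives in exactly one place and is easy to lift.
            if len(self.vips) >= self.MAX_VIPS:
                raise ValueError("only %d VIP(s) per LB for now" % self.MAX_VIPS)
            self.vips.append(vip)

Lifting the restriction later then becomes a policy change rather than a
refactor of everything that assumes a single VIP object.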

3) Pool:HMs - 1:N vs 1:1


Does anyone have a solid use case for having more than one health monitor
per pool?  (And how do you resolve conflicts in health monitor check
results?)  I can't think of one, so 1:1 has my vote here.
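
For what it's worth, here's the conflict I mean, sketched out (illustrative
only): with more than one monitor you have to pick an aggregation policy,
and the obvious choices disagree exactly when the monitors do:

    # Illustrative only: "all must pass" vs. "any passes" give different
    # answers whenever the monitors disagree, so multiple HMs per pool force
    # a policy decision we don't currently need to make.
    def member_is_up(check_results, policy="all"):
        """check_results: list of booleans, one per health monitor."""
        if policy == "all":
            return all(check_results)
        if policy == "any":
            return any(check_results)
        raise ValueError("unknown aggregation policy: %s" % policy)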




 I'm sure there are others.  I'm sure each one will need to be evaluated
 on a case-by-case basis.  We will be walking a fine line between
 flexibility and complexity.  We just need to define how far over that
 line and in which direction we are willing to go.

 Thanks,
 Brandon




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [Octavia] Responsibilities for controller drivers

2014-09-15 Thread Adam Harwell
I pretty much completely agree with Stephen here, other than believing we 
should do N:1 on VIPs (item 2 in your list) from the start. We know we're doing 
IPv6 this way, and I'd rather not put off support for it at the 
controller/driver/whatever layer just because the underlying infrastructure 
isn't there yet. I'd like to be 100% ready when it is, not wait until the 
network is ready and then do a refactor.

--Adam

https://keybase.io/rm_you



Re: [openstack-dev] [Octavia] Responsibilities for controller drivers

2014-09-15 Thread Brandon Logan
Hi Stephen,

Same drill

On Mon, 2014-09-15 at 13:33 -0700, Stephen Balukoff wrote:
 Hi Brandon!
 
 
 My responses in-line:
 
 On Fri, Sep 12, 2014 at 11:27 AM, Brandon Logan brandon.lo...@rackspace.com wrote:
 In IRC the topic came up about supporting many-to-many load balancers to
 amphorae.  I believe a consensus was reached that allowing only one-to-many
 load balancers to amphorae would be the first step forward, to be
 re-evaluated later, since colocation and apolocation will need to work
 (which brings up another topic, defining what it actually means to be
 colocated: on the same amphora, on the same amphora host, on the same
 cell/cluster, on the same data center/availability zone. That should be
 something we discuss later, but not right now).
 
 I am fine with that decision, but Doug brought up a good point that
 this could very well just be a decision for the controller driver and
 Octavia shouldn't mandate this for all drivers.  So I think we need to
 clearly define what decisions are the responsibility of the controller
 driver versus what decisions are mandated by Octavia's constructs.
 
 
 In my mind, the only thing dictated by the controller to the driver here
 would be things related to colocation / apolocation. So in order to fully
 have that discussion, we first need a conversation about what these things
 actually mean in the context of Octavia, and/or to get specific
 requirements from operators.  The reference driver (i.e. the haproxy
 amphora) will of course have to follow a given behavior here as well, and
 there's the possibility that even if we don't dictate behavior one way or
 the other, operators and users may come to expect the behavior of the
 reference driver to become the de facto requirements.

So since with HA we will want apolocation, are you saying the controller
should dictate that every driver create a load balancer's amphorae on
different hosts?  I'm not sure the controller could enforce this, other
than through code reviews, but I might be short-sighted here.
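
The closest thing I can picture (purely hypothetical, not an existing
interface) is the controller double-checking whatever placement the driver
reports back, something like the sketch below; for the nova case we could
presumably also lean on Nova server groups with an anti-affinity policy.

    # Hypothetical sketch, not an existing interface: the driver reports
    # back where it placed each amphora, and the controller verifies the
    # apolocation constraint instead of trusting code review alone.
    def validate_apolocation(lb_id, placements):
        """placements: dict of amphora_id -> compute host, as reported by the driver."""
        hosts = list(placements.values())
        if len(set(hosts)) != len(hosts):
            raise RuntimeError("apolocation violated for LB %s: %s"
                               % (lb_id, placements))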

  
 
 Items I can come up with off the top of my head:
 
 1) LB:Amphora - M:N vs 1:N
 
 
 My opinion:  For simplicity, the first revision should be 1:N, but leave
 open the possibility of M:N at a later date, depending on what people
 require. That is to say, we'll only do 1:N at first so we can have
 simpler scheduling algorithms for now, but let's not paint ourselves
 into a corner in other portions of the code by assuming there will
 only ever be one LB on an amphora.

This is reasonable.  Of course, this brings up the question on whether
we should keep the table structure as is with a M:N relationship.  My
opinion is we start with the 1:N table structure.  My reasons are in
response to your comment on this review:

https://review.openstack.org/#/c/116718/
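
To make the comparison concrete, here's a rough sketch of the two layouts
(hypothetical table and column names, not the schema in that review):

    # Sketch only -- not the proposed schema.
    from sqlalchemy import Column, ForeignKey, MetaData, String, Table

    metadata = MetaData()

    load_balancer = Table(
        "load_balancer", metadata,
        Column("id", String(36), primary_key=True),
    )

    # Option A: 1:N -- each amphora belongs to at most one load balancer,
    # expressed as a nullable FK on the amphora row.
    amphora = Table(
        "amphora", metadata,
        Column("id", String(36), primary_key=True),
        Column("load_balancer_id", String(36),
               ForeignKey("load_balancer.id"), nullable=True),
    )

    # Option B: M:N -- an association table; more flexible, but scheduling
    # and colocation logic then have to reason about shared amphorae.
    load_balancer_amphora = Table(
        "load_balancer_amphora", metadata,
        Column("load_balancer_id", String(36),
               ForeignKey("load_balancer.id"), primary_key=True),
        Column("amphora_id", String(36),
               ForeignKey("amphora.id"), primary_key=True),
    )

Moving from A to B later is a contained migration, so starting simple
doesn't lock us in.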

  
 2) VIPs:LB - M:N vs 1:N
 
 
 So, I would revise that to be N:1 or 1:1. I don't think we'll ever
 want to support a case where multiple LBs share the same VIP.
 (Multiple amphorae per VIP, yes... but not multiple LBs per VIP. LBs
 are logical constructs that also provide for good separation of
 concerns, particularly around security.)

Yeah sorry about that, brain fart.  Unless we want shareable VIPs!?
anyone? anyone?
 
 
 The most solid use case for N:1 that I've heard is the IPv6 use case,
 where a user wants to expose the exact same services over IPv4 and
 IPv6, and therefore it makes sense to be able to have multiple VIPs
 per load balancer. (In fact, I'm not aware of other use cases here
 that hold any water.) Having said this, we're quite a ways from IPv6
 being ready for use in the underlying networking infrastructure.
 So...  again, I would say let's go with 1:1 for now to make things
 simple for scheduling, but not paint ourselves into a corner here
 architecturally in other areas of the code by assuming there will only
 ever be one VIP per LB.

Yeah, if N:1 ever comes up as something we should and can do, we'll revisit
it then.
 
 
 3) Pool:HMs - 1:N vs 1:1
 
 
 Does anyone have a solid use case for having more than one health
 monitor per pool?  (And how do you resolve conflicts in health monitor
 check results?)  I can't think of one, so 1:1 has my vote here.

I don't know of any strong ones, but it is allowed by some vendors.
 
 
  
 
 I'm sure there are others.  I'm sure each one will need to be evaluated
 on a case-by-case basis.  We will be walking a fine line between
 flexibility and complexity.  We just need to define how far over that
 line and in which direction we are willing to go.
 
 Thanks,
 Brandon