Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Eichberger, German
Hi,

My 2 cents for the multiple listeners per load balancer discussion: We have 
customers who like to have a listener on port 80 and one on port 443 on the 
same VIP (we had to patch libra to allow two listeners in one single haproxy) 
- so having that would be great.

I like the proposed status :-)

Thanks,
German

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Sunday, August 17, 2014 8:57 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure

Oh hello again!

You know the drill!

On Sat, 2014-08-16 at 11:42 -0700, Stephen Balukoff wrote:
 Hi Brandon,
 
 
 Responses in-line:
 
 On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:
 Comments in-line
 
 On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
  Hi folks,

  I'm OK with going with no shareable child entities (Listeners, Pools, Members, TLS-related objects, L7-related objects, etc.). This will simplify a lot of things (like status reporting), and we can probably safely work under the assumption that any user who has a use case in which a shared entity is useful is probably also technically savvy enough to not only be able to manage consistency problems themselves, but is also likely to want to have that level of control.

  Also, an haproxy instance should map to a single listener. This makes management of the configuration template simpler and the behavior of a single haproxy instance more predictable. Also, when it comes to configuration updates (as will happen, say, when a new member gets added to a pool), it's less risky and error-prone to restart the haproxy instance for just the affected listener, and not for all listeners on the Octavia VM. The only down-sides I see are that we consume slightly more memory, we don't have the advantage of a shared SSL session cache (probably doesn't matter for 99.99% of sites using TLS anyway), and certain types of persistence wouldn't carry over between different listeners if they're implemented poorly by the user. :/ (In other words, negligible down-sides to this.)

 This is fine by me for now, but I think this might be something we can revisit later after we have the advantage of hindsight. Maybe a configurable option.

 Sounds good, as long as we agree on a path forward. In the meantime, is there anything I'm missing which would be a significant advantage of having multiple Listeners configured in a single haproxy instance? (Or rather, where a single haproxy instance maps to a loadbalancer object?)

No particular reason as of now.  Just feel like that could be something that 
could hinder a particular feature or even performance in the future.  It's not 
rooted in any fact or past experience.

  
 I have no problem with this. However, one thing I often do think about is that it's not really ever going to be load balancing anything with just a load balancer and listener. It has to have a pool and members as well. So having ACTIVE on the load balancer and listener, and still not really load balancing anything, is a bit odd. Which is why I'm in favor of only doing creates by specifying the entire tree in one call (loadbalancer -> listeners -> pool -> members). Feel free to disagree with me on this because I know this is not something everyone likes. I'm sure I am forgetting something that makes this a hard thing to do. But if this were the case, then I think only having the provisioning status on the load balancer makes sense again. The reason I am advocating for the provisioning status on the load balancer is that it is simpler, and there is only one place to look to see if everything was successful or if there was an issue.
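
 As a sketch of what Brandon proposes, a single-call create of the entire tree might look roughly like this (a hypothetical request body; the field names and nesting are assumptions for illustration, not a settled Octavia API):

 ```json
 {
   "loadbalancer": {
     "name": "web-lb",
     "vip_subnet_id": "subnet-uuid",
     "listeners": [
       {
         "protocol": "HTTP",
         "protocol_port": 80,
         "default_pool": {
           "protocol": "HTTP",
           "lb_algorithm": "ROUND_ROBIN",
           "members": [
             {"address": "10.0.0.10", "protocol_port": 8080},
             {"address": "10.0.0.11", "protocol_port": 8080}
           ]
         }
       }
     ]
   }
 }
 ```

 With everything created in one request, a single provisioning status on the load balancer can report the success or failure of the whole tree.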
 
 
 Actually, there is one case where it makes sense to have an ACTIVE Listener when that listener has no pools or members: probably the 2nd or 3rd most common type of load balancing service we deploy is just an HTTP listener on port 80 that redirects all requests to the HTTPS listener on port 443. While this can be done using a (small) pool of back-end servers responding to the port 80 requests, there's really no point in not having the haproxy instance do this redirect directly for sites that want all access to happen over SSL.
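
Concretely, the redirect-only listener Stephen describes needs no backend section at all. A minimal haproxy sketch (the bind address is illustrative, and the `redirect scheme` directive assumes haproxy 1.5):

```
frontend listener_80
    bind 203.0.113.10:80
    mode http
    # No default_backend: every request is answered with a 301 to HTTPS,
    # so this listener is ACTIVE with no pool or members behind it.
    redirect scheme https code 301
```

On haproxy 1.4, which lacks `redirect scheme`, a `redirect prefix https://...` rule per site would be the rough equivalent.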

Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Brandon Logan
Hi German,
I don't think it is a requirement that those two frontend sections (or
listen sections) have to live in the same config.  I thought if they
were listening on the same IP but different ports it could be in two
different haproxy instances.  I could be wrong though.

Thanks,
Brandon
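
For what it's worth, this is indeed how haproxy behaves: two independently started processes can each bind the same IP on different ports. A minimal sketch of the two configs (addresses, paths, and section names are illustrative, and each file would also need its own `global`/`defaults` sections):

```
# /etc/octavia/listener-80.cfg  -- process 1: haproxy -f listener-80.cfg
frontend listener_80
    bind 203.0.113.10:80
    mode http
    default_backend pool_80

# /etc/octavia/listener-443.cfg -- process 2, started independently
frontend listener_443
    bind 203.0.113.10:443
    mode tcp
    default_backend pool_443
```

A conflict would only arise if both processes tried to bind the same IP *and* port.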


Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Stephen Balukoff
Yes, I'm advocating keeping each listener in a separate haproxy
configuration (and separate running instance). This includes the example I
mentioned: One that listens on port 80 for HTTP requests and redirects
everything to the HTTPS listener on port 443.  (The port 80 listener is a
simple configuration with no pool or members, and it doesn't take much to
have it run on the same host as the port 443 listener.)

I've not explored haproxy's new redirect scheme capabilities in 1.5 yet.
Though I doubt it would have a significant impact on the operational model
where each listener is a separate haproxy configuration and instance.

German: Are you saying that the port 80 listener and port 443 listener
would have the exact same back-end configuration? If so, then what we're
discussing here, with no sharing of child entities, would mean that the
customer has to set up and manage these duplicate pools and members. If
that's not acceptable, now is the time to register that opinion, eh!

Stephen



Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Eichberger, German
Hi Steven,

In my example we don’t share anything except the VIP ☺ So my motivation is to see 
if we can have two listeners share the same VIP. Hope that makes sense.

German


Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Stephen Balukoff
German--

By 'VIP' do you mean something roughly equivalent to 'loadbalancer' in the
Neutron LBaaS object model (as we've discussed in the past)?  That is to
say, is this thingy a parent object to the Listener in the hierarchy? If
so, then what we're describing definitely accommodates that.

(And yes, we commonly see deployments with listeners on port 80 and port
443 on the same virtual IP address.)

Stephen



Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Eichberger, German
No, by ‘VIP’ I mean the original meaning, more akin to a Floating IP…

German


Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Stephen Balukoff
Hi German,


On Mon, Aug 18, 2014 at 3:10 PM, Eichberger, German 
german.eichber...@hp.com wrote:

  No, I mean with VIP the original meaning more akin to a Floating IP…


I think that's what I was describing in my previous message. But in any case,
yes-- the model we are describing should accommodate that.


Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-17 Thread Brandon Logan
Oh hello again!

You know the drill!

On Sat, 2014-08-16 at 11:42 -0700, Stephen Balukoff wrote:
 Hi Brandon,
 
 
 Responses in-line:
 
 On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
 Comments in-line
 
 On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
  Hi folks,
 
 
  I'm OK with going with no shareable child entities
 (Listeners, Pools,
  Members, TLS-related objects, L7-related objects, etc.).
 This will
  simplify a lot of things (like status reporting), and we can
 probably
  safely work under the assumption that any user who has a use
 case in
  which a shared entity is useful is probably also technically
 savvy
  enough to not only be able to manage consistency problems
 themselves,
  but is also likely to want to have that level of control.
 
 
  Also, an haproxy instance should map to a single listener.
 This makes
  management of the configuration template simpler and the
 behavior of a
  single haproxy instance more predictable. Also, when it
 comes to
  configuration updates (as will happen, say, when a new
 member gets
  added to a pool), it's less risky and error prone to restart
 the
  haproxy instance for just the affected listener, and not for
 all
  listeners on the Octavia VM. The only down-sides I see are
 that we
  consume slightly more memory, we don't have the advantage of
 a shared
  SSL session cache (probably doesn't matter for 99.99% of
 sites using
  TLS anyway), and certain types of persistence wouldn't carry
 over
  between different listeners if they're implemented poorly by
 the
  user. :/  (In other words, negligible down-sides to this.)
 
 
 This is fine by me for now, but I think this might be
 something we can
 revisit later after we have the advantage of hindsight.  Maybe
 a
 configurable option.
 
 
 Sounds good, as long as we agree on a path forward. In the mean time,
 is there anything I'm missing which would be a significant advantage
 of having multiple Listeners configured in a single haproxy instance?
 (Or rather, where a single haproxy instance maps to a loadbalancer
 object?)

No particular reason as of now.  Just feel like that could be something
that could hinder a particular feature or even performance in the
future.  It's not rooted in any fact or past experience.

  
 I have no problem with this. However, one thing I often do
 think about
 is that it's not really ever going to be load balancing
 anything with
 just a load balancer and listener.  It has to have a pool and
 members as
 well.  So having ACTIVE on the load balancer and listener, and
 still not
 really load balancing anything is a bit odd.  Which is why I'm
 in favor
 of only doing creates by specifying the entire tree in one
 call
 (loadbalancer -> listeners -> pool -> members).  Feel free to
 disagree with me
 on this because I know this is not something everyone likes.  I'm
 sure I am
 forgetting something that makes this a hard thing to do.  But
 if this
 were the case, then I think only having the provisioning
 status on the
 load balancer makes sense again.  The reason I am advocating
 for the
 provisioning status on the load balancer is because it is still
 simpler,
 and only one place to look to see if everything were
 successful or if
 there was an issue.
 
 
 Actually, there is one case where it makes sense to have an ACTIVE
 Listener when that listener has no pools or members:  Probably the 2nd
 or 3rd most common type of load balancing service we deploy is just
 an HTTP listener on port 80 that redirects all requests to the HTTPS
 listener on port 443. While this can be done using a (small) pool of
 back-end servers responding to the port 80 requests, there's really no
 point in not having the haproxy instance do this redirect directly for
 sites that want all access to happen over SSL. (For users that want
 them we also insert HSTS headers when we do this... but I digress. ;)
 )
 
 
 Anyway, my point is that there is a common production use case that
 calls for a listener with no pools or members.

Yeah, we do HTTPS redirect too (or HTTP redirect as I would call it... I
could digress myself).  I don't think it's common for our customers, but
it obviously should still be supported.  Also, wouldn't that break the
one-listener-per-instance rule? Also also, I think haproxy 1.5 has a
redirect scheme option that might do away with the extra 

Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-16 Thread Stephen Balukoff
Hi Brandon,

Responses in-line:

On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 Comments in-line

 On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
  Hi folks,
 
 
  I'm OK with going with no shareable child entities (Listeners, Pools,
  Members, TLS-related objects, L7-related objects, etc.). This will
  simplify a lot of things (like status reporting), and we can probably
  safely work under the assumption that any user who has a use case in
  which a shared entity is useful is probably also technically savvy
  enough to not only be able to manage consistency problems themselves,
  but is also likely to want to have that level of control.
 
 
  Also, an haproxy instance should map to a single listener. This makes
  management of the configuration template simpler and the behavior of a
  single haproxy instance more predictable. Also, when it comes to
  configuration updates (as will happen, say, when a new member gets
  added to a pool), it's less risky and error prone to restart the
  haproxy instance for just the affected listener, and not for all
  listeners on the Octavia VM. The only down-sides I see are that we
  consume slightly more memory, we don't have the advantage of a shared
  SSL session cache (probably doesn't matter for 99.99% of sites using
  TLS anyway), and certain types of persistence wouldn't carry over
  between different listeners if they're implemented poorly by the
  user. :/  (In other words, negligible down-sides to this.)

 This is fine by me for now, but I think this might be something we can
 revisit later after we have the advantage of hindsight.  Maybe a
 configurable option.


Sounds good, as long as we agree on a path forward. In the mean time, is
there anything I'm missing which would be a significant advantage of having
multiple Listeners configured in a single haproxy instance? (Or rather,
where a single haproxy instance maps to a loadbalancer object?)


 I have no problem with this. However, one thing I often do think about
 is that it's not really ever going to be load balancing anything with
 just a load balancer and listener.  It has to have a pool and members as
 well.  So having ACTIVE on the load balancer and listener, and still not
 really load balancing anything is a bit odd.  Which is why I'm in favor
 of only doing creates by specifying the entire tree in one call
 (loadbalancer -> listeners -> pool -> members).  Feel free to disagree with me
 on this because I know this is not something everyone likes.  I'm sure I am
 forgetting something that makes this a hard thing to do.  But if this
 were the case, then I think only having the provisioning status on the
 load balancer makes sense again.  The reason I am advocating for the
 provisioning status on the load balancer is because it is still simpler,
 and only one place to look to see if everything were successful or if
 there was an issue.


Actually, there is one case where it makes sense to have an ACTIVE Listener
when that listener has no pools or members:  Probably the 2nd or 3rd most
common type of load balancing service we deploy is just an HTTP listener
on port 80 that redirects all requests to the HTTPS listener on port 443.
While this can be done using a (small) pool of back-end servers responding
to the port 80 requests, there's really no point in not having the haproxy
instance do this redirect directly for sites that want all access to happen
over SSL. (For users that want them we also insert HSTS headers when we do
this... but I digress. ;) )

Anyway, my point is that there is a common production use case that calls
for a listener with no pools or members.
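
For reference, the redirect-only port-80 listener described above might be
expressed in haproxy 1.5 along these lines (VIP address and section name are
illustrative, not from any real deployment):

```
# Hypothetical haproxy 1.5 listener whose only job is to redirect to HTTPS.
# Note it needs no backend pool or members at all.
frontend redirect_http
    bind 203.0.113.10:80
    mode http
    redirect scheme https code 301
```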



 Again though, what you've proposed I am entirely fine with because it
 works great with having to create a load balancer first, then listener,
 and so forth.  It would also work fine with a single create call as
 well.


We should probably create more formal API documentation, eh. :)  (Let me
pull up my drafts from 5 months ago...)


 
  I don't think that these kinds of status are useful / appropriate for
  Pool, Member, Healthmonitor, TLS certificate id, or L7 Policy / Rule
  objects, as ultimately this boils down to configuration lines in an
  haproxy config somewhere, and really the Listener status is what will
  be affected when things are changed.

 Total agreement on this.
 
  I'm basically in agreement with Brandon on his points with operational
  status, though I would like to see these broken out into their various
  meanings for the different object types. I also think some object
  types won't need an operational status (eg. L7 Policies,
  healthmonitors, etc.) since these essentially boil down to lines in an
  haproxy configuration file.

 Yeah, I was thinking there could be more descriptive status names for the load
 balancer and listener statuses.  I was thinking the load balancer could have
 PENDING_VIP_CREATE/UPDATE/DELETE, but then that'd be painting us into a
 corner.  More general is needed.  With 

Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-15 Thread Eichberger, German
--Basically no shareable entities.
+1

That will make me insanely happy :-)

Regarding Listeners: I was assuming that a LoadBalancer would map to an haproxy 
instance - and a listener would be part of that haproxy. But I heard Stephen 
say that this is not so clear cut. So maybe listeners map to haproxy 
instances...

German

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Thursday, August 14, 2014 10:17 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Octavia] Object Model and DB Structure

So I've been assuming that the Octavia object model would be an exact copy of 
the neutron lbaas one with additional information for Octavia.
However, after thinking about it I'm not sure this is the right way to go 
because the object model in neutron lbaas may change in the future, and Octavia 
can't just change its object model when neutron lbaas/openstack lbaas changes 
its object model.  So if there are any lessons learned we would like to apply 
to Octavia's object model now is the time.

Entity name changes are also on the table if people don't really like some of 
the names.  Even adding new entities or removing entities if there are good 
reasons isn't out of the question.

Anyway here are a few of my suggestions.  Please add on to this if you want.  
Also, just flat out tell me I'm wrong on some of these suggestions if you feel 
as such.

A few improvements I'd suggest (using the current entity names):
-A real root object that is the only top level object (loadbalancer).
--This would be a 1:M relationship with Listeners, but Listeners would only be 
children of loadbalancers.
--Pools, Members, and Health Monitors would follow the same workflow.
--Basically no shareable entities.

-Provisioning status only on the root object (loadbalancer).
--PENDING_CREATE, PENDING_UPDATE, PENDING_DELETE, ACTIVE (No need for a DEFERRED status! YAY!)
--Also maybe a DELETED status.

-Operating status on other entities
--ACTIVE or ONLINE, DEGRADED, INACTIVE or OFFLINE
--Pools and Members
--Listeners have been mentioned but I'd like to hear more details on that.

-Adding a status_description field, or something similar.  Would only exist on the
loadbalancer entity if loadbalancer is the only top level object.

Thanks,
Brandon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-15 Thread Brandon Logan
Yeah, need details on that.  Maybe he's talking about having haproxy
listen on many IPs and ports, each one being a separate frontend
section in the haproxy config, with each mapped to its own
default_backend.

Even if that is the case, the load balancer + listener would still make
up one of those frontends so the mapping would still be correct.
Though, maybe a different structure would make more sense if that is the
case.
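
If it helps the discussion, the multi-frontend layout speculated about here
might look roughly like this in a single haproxy config (addresses and
section names are invented purely for illustration):

```
# Sketch: two frontend sections bound to the same VIP on different ports,
# each mapped to its own default_backend, all in one haproxy instance.
frontend listener_http
    bind 203.0.113.10:80
    default_backend pool_http

frontend listener_https
    bind 203.0.113.10:443
    default_backend pool_https

backend pool_http
    server member1 10.0.0.5:8080 check

backend pool_https
    server member2 10.0.0.6:8443 check
```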

Also, I've created a WIP review of the initial database structure:
https://review.openstack.org/#/c/114671/

Added my own comments so everyone please look at that.  Stephen, if you
could comment on what German mentioned that'd be great.

Have a good weekend!

-Brandon

On Fri, 2014-08-15 at 20:34 +, Eichberger, German wrote:
 --Basically no shareable entities.
 +1
 
 That will make me insanely happy :-)
 
 Regarding Listeners: I was assuming that a LoadBalancer would map to an 
 haproxy instance - and a listener would be part of that haproxy. But I heard 
 Stephen say that this is not so clear cut. So maybe listeners map to haproxy 
 instances...
 
 German
 
 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
 Sent: Thursday, August 14, 2014 10:17 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Octavia] Object Model and DB Structure
 
 So I've been assuming that the Octavia object model would be an exact copy of 
 the neutron lbaas one with additional information for Octavia.
 However, after thinking about it I'm not sure this is the right way to go 
 because the object model in neutron lbaas may change in the future, and 
 Octavia can't just change its object model when neutron lbaas/openstack 
 lbaas changes its object model.  So if there are any lessons learned we 
 would like to apply to Octavia's object model now is the time.
 
 Entity name changes are also on the table if people don't really like some of 
 the names.  Even adding new entities or removing entities if there are good 
 reasons isn't out of the question.
 
 Anyway here are a few of my suggestions.  Please add on to this if you want.  
 Also, just flat out tell me I'm wrong on some of these suggestions if you 
 feel as such.
 
 A few improvements I'd suggest (using the current entity names):
 -A real root object that is the only top level object (loadbalancer).
 --This would be a 1:M relationship with Listeners, but Listeners would only be 
 children of loadbalancers.
 --Pools, Members, and Health Monitors would follow the same workflow.
 --Basically no shareable entities.
 
 -Provisioning status only on the root object (loadbalancer).
 --PENDING_CREATE, PENDING_UPDATE, PENDING_DELETE, ACTIVE (No need for a DEFERRED status! YAY!)
 --Also maybe a DELETED status.
 
 -Operating status on other entities
 --ACTIVE or ONLINE, DEGRADED, INACTIVE or OFFLINE
 --Pools and Members
 --Listeners have been mentioned but I'd like to hear more details on that.
 
 -Adding a status_description field, or something similar.  Would only exist 
 on the loadbalancer entity if loadbalancer is the only top level object.
 
 Thanks,
 Brandon


Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-15 Thread Stephen Balukoff
Hi folks,

I'm OK with going with no shareable child entities (Listeners, Pools,
Members, TLS-related objects, L7-related objects, etc.). This will simplify
a lot of things (like status reporting), and we can probably safely work
under the assumption that any user who has a use case in which a shared
entity is useful is probably also technically savvy enough to not only be
able to manage consistency problems themselves, but is also likely to want
to have that level of control.

Also, an haproxy instance should map to a single listener. This makes
management of the configuration template simpler and the behavior of a
single haproxy instance more predictable. Also, when it comes to
configuration updates (as will happen, say, when a new member gets added to
a pool), it's less risky and error prone to restart the haproxy instance
for just the affected listener, and not for all listeners on the Octavia
VM. The only down-sides I see are that we consume slightly more memory, we
don't have the advantage of a shared SSL session cache (probably doesn't
matter for 99.99% of sites using TLS anyway), and certain types of
persistence wouldn't carry over between different listeners if they're
implemented poorly by the user. :/  (In other words, negligible down-sides
to this.)

Other upsides: This allows us to set different global haproxy settings
differently per listener as appropriate. (ex. It might make sense to have
one of the several forms of keepalive enabled for the TERMINATED_HTTPS
listener for performance reasons, but disable keepalive for the HTTP
listener for different performance reasons.)

I do want to note though, that this also affects the discussion on statuses:

On the statuses:  If we're using a separate haproxy instance per listener,
I think that probably both the loadbalancer and listener objects have
different needs here that are appropriate. Specifically, this is what I'm
thinking, regarding the statuses and what they mean:

Loadbalancer:
  PENDING_CREATE: VIP address is being assigned (reserved, or put on a
port) in Neutron, or is being allocated on Octavia VMs.
  ACTIVE: VIP address is up and running on at least one Octavia VM (ex. a
ping check would succeed, assuming no blocking firewall rules)
  PENDING_DELETE: VIP address is being removed from Octavia VM(s) and
reservation in Neutron released
 (Is there any need for a PENDING_UPDATE status for a loadbalancer?
Shouldn't the vip_address be immutable after it's created?)

Listener:
 PENDING_CREATE: A new Listener haproxy configuration is being created on
Octavia VM(s)
 PENDING_UPDATE: An existing Listener haproxy configuration is being
updated on Octavia VM(s)
 PENDING_DELETE: Listener haproxy configuration is about to be deleted off
associated Octavia VM(s)
 ACTIVE: haproxy Listener is up and running (ex. responds to TCP SYN check).
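
As a rough sketch only (not actual Octavia code), the provisioning statuses
proposed above could be modeled like this; the asymmetry between the two
enums reflects the point that a loadbalancer's vip_address would be
immutable, so it never needs a PENDING_UPDATE:

```python
# Sketch of the provisioning statuses proposed in this thread.
from enum import Enum

class LoadBalancerStatus(Enum):
    PENDING_CREATE = "PENDING_CREATE"  # VIP being assigned in Neutron
    ACTIVE = "ACTIVE"                  # VIP up on at least one Octavia VM
    PENDING_DELETE = "PENDING_DELETE"  # VIP being removed, reservation released

class ListenerStatus(Enum):
    PENDING_CREATE = "PENDING_CREATE"  # haproxy config being created
    PENDING_UPDATE = "PENDING_UPDATE"  # haproxy config being updated
    PENDING_DELETE = "PENDING_DELETE"  # haproxy config about to be deleted
    ACTIVE = "ACTIVE"                  # responds to TCP SYN check

# Only listeners have an update-in-progress state under this proposal.
assert "PENDING_UPDATE" in ListenerStatus.__members__
assert "PENDING_UPDATE" not in LoadBalancerStatus.__members__
```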

I don't think that these kinds of status are useful / appropriate for Pool,
Member, Healthmonitor, TLS certificate id, or L7 Policy / Rule objects, as
ultimately this boils down to configuration lines in an haproxy config
somewhere, and really the Listener status is what will be affected when
things are changed.

I'm basically in agreement with Brandon on his points with operational
status, though I would like to see these broken out into their various
meanings for the different object types. I also think some object types
won't need an operational status (eg. L7 Policies, healthmonitors, etc.)
since these essentially boil down to lines in an haproxy configuration file.

Does this make sense?

Stephen



On Fri, Aug 15, 2014 at 3:10 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 Yeah, need details on that.  Maybe he's talking about having haproxy
 listen on many IPs and ports, each one being a separate frontend
 section in the haproxy config, with each mapped to its own
 default_backend.

 Even if that is the case, the load balancer + listener would still make
 up one of those frontends so the mapping would still be correct.
 Though, maybe a different structure would make more sense if that is the
 case.

 Also, I've created a WIP review of the initial database structure:
 https://review.openstack.org/#/c/114671/

 Added my own comments so everyone please look at that.  Stephen, if you
 could comment on what German mentioned that'd be great.

 Have a good weekend!

 -Brandon

 On Fri, 2014-08-15 at 20:34 +, Eichberger, German wrote:
  --Basically no shareable entities.
  +1
 
  That will make me insanely happy :-)
 
  Regarding Listeners: I was assuming that a LoadBalancer would map to an
 haproxy instance - and a listener would be part of that haproxy. But I
 heard Stephen say that this is not so clear cut. So maybe listeners map to
 haproxy instances...
 
  German
 
  -Original Message-
  From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
  Sent: Thursday, August 14, 2014 10:17 PM
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [Octavia] Object Model and DB Structure

Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-15 Thread Brandon Logan
Comments in-line

On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
 Hi folks,
 
 
 I'm OK with going with no shareable child entities (Listeners, Pools,
 Members, TLS-related objects, L7-related objects, etc.). This will
 simplify a lot of things (like status reporting), and we can probably
 safely work under the assumption that any user who has a use case in
 which a shared entity is useful is probably also technically savvy
 enough to not only be able to manage consistency problems themselves,
 but is also likely to want to have that level of control.
 
 
 Also, an haproxy instance should map to a single listener. This makes
 management of the configuration template simpler and the behavior of a
 single haproxy instance more predictable. Also, when it comes to
 configuration updates (as will happen, say, when a new member gets
 added to a pool), it's less risky and error prone to restart the
 haproxy instance for just the affected listener, and not for all
 listeners on the Octavia VM. The only down-sides I see are that we
 consume slightly more memory, we don't have the advantage of a shared
 SSL session cache (probably doesn't matter for 99.99% of sites using
 TLS anyway), and certain types of persistence wouldn't carry over
 between different listeners if they're implemented poorly by the
 user. :/  (In other words, negligible down-sides to this.)

This is fine by me for now, but I think this might be something we can
revisit later after we have the advantage of hindsight.  Maybe a
configurable option.

 Other upsides: This allows us to set different global haproxy
 settings differently per listener as appropriate. (ex. It might make
 sense to have one of the several forms of keepalive enabled for the
 TERMINATED_HTTPS listener for performance reasons, but disable
 keepalive for the HTTP listener for different performance reasons.)
 
 
 I do want to note though, that this also affects the discussion on
 statuses:
 
 
 On the statuses:  If we're using a separate haproxy instance per
 listener, I think that probably both the loadbalancer and listener
 objects have different needs here that are appropriate. Specifically,
 this is what I'm thinking, regarding the statuses and what they mean:
 
 
 Loadbalancer:
   PENDING_CREATE: VIP address is being assigned (reserved, or put on a
 port) in Neutron, or is being allocated on Octavia VMs.
   ACTIVE: VIP address is up and running on at least one Octavia VM
 (ex. a ping check would succeed, assuming no blocking firewall rules)
   PENDING_DELETE: VIP address is being removed from Octavia VM(s) and
 reservation in Neutron released
  (Is there any need for a PENDING_UPDATE status for a loadbalancer?
 Shouldn't the vip_address be immutable after it's created?)
 
 
 Listener:
  PENDING_CREATE: A new Listener haproxy configuration is being created
 on Octavia VM(s)
  PENDING_UPDATE: An existing Listener haproxy configuration is being
 updated on Octavia VM(s)
  PENDING_DELETE: Listener haproxy configuration is about to be deleted
 off associated Octavia VM(s)
  ACTIVE: haproxy Listener is up and running (ex. responds to TCP SYN
 check).

I have no problem with this. However, one thing I often do think about
is that it's not really ever going to be load balancing anything with
just a load balancer and listener.  It has to have a pool and members as
well.  So having ACTIVE on the load balancer and listener, and still not
really load balancing anything is a bit odd.  Which is why I'm in favor
of only doing creates by specifying the entire tree in one call
(loadbalancer -> listeners -> pool -> members).  Feel free to disagree with me
on this because I know this is not something everyone likes.  I'm sure I am
forgetting something that makes this a hard thing to do.  But if this
were the case, then I think only having the provisioning status on the
load balancer makes sense again.  The reason I am advocating for the
provisioning status on the load balancer is because it is still simpler,
and only one place to look to see if everything were successful or if
there was an issue.

Again though, what you've proposed I am entirely fine with because it
works great with having to create a load balancer first, then listener,
and so forth.  It would also work fine with a single create call as
well.
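
A single-call create of the sort argued for here might accept a nested
payload along these lines (field names and addresses are purely
illustrative, not a real API):

```python
# Hypothetical one-shot create payload: the loadbalancer is the only root
# object, and every child entity is owned by exactly one parent (nothing
# shared), matching the "no shareable entities" proposal.
payload = {
    "loadbalancer": {
        "vip_address": "203.0.113.10",
        "listeners": [{
            "protocol": "HTTP",
            "protocol_port": 80,
            "default_pool": {
                "lb_algorithm": "ROUND_ROBIN",
                "members": [
                    {"address": "10.0.0.5", "protocol_port": 8080},
                    {"address": "10.0.0.6", "protocol_port": 8080},
                ],
            },
        }],
    }
}

def member_count(lb):
    # Walk the whole tree, exactly as a single create call would have to.
    return sum(len(l["default_pool"]["members"]) for l in lb["listeners"])

print(member_count(payload["loadbalancer"]))  # prints 2
```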
 
 I don't think that these kinds of status are useful / appropriate for
 Pool, Member, Healthmonitor, TLS certificate id, or L7 Policy / Rule
 objects, as ultimately this boils down to configuration lines in an
 haproxy config somewhere, and really the Listener status is what will
 be affected when things are changed.

Total agreement on this.
 
 I'm basically in agreement with Brandon on his points with operational
 status, though I would like to see these broken out into their various
 meanings for the different object types. I also think some object
 types won't need an operational status (eg. L7 Policies,
 healthmonitors, etc.) since these essentially boil down to lines in an
 haproxy