Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Eugene Nikanorov
Hi Stephen,

Some comments on comments on comments:

On Fri, May 9, 2014 at 10:25 PM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 Hi Eugene,

 This assumes that 'VIP' is an entity that can contain both an IPv4 address
 and an IPv6 address. This is how it is in the API proposal and
 corresponding object model that I suggested, but it is a slight
 re-definition of the term virtual IP as it's used in the rest of the
 industry. (And again, we're not yet in agreement that 'VIP' should actually
 contain two ip addresses like this.)

That seems a minor issue to me. Maybe we can just introduce a statement
that a VIP has an L2 endpoint first of all?

In my mind, the main reasons I would like to see the container object are:


- It solves the colocation / apolocation (or affinity / anti-affinity)
problem for VIPs in a way that is much more intuitive to understand and
less confusing for users than either the hints included in my API, or
something based off the nova blueprint for doing the same for virtual
servers/containers. (Full disclosure: There probably would still be a need
for some anti-affinity logic at the logical load balancer level as well,
though at this point it would be an operator concern only and expressed to
the user in the flavor of the logical load balancer object, and probably
be associated with different billing strategies. The user wants a
dedicated physical load balancer? Then he should create one with this
flavor, and note that it costs this much more...)

 In fact, that can be solved by scheduling, without letting the user control
that. The Flavor Framework will be able to address that.


- From my experience, users are already familiar with the concept of
what a logical load balancer actually is (ie. something that resembles a
physical or virtual appliance from their perspective). So this probably
fits into their view of the world better.

 That might be so, but apparently it goes in the opposite direction from
Neutron in general (i.e. toward more abstraction).


- It makes sense for Load Balancer as a Service to hand out logical
load balancer objects. I think this will aid in a more intuitive
understanding of the service for users who otherwise don't want to be
concerned with operations.
- This opens up the option for private cloud operators / providers to
bill based on number of physical load balancers used (if the logical load
balancer happens to coincide with physical load balancer appliances in
their implementation) in a way that is going to be seen as more fair and
more predictable to the user because the user has more control over it.
And it seems to me this is accomplished without producing any undue burden
on public cloud providers, those who don't bill this way, or those for whom
the logical load balancer doesn't coincide with physical load balancer
appliances.

 I don't see how 'loadbalancer' is better than 'VIP' here, other than being
a term slightly closer to 'logical loadbalancer'.



- Attaching a flavor attribute to a logical load balancer seems like
a better idea than attaching it to the VIP. What if the user wants to
change the flavor on which their VIP is deployed (ie. without changing IP
addresses)? What if they want to do this for several VIPs at once? I can
definitely see this happening in our customer base through the lifecycle of
many of our customers' applications.

 I don't see any problems with the above cases if VIP is the root object.


- Having flavors associated with load balancers and not VIPs also
allows for operators to provide a lot more differing product offerings to
the user in a way that is simple for the user to understand. For example:
   - Flavor A is the cheap load balancer option, deployed on a
   shared platform used by many tenants that has fewer guarantees around
   performance and costs X.
   - Flavor B is guaranteed to be deployed on vendor Q's Super
   Special Product (tm) but to keep down costs, may be shared with other
   tenants, though not among a single tenant's load balancers unless the
    tenant uses the same load balancer id when deploying their VIPs (ie. user
    has control of affinity among their own VIPs, but no control over whether
    affinity happens with other tenants). It may experience variable
    performance as a result, but has higher guarantees than the above and
    costs a little more.
   - Flavor C is guaranteed to be deployed on vendor P's Even
    Better Super Special Product (tm) and is also guaranteed not to be
    shared among tenants. This is essentially the dedicated load balancer option
   that gets you the best guaranteed performance, but costs a lot more than
   the above.
   - ...and so on.

 Right, that's how flavors are supposed to work, but that's again unrelated
to whether we make the VIP or the loadbalancer our root object.



Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Stephen Balukoff
Hi Eugene,

A couple notes of clarification:

On Sat, May 10, 2014 at 2:30 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:


 On Fri, May 9, 2014 at 10:25 PM, Stephen Balukoff 
 sbaluk...@bluebox.net wrote:

 Hi Eugene,

 This assumes that 'VIP' is an entity that can contain both an IPv4
 address and an IPv6 address. This is how it is in the API proposal and
 corresponding object model that I suggested, but it is a slight
 re-definition of the term virtual IP as it's used in the rest of the
 industry. (And again, we're not yet in agreement that 'VIP' should actually
 contain two ip addresses like this.)

 That seems a minor issue to me. Maybe we can just introduce a statement
 that a VIP has an L2 endpoint first of all?


Well, sure, except the user is going to want to know what the IP
address(es) are for obvious reasons, and expect them to be taken from
subnet(s) the user specifies. Asking the user to provide a Neutron
network_id (ie. where we'll attach the L2 interface) isn't definitive here
because a neutron network can contain many subnets, and these subnets might
be either IPv4 or IPv6. Asking the user to provide an IPv4 and IPv6 subnet
might cause us problems if the IPv4 subnet provided and the IPv6 subnet
provided are not on the same neutron network. In that scenario, we'd need
two L2 interfaces / neutron ports to service this, and of course some way
to record this information in the model.
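
As a rough sketch of the wiring question (assuming a python-neutronclient-style
handle and placeholder subnet ids; the variable names are mine, not from either
proposal):

    # Sketch: how many L2 ports would a dual-stack VIP need?
    v4_net = neutron.show_subnet(ipv4_subnet_id)["subnet"]["network_id"]
    v6_net = neutron.show_subnet(ipv6_subnet_id)["subnet"]["network_id"]

    if v4_net == v6_net:
        ports_needed = 1  # one neutron port can carry both fixed IPs
    else:
        ports_needed = 2  # separate networks force two L2 interfaces,
                          # and the object model must record both ports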

We could introduce the restriction that all of the IP addresses / subnets
associated with the VIP must come from the same neutron network, but this
raises the question: why? Why shouldn't a VIP be allowed to connect to
multiple neutron networks to service all its front-end IPs?

If the answer to the above is "there's no reason" or "because it's easier
to implement", then I think these are not good reasons to apply these
restrictions. If the answer to the above is "because nobody deploys their
IPv4 and IPv6 networks separately like that", then I think you are unfamiliar
with the environments in which many operators must survive, or with the
requirements imposed on us by our users. :P

In any case, if you agree that in the IPv4 + IPv6 case it might make sense
to allow for multiple L2 interfaces on the VIP, doesn't it then also make
more sense to define a VIP as a single IP address (ie. what the rest of the
industry calls a VIP), and call the groupings of all these IP addresses
together a 'load balancer'? At that point the number of L2 interfaces
required to service all the IPs in this VIP grouping becomes an
implementation problem.
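
As an illustration of that re-grouping (the field names here are invented for
the sketch, not taken from either proposal):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Vip:
        """A VIP in the industry sense: exactly one front-end address."""
        ip_address: str
        subnet_id: str

    @dataclass
    class LoadBalancer:
        """The grouping of VIPs; how many L2 ports back it is left as an
        implementation detail."""
        name: str
        vips: List[Vip] = field(default_factory=list)

    lb = LoadBalancer("web-lb", vips=[
        Vip("203.0.113.10", "v4-subnet-id"),
        Vip("2001:db8::10", "v6-subnet-id"),
    ])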

For what it's worth, I do go back and forth on my opinion on this one, as
you can probably tell. I'm trying to get us to a model that is first and
foremost simple to understand for users, and relatively easy for operators
and vendors to implement.


 In my mind, the main reasons I would like to see the container object are:


- It solves the colocation / apolocation (or affinity / anti-affinity)
problem for VIPs in a way that is much more intuitive to understand and
less confusing for users than either the hints included in my API, or
something based off the nova blueprint for doing the same for virtual
servers/containers. (Full disclosure: There probably would still be a need
for some anti-affinity logic at the logical load balancer level as well,
though at this point it would be an operator concern only and expressed to
the user in the flavor of the logical load balancer object, and probably
be associated with different billing strategies. The user wants a
dedicated physical load balancer? Then he should create one with this
flavor, and note that it costs this much more...)

 In fact, that can be solved by scheduling, without letting the user
 control that. The Flavor Framework will be able to address that.


I never said it couldn't be solved by scheduling. In fact, my original API
proposal solves it this way!

I was saying that it's *much more intuitive to understand and less
confusing for users* to do it using a logical load balancer construct. I've
yet to see a good argument for why working with colocation_hints /
apolocation_hints or affinity grouping rules (akin to the nova model) is
*easier* *for the user to understand* than working with a logical load
balancer model.
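
For comparison, here is a hedged sketch of the two user flows (the client
calls and field names below are invented for illustration only):

    # (a) Hint-based affinity: the user must wire VIPs to each other.
    vip_a = client.create_vip({"subnet_id": SUBNET_ID})
    vip_b = client.create_vip({"subnet_id": SUBNET_ID,
                               "colocation_hints": [vip_a["id"]]})

    # (b) Container-based affinity: grouping is implied by sharing one
    #     logical load balancer; no extra hint vocabulary to learn.
    lb = client.create_loadbalancer({"flavor": "shared"})
    vip_a = client.create_vip({"subnet_id": SUBNET_ID,
                               "loadbalancer_id": lb["id"]})
    vip_b = client.create_vip({"subnet_id": SUBNET_ID,
                               "loadbalancer_id": lb["id"]})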

And by the way--  maybe you didn't see this in my example below, but just
because a user is using separate load balancer objects doesn't mean the
vendor or operator needs to implement these on separate pieces of hardware.
Whether or not the operator decides to let the user have this level of
control will be expressed in the flavor.
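
A toy sketch of that flavor-driven decision on the backend (every name below
is invented; a real scheduler would be far more involved):

    SHARED_APPLIANCES = ["appliance-1", "appliance-2"]

    def appliance_load(name: str) -> int:
        return 0  # stub: a real scheduler would query utilization here

    def place_loadbalancer(tenant_id: str, flavor: dict) -> str:
        """The flavor, not the user, decides whether two logical load
        balancers may share hardware."""
        if flavor.get("dedicated", False):
            # dedicated flavor: appliance reserved for this tenant's lb
            return "dedicated-appliance-for-" + tenant_id
        # shared flavor: pack many logical lbs onto shared appliances
        return min(SHARED_APPLIANCES, key=appliance_load)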



- From my experience, users are already familiar with the concept of
what a logical load balancer actually is (ie. something that resembles a
physical or virtual appliance from their perspective). So this probably
fits into their view of the world better.

 That might be so, but apparently it goes in the opposite direction from
 Neutron in general (i.e. toward more abstraction).

Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Brandon Logan
Hi Sam,
I do not have access to those statistics.  Though, I can say that with
our current networking infrastructure, customers that have multiple IPv4
or multiple IPv6 VIPs are in the minority. However, we have received
feature requests to allow VIPs on our two main networks (public and
private).  This is mainly because we do not charge for bandwidth on
the private network, provided the client resides in the same datacenter
as the load balancer (otherwise, it's not accessible by the client).

Having said that, I would still argue that the main reason for having a
load balancer to many VIPs to many listeners is user expectations. A
user expects to configure a load balancer, send that configuration to
our service, and then get the details of that fully configured load
balancer back.  Is your argument either 1) a user does not
expect LBaaS to accept and return load balancers, or 2) even if a user
expects this, it's not that important a detail?
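
To make that expectation concrete, a single-call create would look roughly
like the sketch below (the payload shape is invented for illustration and is
not either proposal verbatim):

    # One POST carries the whole tree; the response echoes the same tree
    # back with ids and provisioning status filled in.
    request = {
        "loadbalancer": {
            "name": "prod-web",
            "vips": [{"subnet_id": "PUBLIC-SUBNET-ID"}],
            "listeners": [{
                "protocol": "HTTP",
                "protocol_port": 80,
                "pool": {
                    "lb_algorithm": "ROUND_ROBIN",
                    "members": [
                        {"address": "10.0.0.4", "protocol_port": 8080},
                        {"address": "10.0.0.5", "protocol_port": 8080},
                    ],
                },
            }],
        }
    }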

Thanks,
Brandon

On Fri, 2014-05-09 at 20:37 +, Samuel Bercovici wrote:
 It boils down to two aspects:
 
 1.  How common is it for a tenant to care about affinity, or to have
 more than a single VIP, used in a way that adding an additional
 (mandatory) construct makes sense for them to handle?
 
 For example, if 99% of users do not care about affinity or will only
 use a single VIP (with multiple listeners), does adding an additional
 object that tenants need to know about make sense?
 
 2.  Scheduling this so that it can be handled efficiently by
 different vendors and SLAs. We can elaborate on this F2F next week.
 
 Can providers share their statistics to help us understand how
 common those use cases are?
 
 Regards,
 
 -Sam.
 
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
 Sent: Friday, May 09, 2014 9:26 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] API proposal review
 thoughts
 
 Hi Eugene,
 
 This assumes that 'VIP' is an entity that can contain both an IPv4
 address and an IPv6 address. This is how it is in the API proposal and
 corresponding object model that I suggested, but it is a slight
 re-definition of the term virtual IP as it's used in the rest of the
 industry. (And again, we're not yet in agreement that 'VIP' should
 actually contain two ip addresses like this.)
 
 In my mind, the main reasons I would like to see the container object
 are:
 
    * It solves the colocation / apolocation (or affinity /
 anti-affinity) problem for VIPs in a way that is much more
 intuitive to understand and less confusing for users than
 either the hints included in my API, or something based off
 the nova blueprint for doing the same for virtual
 servers/containers. (Full disclosure: There probably would
 still be a need for some anti-affinity logic at the logical
 load balancer level as well, though at this point it would be
 an operator concern only and expressed to the user in the
 flavor of the logical load balancer object, and probably be
 associated with different billing strategies. The user wants
 a dedicated physical load balancer? Then he should create one
 with this flavor, and note that it costs this much more...)
   * From my experience, users are already familiar with the
 concept of what a logical load balancer actually is (ie.
 something that resembles a physical or virtual appliance from
 their perspective). So this probably fits into their view of
 the world better.
   * It makes sense for Load Balancer as a Service to hand out
 logical load balancer objects. I think this will aid in a more
 intuitive understanding of the service for users who otherwise
 don't want to be concerned with operations.
   * This opens up the option for private cloud operators /
 providers to bill based on number of physical load balancers
 used (if the logical load balancer happens to coincide with
 physical load balancer appliances in their implementation) in
 a way that is going to be seen as more fair and more
 predictable to the user because the user has more control
 over it. And it seems to me this is accomplished without
 producing any undue burden on public cloud providers, those
 who don't bill this way, or those for whom the logical load
 balancer doesn't coincide with physical load balancer
 appliances.
   * Attaching a flavor attribute to a logical load balancer
 seems like a better idea than attaching it to the VIP. What if
 the user wants to change the flavor on which their VIP is
 deployed (ie. without changing IP addresses)? What if they
 want to do this for several VIPs at once? I can

Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Brandon Logan
On Sat, 2014-05-10 at 09:50 -0700, Stephen Balukoff wrote:

 
 Correct me if I'm wrong, but wasn't "the existing API is confusing and
 difficult to use" one of the major complaints with it (as voiced in
 the IRC meeting, say on April 10th, starting around... I
 dunno... 14:13 GMT)?  If that's the case, then the user experience
 seems like an important concern, and possibly trumps some vaguely
 defined project direction which apparently doesn't take this into
 account if it's vetoing an approach which is more easily done /
 understood by the user.

+1 Stephen

This is what the API we are proposing accomplishes.  It is not
confusing.  Does having a root object of VIP work? Yes, but anything can
be made to work.  It's more about what makes sense.  To me, going with an
API similar to the existing one does not address this issue at all.
Also, what happened to a totally brand new API and object model in
Neutron v3?  I thought that was still on the table, and it's the perfect
time to create a new load balancer API, since backwards compatibility is
not expected.

I'd also like to ask why it seems to not matter at all if most (if not
all) operators like an API proposal?

Thanks,
Brandon Logan



Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Jay Pipes

On 05/09/2014 04:37 PM, Samuel Bercovici wrote:

It boils down to two aspects:

1. How common is it for a tenant to care about affinity or have more than a
single VIP used in a way that adding an additional (mandatory) construct
makes sense for them to handle?

For example, if 99% of users do not care about affinity or will only use
a single VIP (with multiple listeners), does adding an
additional object that tenants need to know about make sense?


Yes, it does make sense, because it is the difference between an API 
that users intuitively understand and an API that nobody uses because it 
doesn't model what the user is intuitively thinking about.


Best,
-jay



Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Eugene Nikanorov
Hi Stephen,

Well, sure, except the user is going to want to know what the IP
 address(es) are for obvious reasons, and expect them to be taken from
 subnet(s) the user specifies. Asking the user to provide a Neutron
 network_id (ie. where we'll attach the L2 interface) isn't definitive here
 because a neutron network can contain many subnets, and these subnets might
 be either IPv4 or IPv6. Asking the user to provide an IPv4 and IPv6 subnet
 might cause us problems if the IPv4 subnet provided and the IPv6 subnet
 provided are not on the same neutron network. In that scenario, we'd need
 two L2 interfaces / neutron ports to service this, and of course some way
 to record this information in the model.

Right, that's why a VIP needs to have a clear definition in relation to its
L2 port: we allow one L2 port per VIP, hence only addresses from subnets on
one network are allowed. That seems to be a fair limitation.
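
In code, that limitation reduces to a single validation (a sketch, assuming
a python-neutronclient-style handle):

    def validate_vip_subnets(neutron, subnet_ids):
        """Enforce 'one L2 port per VIP': every subnet the VIP draws an
        address from must belong to the same neutron network."""
        networks = {neutron.show_subnet(sid)["subnet"]["network_id"]
                    for sid in subnet_ids}
        if len(networks) != 1:
            raise ValueError("VIP subnets span multiple networks; "
                             "a single neutron port cannot serve them")
        return networks.pop()  # the network to create the port on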

We could introduce the restriction that all of the IP addresses / subnets
 associated with the VIP must come from the same neutron network,

Right.

 but this raises the question: why? Why shouldn't a VIP be allowed to
 connect to multiple neutron networks to service all its front-end IPs?


 If the answer to the above is "there's no reason" or "because it's easier
 to implement", then I think these are not good reasons to apply these
 restrictions. If the answer to the above is "because nobody deploys their
 IPv4 and IPv6 networks separately like that", then I think you are unfamiliar
 with the environments in which many operators must survive, or with the
 requirements imposed on us by our users. :P

I approach this question from the opposite side: if we allow this, we're
exposing a 'virtual appliance' API, where the user fully controls how the lb
instance is wired, how many VIPs it has, etc.
As I said in another thread, that is the 'virtual functions vs. virtualized
appliance' question, which is about the general goal of the Neutron project.
Just because something seems to map more easily onto physical infrastructure
(or onto a concept of physical infra) doesn't mean that the cloud API needs
to follow that.


 In any case, if you agree that in the IPv4 + IPv6 case it might make sense
 to allow for multiple L2 interfaces on the VIP, doesn't it then also make
 more sense to define a VIP as a single IP address (ie. what the rest of the
 industry calls a VIP), and call the groupings of all these IP addresses
 together a 'load balancer'? At that point the number of L2 interfaces
 required to service all the IPs in this VIP grouping becomes an
 implementation problem.

 For what it's worth, I do go back and forth on my opinion on this one, as
 you can probably tell. I'm trying to get us to a model that is first and
 foremost simple to understand for users, and relatively easy for operators
 and vendors to implement.

Users are different, and you are apparently considering those who understand
networks and load balancing.

I was saying that it's *much more intuitive to understand and less
 confusing for users* to do it using a logical load balancer construct.
 I've yet to see a good argument for why working with colocation_hints /
 apolocation_hints or affinity grouping rules (akin to the nova model) is
 *easier* *for the user to understand* than working with a logical load
 balancer model.

Something done by hand may be much more intuitive than something performed
by magic behind scheduling, flavors, etc.
But that doesn't seem like a good reason to me to put the user in charge of
defining resource placement.



 And by the way--  maybe you didn't see this in my example below, but just
 because a user is using separate load balancer objects doesn't mean the
 vendor or operator needs to implement these on separate pieces of hardware.
 Whether or not the operator decides to let the user have this level of
 control will be expressed in the flavor.

Yes, and without the container the user has less than that: only balancing
endpoints (VIPs), without direct control over how they are grouped within
instances.

That might be so, but apparently it goes in the opposite direction from
 Neutron in general (i.e. toward more abstraction)


 Doesn't more abstraction give vendors and operators more flexibility in
 how they implement it? Isn't that seen as a good thing in general? In any
 case, this sounds like your opinion more than an actual stated or implied
 agenda from the Neutron team. And even if it is an implied or stated
 agenda, perhaps it's worth revisiting the reason for having it?

I'm translating the argument of other team members and it seems valid to me.
For sure you can try to revisit those reasons ;)



 So what are the main arguments against having this container object? In
 answering this question, please keep in mind:


- If you say "implementation details", please just go ahead and be
more specific, because that's what I'm going to ask you to do anyway. If
implementation details are the concern, please follow this with a
hypothetical or concrete example as to what kinds of 

Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Eugene Nikanorov
Carlos,

The general objection is that if we don't need multiple VIPs (different IPs,
not just TCP ports) per single logical loadbalancer, then we don't need a
loadbalancer object, because everything else is addressed by the VIP playing
the role of the loadbalancer.
Regarding conclusions - I think we've heard enough negative opinions on the
idea of a 'container' to at least postpone this discussion to the point when
we get some important use cases that could not be addressed by 'VIP as
loadbalancer'.

Eugene.

On Fri, May 9, 2014 at 8:33 AM, Carlos Garza carlos.ga...@rackspace.com wrote:


  On May 8, 2014, at 2:45 PM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

  Hi Carlos,

    Are you saying that we should have a loadbalancer resource
  only in the case where we want it to span multiple L2 networks, as if it
  were a router? I don't see how you arrived at that conclusion. Can you
  explain further.

  No, I mean that a loadbalancer instance is needed if we need several
  *different* L2 endpoints for several front ends.
  That's basically 'virtual appliance' functionality that we discussed at
  today's meeting.


  From looking at the IRC log, it looks like nothing conclusive came out
  of the meeting. I don't understand a lot of the conclusions you arrive at.
  For example, you're rejecting the notion of a loadbalancer concrete object
  unless it's needed to include multi-L2-network support. Will you make an
  honest effort to describe your objections here on the ML? Because if we
  can't resolve it here, it's going to spill over into the summit. I
  certainly don't want this to dominate the summit.



Eugene.




Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Eugene Nikanorov
Hi Brandon

Let me know if I am misunderstanding this, and please explain it
  further.
 A single neutron port can have many fixed ips on many subnets.  Since
 this is the case you're saying that there is no need for the API to
 define multiple VIPs since a single neutron port can represent all the
 IPs that all the VIPs require?

Right, if you want to have both IPv4 and IPv6 addresses on the VIP, then
it's possible with a single neutron port.
So multiple VIPs are not needed for this case.
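
For example (a sketch using python-neutronclient; the ids are placeholders),
one port carrying both address families would be created roughly like this:

    # One neutron port, two fixed IPs: an IPv4 and an IPv6 address drawn
    # from two subnets of the same network.
    port = neutron.create_port({"port": {
        "network_id": network_id,
        "fixed_ips": [
            {"subnet_id": ipv4_subnet_id},
            {"subnet_id": ipv6_subnet_id},
        ],
    }})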

Eugene.


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Carlos Garza

On May 9, 2014, at 3:26 AM, Eugene Nikanorov 
enikano...@mirantis.com
 wrote:

Carlos,

The general objection is that if we don't need multiple VIPs (different IPs, not 
just TCP ports) per single logical loadbalancer, then we don't need a 
loadbalancer object, because everything else is addressed by the VIP playing the 
role of the loadbalancer.

    That's pretty much our objection. You seem to be masquerading VIPs as if 
they were loadbalancers. APIs that don't model reality are not a good fit as 
far as we're concerned.

    We do not recognize the logical connection in "we will use a 
loadbalancer top-level object if and only if it will contain multiple ports or 
VIPs". We view this as a straw-man attempt to get those in favor of a 
loadbalancer top-level object to somehow re-form their argument into one that 
we need multiple ports, VIPs, etc., which isn't what we are arguing at all.

I have no doubt that even if we ever did have a use case for this, you'll just 
reject the use case or come up with another bizarre constraint as to why we 
don't need a loadbalancer top-level object.
That was never the argument we were trying to make in the first place.

Regarding conclusions - I think we've heard enough negative opinions on the 
idea of a 'container' to at least postpone this discussion to the point when 
we get some important use cases that could not be addressed by 'VIP as 
loadbalancer'.

    We haven't really heard any negative opinions other than what is coming 
from you and Sam. And it looks like Sam's objection is that he has predefined 
physical load balancers already sitting on a rack. For example, if he has a rack 
of 8 physical load balancers, then he only has 8 loadbalancer_ids that are 
shared by many users, and for some reason this is locking him into the belief 
that he shouldn't expose loadbalancer objects directly to the customer. This is 
somewhat alien to us, as we also have physicals in our CLB1.0 product, but we 
still use the notion of loadbalancer objects that are shared across a single 
Stingray host. We don't equate a loadbalancer with an actual Stingray host.

    If Sam needs help wrapping a virtual loadbalancer object in his API, let us 
know; we would like to help with that, as we firmly know it's awkward to take 
something such as Neutron/LBaaS and interpret it to be "Virtual IPs as a 
Service".  We've done that with our API in CLB1.0.

Carlos.

Eugene.

On Fri, May 9, 2014 at 8:33 AM, Carlos Garza 
carlos.ga...@rackspace.com wrote:

On May 8, 2014, at 2:45 PM, Eugene Nikanorov 
enikano...@mirantis.com wrote:

Hi Carlos,

Are you saying that we should have a loadbalancer resource only in the 
case where we want it to span multiple L2 networks, as if it were a router? I 
don't see how you arrived at that conclusion. Can you explain further.
No, I mean that a loadbalancer instance is needed if we need several *different* 
L2 endpoints for several front ends.
That's basically 'virtual appliance' functionality that we discussed at 
today's meeting.

   From looking at the IRC log, it looks like nothing conclusive came out of the 
meeting. I don't understand a lot of the conclusions you arrive at. For example, 
you're rejecting the notion of a loadbalancer concrete object unless it's needed 
to include multi-L2-network support. Will you make an honest effort to describe 
your objections here on the ML? Because if we can't resolve it here, it's going 
to spill over into the summit. I certainly don't want this to dominate the summit.



Eugene.


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Stephen Balukoff
Hi Eugene,

This assumes that 'VIP' is an entity that can contain both an IPv4 address
and an IPv6 address. This is how it is in the API proposal and
corresponding object model that I suggested, but it is a slight
re-definition of the term virtual IP as it's used in the rest of the
industry. (And again, we're not yet in agreement that 'VIP' should actually
contain two ip addresses like this.)

In my mind, the main reasons I would like to see the container object are:


   - It solves the colocation / apolocation (or affinity / anti-affinity)
   problem for VIPs in a way that is much more intuitive to understand and
   less confusing for users than either the hints included in my API, or
   something based off the nova blueprint for doing the same for virtual
   servers/containers. (Full disclosure: There probably would still be a need
   for some anti-affinity logic at the logical load balancer level as well,
   though at this point it would be an operator concern only and expressed to
   the user in the flavor of the logical load balancer object, and probably
   be associated with different billing strategies. The user wants a
   dedicated physical load balancer? Then he should create one with this
   flavor, and note that it costs this much more...)
   - From my experience, users are already familiar with the concept of
   what a logical load balancer actually is (ie. something that resembles a
   physical or virtual appliance from their perspective). So this probably
   fits into their view of the world better.
   - It makes sense for Load Balancer as a Service to hand out logical
   load balancer objects. I think this will aid in a more intuitive
   understanding of the service for users who otherwise don't want to be
   concerned with operations.
   - This opens up the option for private cloud operators / providers to
   bill based on number of physical load balancers used (if the logical load
   balancer happens to coincide with physical load balancer appliances in
   their implementation) in a way that is going to be seen as more fair and
   more predictable to the user because the user has more control over it.
   And it seems to me this is accomplished without producing any undue burden
   on public cloud providers, those who don't bill this way, or those for whom
   the logical load balancer doesn't coincide with physical load balancer
   appliances.
   - Attaching a flavor attribute to a logical load balancer seems like a
   better idea than attaching it to the VIP. What if the user wants to change
   the flavor on which their VIP is deployed (ie. without changing IP
   addresses)? What if they want to do this for several VIPs at once? I can
   definitely see this happening in our customer base through the lifecycle of
   many of our customers' applications.
   - Having flavors associated with load balancers and not VIPs also allows
   for operators to provide a lot more differing product offerings to the user
   in a way that is simple for the user to understand. For example:
  - Flavor A is the cheap load balancer option, deployed on a
  shared platform used by many tenants that has fewer guarantees around
  performance and costs X.
  - Flavor B is guaranteed to be deployed on vendor Q's Super
  Special Product (tm) but to keep down costs, may be shared with other
  tenants, though not among a single tenant's load balancers unless the
  tenant uses the same load balancer id when deploying their VIPs (ie. user
  has control of affinity among their own VIPs, but no control over whether
  affinity happens with other tenants). It may experience variable
  performance as a result, but has higher guarantees than the
above and costs
  a little more.
  - Flavor C is guaranteed to be deployed on vendor P's Even Better
  Super Special Product (tm) and is also guaranteed not to be shared among
  tenants. This is essentially the dedicated load balancer
option that gets
  you the best guaranteed performance, but costs a lot more than the above.
  - ...and so on.
   - A logical load balancer object is a great demarcation point
   (http://en.wikipedia.org/wiki/Demarcation_point) between
   operator concerns and user concerns. It seems likely that there will be an
   operator API created, and this will need to interface with the user API at
   some well-defined interface. (If you like, I can provide a couple specific
   operator concerns which are much more easily accomplished without
   disrupting the user experience using the demarc at the 'load balancer'
   instead of at the 'VIP'.)


So what are the main arguments against having this container object? In
answering this question, please keep in mind:


   - If you say "implementation details", please just go ahead and be more
   specific, because that's what I'm going to ask you to do anyway. If
   implementation details are the concern, please follow this with a
   hypothetical or concrete example as 

Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Samuel Bercovici
It boils down to two aspects:

1.   How common is it for a tenant to care about affinity, or to have more than a 
single VIP, used in a way that adding an additional (mandatory) construct makes 
sense for them to handle?

For example, if 99% of users do not care about affinity or will only use a 
single VIP (with multiple listeners), does adding an additional 
object that tenants need to know about make sense?

2.   Scheduling this so that it can be handled efficiently by different 
vendors and SLAs. We can elaborate on this F2F next week.

Can providers share their statistics to help us understand how common 
those use cases are?

Regards,
-Sam.



From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, May 09, 2014 9:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

Hi Eugene,

This assumes that 'VIP' is an entity that can contain both an IPv4 address and 
an IPv6 address. This is how it is in the API proposal and corresponding object 
model that I suggested, but it is a slight re-definition of the term virtual 
IP as it's used in the rest of the industry. (And again, we're not yet in 
agreement that 'VIP' should actually contain two ip addresses like this.)

In my mind, the main reasons I would like to see the container object are:


  *   It solves the colocation / apolocation (or affinity / anti-affinity) 
problem for VIPs in a way that is much more intuitive to understand and less 
confusing for users than either the hints included in my API, or something 
based off the nova blueprint for doing the same for virtual servers/containers. 
(Full disclosure: There probably would still be a need for some anti-affinity 
logic at the logical load balancer level as well, though at this point it would 
be an operator concern only and expressed to the user in the flavor of the 
logical load balancer object, and probably be associated with different billing 
strategies. The user wants a dedicated physical load balancer? Then he should 
create one with this flavor, and note that it costs this much more...)
  *   From my experience, users are already familiar with the concept of what a 
logical load balancer actually is (ie. something that resembles a physical or 
virtual appliance from their perspective). So this probably fits into their 
view of the world better.
  *   It makes sense for Load Balancer as a Service to hand out logical load 
balancer objects. I think this will aid in a more intuitive understanding of 
the service for users who otherwise don't want to be concerned with operations.
  *   This opens up the option for private cloud operators / providers to bill 
based on number of physical load balancers used (if the logical load balancer 
happens to coincide with physical load balancer appliances in their 
implementation) in a way that is going to be seen as more fair and more 
predictable to the user because the user has more control over it. And it 
seems to me this is accomplished without producing any undue burden on public 
cloud providers, those who don't bill this way, or those for whom the logical 
load balancer doesn't coincide with physical load balancer appliances.
  *   Attaching a flavor attribute to a logical load balancer seems like a 
better idea than attaching it to the VIP. What if the user wants to change the 
flavor on which their VIP is deployed (ie. without changing IP addresses)? What 
if they want to do this for several VIPs at once? I can definitely see this 
happening in our customer base through the lifecycle of many of our customers' 
applications.
  *   Having flavors associated with load balancers and not VIPs also allows 
for operators to provide a lot more differing product offerings to the user in 
a way that is simple for the user to understand. For example:

 *   Flavor A is the cheap load balancer option, deployed on a shared 
platform used by many tenants that has fewer guarantees around performance and 
costs X.
 *   Flavor B is guaranteed to be deployed on vendor Q's Super Special 
Product (tm) but to keep down costs, may be shared with other tenants, though 
not among a single tenant's load balancers unless the tenant uses the same 
load balancer id when deploying their VIPs (ie. user has control of affinity 
among their own VIPs, but no control over whether affinity happens with other 
tenants). It may experience variable performance as a result, but has higher 
guarantees than the above and costs a little more.
 *   Flavor C is guaranteed to be deployed on vendor P's Even Better 
Super Special Product (tm) and is also guaranteed not to be shared among 
tenants. This is essentially the dedicated load balancer option that gets you 
the best guaranteed performance, but costs a lot more than the above.
 *   ...and so on.

  *   A logical load balancer object is a great demarcation point 
(http://en.wikipedia.org/wiki/Demarcation_point)

Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-08 Thread Adam Harwell
Just a couple of quick comments since it is super late here and I don't want to 
reply to the entire email just yet...

Firstly, I think most of us at Rackspace like the way your proposal handles L7 
(hopefully my team actually agrees and I am not speaking out of turn, but I 
definitely like it), so I wouldn't really consider that as a difference because 
I think we'd like to incorporate your method into our proposal anyway. 
Similarly, upon further review I think I would agree that our SSL cert handling 
is also a bit wonky, and could be much improved by another draft. In fact, I'd 
like to assume that what we're really discussing is making a third revision of 
the proposal, rather than whether to use one or the other verbatim.

Secondly, technical descriptions are great, but I'd like to talk about the term 
"load balancer" in a more approachable manner. I forget which thread I used 
this example in before, but to get an idea of what we mean by the term, I like 
to use it in some sentences.
"My web servers are behind a load balancer, so they can better serve traffic to 
my customers."
"I used to only have one MySQL server, but now I have three, so I put a load 
balancer in front of them to ensure they get an equal amount of traffic."
This isn't highly technical talk -- and it is definitely very generic -- but 
this is how REAL PEOPLE see the technology we're developing here. That is part 
of why the definitions we're using are so vague. I refuse to get tied down by 
an object graph full of pools and VIPs and listeners!
There are two very similar points I'd like to make here, and I feel that both 
are very important:
1. We shouldn't be looking at the current model and deciding which object is 
the root object, or what object to rename as a "loadbalancer"... That's 
totally backwards! *We don't define which object is named the "loadbalancer" by 
looking for the root object -- we define which object is the root by looking 
for the object named "loadbalancer".* I had hoped that was clear from the JSON 
examples in our API proposal, but I think maybe there was too much focus on the 
object model chart, where this isn't nearly as clearly communicated.
2. As I believe I have also said before, if I'm using "X as a Service" then I 
expect to get back an object of type X. I would be very frustrated/confused 
if, as a user, LBaaS returned me an object of type "VIP" when I POST a Create 
for my new load balancer. On this last point, I feel like I've said this enough 
times that I'm beating a dead horse...

Anyway, I should get at least a little bit of sleep tonight, so I'll see you 
all in IRC in a few hours!

  --Adam

PS: I really hope that colloquialism translates appropriately. I've got nothing 
against horses. :)

On May 7, 2014 7:44 PM, Stephen Balukoff sbaluk...@bluebox.net wrote:
Howdy y'all!

Per the e-mail I sent out earlier today, I wanted to share some thoughts on the 
API proposals from Rackspace and Blue Box that we're currently working on 
evaluating, presumably to decide which version will be the starting point from 
which we make revisions going forward.  I'll try to be relatively brief.

First, some thanks!

The Rackspace team really pulled out all the stops this last week, producing an 
abundance of documentation that very thoroughly covers a bunch of the use cases 
available at that time in excruciating detail. They've produced a glossary and 
suggested object diagram, and their proposal is actually looking pretty dang 
good in my opinion.

I'm especially interested in hearing your opinion on the stuff I'm laying out 
below-- especially if I'm misunderstanding or mis-representing your viewpoint 
on any issue, eh!

Why the review process we're using probably won't be conclusive

So, at last week's meeting we decided that the RAX team and I would work on 
producing a spreadsheet listing the use cases in question and go over how each 
of these would be accomplished using our API.

Having gone through this exercise, I see the following problems with this 
approach to evaluation:

  *   While we have thorough documentation, it's probably more than the average 
participant here is going to go through with a fine-toothed comb to find the 
subtle differences. Furthermore, there's already a huge amount of documentation 
produced, and we've only gone over about 1/5th of the use cases!
  *   Since the BBG API proposal is actually a revision of the 
Rackspace proposal in many ways, at its core, our models are actually so 
similar that the subtle differences don't come out with general / generic use 
cases in many ways. And the only use cases we've evaluated so far are pretty 
general. :/
  *   In fact, the only significant ways in which the proposals differ are:
  *   BBG proposal uses VIP as single-call interface, and it's the root of 
the object tree from the user's perspective.
  *   Rackspace proposal uses loadbalancer as single-call interface, and 
it's the root of the object tree from the user's perspective.

Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-08 Thread Eugene Nikanorov
Hi Stephen,

A couple of inline comments:


-
   BBG proposal just has attributes for both an IPv4 address and an
   IPv6 address in its VIP object. (Meaning it's slightly different than a
   VIP as people are likely to assume what that term means.)

 This is the correct approach. The VIP has a single neutron port, which may
have IP addresses on several subnets at once, so IPv4+IPv6 is easily solved
within one VIP.
I think that's the preferred way.


-

 *Maybe we should wait until the survey results are out?*
 No sense solving for use cases that nobody cares about, eh.

 *Maybe we should just look at the differences?*
 The core object models we've proposed are almost identical. Change the
  term "Listener" to "Load Balancer" in my model, and you've essentially got
 the same thing as the Rackspace model.

I guess you meant VIP, not Listener.
I think what is more important is the tree-like configuration structure.
However, having Loadbalancer as the root object vs. VIP differs in
meaning: Loadbalancer implies several L2 ports for the frontend (e.g.
multiple VIPs with their own IP addresses), while VIP implies only one L2 port.

For example, I understand the Rackspace model is using a join object
 between load balancer and vip so these can have an n:m relationship--
 and this is almost entirely to solve use case #14 in the document.

This is clearly overkill to share VIPs between loadbalancer instances.


 *We need to decide what load balancer means and go that.*
 This has been something that's come up a lot, and the more we ignore it,
 it seems to me, the more it just adds to the confusion to the discussion.

 Rackspace is defining a load balancer as: "An entity that contains
 multiple VIPs, but only one tcp/udp port and protocol"
 (http://en.wikipedia.org/wiki/Load_balancing_%28computing%29).

It may have a default pool (named just "pool" in the API object).  It also may
 have a content switching object attached that defines L7 rules.

I may have missed something; did you mean one tcp/udp port and protocol per
VIP?  Otherwise, how is that possible?


 *What does the root mean when we're looking at an object graph, not a
 tree? Or maybe the people most likely to use the single-call interface
 should have the strongest voices in deciding where it should actually be
 placed?*
 I think probably the biggest source of contention over the API proposals
 thus far are what object should be considered the root of the tree.

The 'root object' has the sole purpose of transforming an arbitrary graph of
objects into a tree.
We can't move forward without properly defining it.

This whole concept seems to strike me as odd-- because when you have a
 graph, even if it's somewhat tree-like (ie. there are leaf nodes), does the
 term "root" even apply? Can someone please tell me what criteria they're
 using when they say that one object should be a "root" and another should
 not?

The criteria are:
- the user can think of the object as a representation of a 'logical service
instance' (a logical loadbalancer)
- the workflow starts with the object's creation
- it makes sense to apply attributes like Flavor (service requirements) to
it.
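
Read as an API sketch (everything below is illustrative, and the root could
equally be the VIP or the loadbalancer), those criteria imply something like:

    # The workflow starts by creating the root object; the flavor
    # (service requirements) attaches to it, and the rest hangs off it.
    root = client.create_root({
        "name": "my-logical-lb",   # the user's 'logical service instance'
        "flavor": "gold",          # service requirements live on the root
    })
    listener = client.create_listener({"root_id": root["id"],
                                       "protocol": "HTTPS",
                                       "protocol_port": 443})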

Thanks,
Eugene.


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-08 Thread Eugene Nikanorov
Hi Adam,

My comments inline:

 1. We shouldn't be looking at the current model and deciding which object
  is the root object, or what object to rename as a "loadbalancer"... That's
  totally backwards! *We don't define which object is named the
  "loadbalancer" by looking for the root object -- we define which object is
  the root by looking for the object named "loadbalancer".* I had hoped that
  was clear from the JSON examples in our API proposal, but I think maybe
  there was too much focus on the object model chart, where this isn't nearly
  as clearly communicated.

2. As I believe I have also said before, if I'm using "X as a Service"
 then I expect to get back an object of type X. I would be very
 frustrated/confused if, as a user, LBaaS returned me an object of type
 "VIP" when I POST a Create for my new load balancer. On this last point, I
 feel like I've said this enough times that I'm beating a dead horse...

I think we definitely should be looking at the existing API/BBG proposal for
the root object.
The question about whether we need an additional 'Loadbalancer' resource or
not is not a question about terminology, so (2) is not a valid argument.

What really matters in answering the question about the 'loadbalancer'
resource is whether we need multiple L2 ports per single loadbalancer. If we
do, that could be a justification to add it. Right now the common perception
is that this is not needed and hence 'loadbalancer' is not required in the
API or object model.

Thanks,
Eugene.


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-08 Thread Carlos Garza

On May 8, 2014, at 8:01 AM, Eugene Nikanorov 
enikano...@mirantis.com
 wrote:

Hi Adam,

My comments inline:

1. We shouldn't be looking at the current model and deciding which object is 
the root object, or what object to rename as a "loadbalancer"... That's 
totally backwards! *We don't define which object is named the "loadbalancer" by 
looking for the root object -- we define which object is the root by looking 
for the object named "loadbalancer".* I had hoped that was clear from the JSON 
examples in our API proposal, but I think maybe there was too much focus on the 
object model chart, where this isn't nearly as clearly communicated.

2. As I believe I have also said before, if I'm using "X as a Service" then I 
expect to get back an object of type X. I would be very frustrated/confused 
if, as a user, LBaaS returned me an object of type "VIP" when I POST a Create 
for my new load balancer. On this last point, I feel like I've said this enough 
times that I'm beating a dead horse...

I think we definitely should be looking at the existing API/BBG proposal for the 
root object.
The question about whether we need an additional 'Loadbalancer' resource or not 
is not a question about terminology, so (2) is not a valid argument.

    It's pretty awkward to have a REST API that doesn't have a resource 
representation of the object it's supposed to be creating and handing out. It's 
really awkward to identify a loadbalancer by VIP id.
That's like going to a car dealership API and only being able to look up 
a car by its parking spot ID.

    Do you believe Neutron/LBaaS is actually "LoadBalancerVip as a Service"? 
That would entirely explain the disconnect we are having with you.

What really matters in answering the question about the 'loadbalancer' resource 
is whether we need multiple L2 ports per single loadbalancer. If we do, that 
could be a justification to add it. Right now the common perception is that this 
is not needed and hence 'loadbalancer' is not required in the API or object model.

    Are you saying that we should have a loadbalancer resource only in the 
case where we want it to span multiple L2 networks, as if it were a router? I 
don't see how you arrived at that conclusion. Can you explain further.

Thanks Carlos.


Thanks,
Eugene.



Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-08 Thread Eugene Nikanorov
Hi Carlos,

Are you saying that we should have a loadbalancer resource only in
 the case where we want it to span multiple L2 networks, as if it were a
 router? I don't see how you arrived at that conclusion. Can you explain
 further.

No, I mean that a loadbalancer instance is needed if we need several
*different* L2 endpoints for several front ends.
That's basically 'virtual appliance' functionality that we discussed at
today's meeting.

Eugene.


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-08 Thread Carlos Garza

On May 8, 2014, at 2:45 PM, Eugene Nikanorov 
enikano...@mirantis.com wrote:

Hi Carlos,

Are you saying that we should have a loadbalancer resource only in the 
case where we want it to span multiple L2 networks, as if it were a router? I 
don't see how you arrived at that conclusion. Can you explain further.
No, I mean that a loadbalancer instance is needed if we need several *different* 
L2 endpoints for several front ends.
That's basically 'virtual appliance' functionality that we discussed at 
today's meeting.

   From looking at the IRC log, it looks like nothing conclusive came out of the 
meeting. I don't understand a lot of the conclusions you arrive at. For example, 
you're rejecting the notion of a loadbalancer concrete object unless it's needed 
to include multi-L2-network support. Will you make an honest effort to describe 
your objections here on the ML? Because if we can't resolve it here, it's going 
to spill over into the summit. I certainly don't want this to dominate the summit.



Eugene.


[openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-07 Thread Stephen Balukoff
Howdy y'all!

Per the e-mail I sent out earlier today, I wanted to share some thoughts on
the API proposals from Rackspace and Blue Box that we're currently working
on evaluating, presumably to decide which version will be the
starting point from which we make revisions going forward.  I'll try to
be relatively brief.

*First, some thanks!*

The Rackspace team really pulled out all the stops this last week, producing an
abundance of documentation that very thoroughly covers a bunch of the use
cases available at that time in excruciating detail. They've produced a
glossary and suggested object diagram, and their proposal is actually
looking pretty dang good in my opinion.

I'm especially interested in hearing your opinion on the stuff I'm laying
out below-- especially if I'm misunderstanding or mis-representing your
viewpoint on any issue, eh!

*Why the review process we're using probably won't be conclusive*

So, at last week's meeting we decided that the RAX team and I would
work on producing a spreadsheet listing the use cases in question and go
over how each of these would be accomplished using our API.

Having gone through this exercise, I see the following problems with this
approach to evaluation:

   - While we have thorough documentation, it's probably more than the
   average participant here is going to go through with a fine-toothed comb to
   find the subtle differences. Furthermore, there's already a huge amount of
   documentation produced, and we've only gone over about 1/5th of the use
   cases!
   - Since the BBG API proposal is actually a revision of the
   Rackspace proposal in many ways, at its core, our models are actually so
   similar that the subtle differences don't come out with general / generic
   use cases in many ways. And the only use cases we've evaluated so far are
   pretty general. :/
   - In fact, the only significant ways in which the proposals differ are:
  - BBG proposal uses VIP as single-call interface, and it's the root
  of the object tree from the user's perspective.
  - Rackspace proposal uses loadbalancer as single-call interface,
  and it's the root of the object tree from the user's perspective (And the
  Rackspace loadbalancer is essentially the same thing as the BBG
  Listener)
  - Rackspace proposal allows n:m relationship between loadbalancer and
  VIP, mostly to solve the IPv6 + IPv4 use case.
   - BBG proposal just has attributes for both an IPv4 address and an
  IPv6 address in its VIP object. (Meaning it's slightly different than a
  VIP as people are likely to assume what that term means.)
  - Rackspace and BBG proposals differ quite a bit in how they
  accomplish L7 functionality and SSL certificate functionality.
  Unfortunately, none of use cases we've evaluated touch
significantly on any
  of the differences (with the exception that some L7
functionality has been
  expanded upon in some use cases).

We could go through and add more use cases which touch on the differences
between our models, but that feels to me like those would be pretty
contrived and not reflect real-world demands. Further, given how much
documentation we've produced on this so far, it's unlikely anyone would
read it all, let alone in time for a speedy decision. And it probably
doesn't make sense to expand on specific use cases until we know they're
actually needed by most prospective LBaaS users (ie. once we know the
results of the survey that Sam started.)

I hate to say it, but it feels like we're futilely spinning our wheels on
this one.

In other words, if we're going to come to consensus on this, we need to
change the evaluation process somewhat, eh.

*Maybe we should wait until the survey results are out?*
No sense solving for use cases that nobody cares about, eh.

*Maybe we should just look at the differences?*
The core object models we've proposed are almost identical. Change the term
"Listener" to "Load Balancer" in my model, and you've essentially got the
same thing as the Rackspace model.

So maybe it makes sense just to look at the differences, and how to best
resolve these?

For example, I understand the Rackspace model is using a join object
between load balancer and vip so these can have an n:m relationship--
and this is almost entirely to solve use case #14 in the document. I solved
this in mine by just twisting slightly the definition of a VIP to include
both an (optional) IPv4 and IPv6 address. However, there might be use cases
that someone comes up with where having a Listener/load balancer associated
with multiple VIPs makes sense (some horizontal scaling algorithms might do
it this way).

In any case, this difference is easily resolved by adding the n:m join
object to my model as well. (Assuming there's a valid use case to justify
this.) All it would mean, from the user perspective, is that when they look
at the attributes of a Listener, the vip_id would be an array instead of a