Hi y'all!

I've been reading through the LoadBalancerInstance description as outlined
here and have some feedback:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance

First off, I agree that we need a container object and that the pool
shouldn't be the object root. This container object is going to have some
attributes associated with it which will then apply to all related objects
further down the chain.  (I'm thinking, for example, that it may make
sense for the loadbalancer to have 'network_id' as an attribute, and the
associated VIPs, pools, etc. will inherit this from the container object.)
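
To make this concrete, here's a rough Python sketch of the inheritance
idea (all names here are illustrative, not anything from the spec):

    class LoadBalancerContainer(object):
        def __init__(self, container_id, network_id):
            self.id = container_id
            self.network_id = network_id  # shared by all child objects

    class Vip(object):
        def __init__(self, vip_id, container):
            self.id = vip_id
            self.container = container

        @property
        def network_id(self):
            # Inherited from the container rather than duplicated per-VIP.
            return self.container.network_id

    lb = LoadBalancerContainer('lb-1', 'net-42')
    vip = Vip('vip-1', lb)
    assert vip.network_id == 'net-42'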

One thing that isn't yet clear to me: Is the 'loadbalancer' object
meant, at least eventually, to be associated with an actual load balancer
device of some kind (be that the neutron node with haproxy, a vendor
appliance or a software appliance)?

If not, then I think we should use a name other than 'Loadbalancer' so we
don't confuse people. I realize I might just be harping on one of the two
truly difficult problems in software engineering (which are: Naming things,
cache invalidation, and off-by-one errors). But if a 'loadbalancer' object
isn't meant to actually be synonymous with a load balancer appliance of
some kind, the object needs a new name.

If the object and the device are meant to be essentially synonymous, then I
think we're starting off too simplistically here, and the model proposed is
going to need another significant revision when we add additional features
later on.  I suspect we'll be painting ourselves into a corner with the
LoadBalancerInstance as proposed. Specifically, I'm thinking about:


   - Operational concerns around the life cycle of a physical piece of
   infrastructure. If we're going to replace a physical load balancer, it
   often makes sense to have both the old and new load balancer defined in the
   system at the same time during the transition. If you then swap all the
   VIPs from the old to the new, suddenly all the child objects have their
   loadbalancer_id changed, which will often wreak havoc on client application
   code (whose authors really shouldn't be hard-coding things like
   loadbalancer_id, but will do so anyway. :P ) Such transitions are much more
   easily accomplished if both load balancers can exist within an overarching
   container object (i.e. "cluster" in my proposal) which never needs to be
   swapped out.

   - Having tenants know about loadbalancer_id (if it corresponds with
   physical hardware) feels inherently un-cloud-like to me. Better that said
   tenants know about the container object (which doesn't actually correspond
   with any single physical piece of infrastructure) and not concern
   themselves with physical hardware.

   - In an active-standby or active-active HA load balancer topology (i.e.
   anything other than 'single device' topology), multiple load balancers will
   carry the same configuration, as far as VIPs, Pools, Members, etc. are
   concerned. Therefore, it doesn't make sense for the 'container' object to
   be synonymous with a single device. It might be possible to hide this
   complexity from the model by having HA features exist/exposed only within
   the driver, but this seems like really backward thinking to me: Why
   shouldn't we allow API-based configuration of load balancer cluster
   topology within our model, rather than forcing clients to talk to a driver
   directly for these features?  (This is one of the hack-ish work-arounds I
   alluded to in my e-mail from Monday, which is both annoying and avoidable
   with a model that can accurately reflect the topology we're working with.)


   - When we add HA and auto-scaling, Heat is going to need to be able to
   manipulate these objects in ways that allow it to make decisions about what
   physical or virtual load balancers it is going to spin up / shut down /
   configure, etc. Again, hiding this complexity within the driver seems like
   the wrong way to go.

   - Side note: It's possible to still have drivers / load balancer
   appliances which do not support certain types of HA or auto-scaling
   topologies. In this case, it probably makes sense to add some kind of
   'capabilities' list that the driver publishes to the lbaas daemon when it's
   loaded. (A rough sketch of what I mean follows below.)
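
Very rough sketch of what that capabilities idea might look like (all
names here are hypothetical, just to illustrate the shape of it):

    class HaproxyDriver(object):
        capabilities = frozenset(['single_device', 'active_standby'])

    class LbaasDaemon(object):
        def __init__(self):
            self.drivers = {}

        def load_driver(self, name, driver):
            # The driver's capability list gets published at load time.
            self.drivers[name] = driver

        def check_topology(self, driver_name, topology):
            if topology not in self.drivers[driver_name].capabilities:
                raise ValueError("driver %r does not support %r topology"
                                 % (driver_name, topology))

    daemon = LbaasDaemon()
    daemon.load_driver('haproxy', HaproxyDriver())
    daemon.check_topology('haproxy', 'active_standby')   # fine
    # daemon.check_topology('haproxy', 'active_active')  # would raise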

So, I won't elaborate on the model I would propose we use instead of the
above, since I've already done so before. I'll just say that what we're
trying to solve with the LoadBalancerInstance resource in this proposal can
also be solved with the 'cluster' and 'loadbalancer' resources in the model
I've proposed, and my proposal is also capable of supporting HA and
auto-scaling topologies without further significant model changes.

Beyond this, other feedback I have:


   - I don't recommend having loadbalancer_id added to the pool, member,
   and healthmonitor objects. I see no reason a pool (and its child objects)
   needs to be restricted to a single load balancer (and it may be very
   advantageous in large clusters for it not to be).

   - In general, I prefer DRY code, so if we can avoid adding the
   loadbalancer_id attribute to existing resources except for where it's
   really needed, that's what I'd recommend. (Do we expect significant savings
   by repeating this attribute in various locations and avoiding one SQL
   query? It seems to me we're inviting annoying bugs we'll have to work out
   by having that field essentially act as a cache for the authoritative
   source of information-- i.e. the load balancer (cluster) object itself.
   See the sketch after this list.)

   - Having loadbalancer_id (cluster_id in my model) as an attribute of the
   VIP makes sense. I can't think of any reason a given VIP would be
   associated with multiple load balancers (clusters).
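
To illustrate the DRY point above, a rough sketch against a hypothetical
schema: if cluster_id lives authoritatively on the VIP, a pool's
cluster(s) can be derived with one query instead of being cached on every
pool/member/healthmonitor row:

    POOL_CLUSTERS_SQL = """
    SELECT DISTINCT vips.cluster_id
      FROM vips
     WHERE vips.pool_id = %(pool_id)s
    """

    def cluster_ids_for_pool(cursor, pool_id):
        # One extra query against the authoritative source.  Returning a
        # list also leaves room for a pool to serve VIPs in more than one
        # cluster, per my earlier point about not restricting pools.
        cursor.execute(POOL_CLUSTERS_SQL, {'pool_id': pool_id})
        return [row[0] for row in cursor.fetchall()]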


Thanks,
Stephen


On Tue, Feb 11, 2014 at 7:12 AM, Eugene Nikanorov
<enikano...@mirantis.com> wrote:

>
> The proposed model is described here, although without a picture:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance
>


> autoscaling is something that is generally out of scope of the Neutron
> project and more in the scope of the Heat project.
> Automated provisioning of load balancer devices as was said is what
> 'service vm framework' is trying to address.
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807