Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-07 Thread Brandon Logan
 to use it if what we already have works for what we need?  If what
we already have doesn't, then we'd probably not use an old and busted
controller and an old and busted amphora version.

That's my take on it currently.  It is subject to change of course.

  
 
 Lastly, I interpreted the word “VM driver” in the spec along the lines of
 what we have in libra: a driver interface on the Amphora agent that
 abstracts starting/stopping haproxy if we end up on something different,
 and abstracts writing the haproxy file. But that is for the agent on the
 Amphora. I am sorry I got confused that way when reading the 0.5 spec and
 I am therefore happy we can have that discussion to make things more clear.

I'm sure more things will come up that we've all made assumptions about, and
while reading the specs we read what we thought they said when they actually
didn't.

Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-07 Thread Stephen Balukoff
Hi German and Brandon,

Responses in-line:


On Sun, Sep 7, 2014 at 12:21 AM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 Hi German,

 Comments in-line

 On Sun, 2014-09-07 at 04:49 +, Eichberger, German wrote:
  Hi Steven,
 
 
 
  Thanks for taking the time to lay out the components clearly. I think
  we are pretty much on the same page ☺
 
 
 
  Driver vs. Driver-less
 
  I strongly believe that REST is a cleaner interface/integration point
  – but  if even Brandon believes that drivers are the better approach
  (having suffered through the LBaaS v1 driver world which is not an
  advertisement for this approach) I will concede on that front. Let’s
  hope nobody makes an asynchronous driver and/or writes straight to the
  DB ☺ That said I still believe that adding the driver interface now
  will lead to some more complexity and I am not sure we will get the
  interface right in the first version: so let’s agree to develop with a
  driver in mind but don’t allow third party drivers before the
  interface has matured. I think that is something we already sort of
  agreed to, but I just want to make that explicit.

 I think the LBaaS V1/V2 driver approach works well enough.  The problems
 that arose from it were because most entities were root level objects
 and thus had some independent properties to them.  For example, a pool
 can exist without a listener, and a listener can exist without a load
 balancer.  The load balancer was the entity tied to the driver.  For
 Octavia, we've already agreed that everything will be a direct or
 indirect child of a load balancer so this should not be an issue.

 I agree with you that we will not get the interface right the first
 time.  I hope no one was planning on starting another driver other than
 haproxy anytime before 1.0 because I vaguely remember 2.0 being the time
 that multiple drivers can be used.  By that time the interface should be
 in a good shape.


I'm certainly comfortable with the self-imposed development restriction
that we develop only the haproxy driver until at least 1.0, and that we
don't allow multiple drivers until 2.0. This seems reasonable, as well, in
order to follow our constitutional mandate that the reference
implementation always be open source and with unencumbered licensing. (It
seems to follow logically that the open source reference driver must
necessarily lead any 3rd party drivers in feature development.)

Also, the protocol the haproxy driver will be speaking to the Octavia
haproxy amphoras will certainly be REST-like, if not completely RESTful. I
don't think anyone is disagreeing about that. (Keep in mind that REST
doesn't demand JSON or XML be used for resource representations--
 haproxy.cfg can still be a valid listener resource representation and
the interface still qualifies as RESTful.) Again, I'm still working on that
API spec, so I'd prefer to have a draft of it in hand before we get too much
further into the specifics of that API, so we have something concrete to
discuss (and don't waste time on non-specific speculative objections), eh.
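
(Purely to illustrate that point about representations, and not the actual
API -- the spec is still a draft -- pushing a "listener" resource whose
representation is the haproxy.cfg text itself could be as simple as the
sketch below; the endpoint path, port and TLS handling are assumptions.)

    # Hypothetical sketch only: endpoint path, port and auth are assumptions,
    # pending the real amphora API spec.
    import requests

    def push_listener_config(amp_address, listener_id, haproxy_cfg_text):
        """PUT a rendered haproxy.cfg as the listener's resource representation."""
        url = "https://%s:9443/listeners/%s/haproxy" % (amp_address, listener_id)
        resp = requests.put(
            url,
            data=haproxy_cfg_text,
            headers={"Content-Type": "text/plain"},  # REST doesn't require JSON/XML
            verify=False,  # placeholder; a real deployment would verify the amp cert
            timeout=10,
        )
        resp.raise_for_status()
        return resp.status_code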


 
 
  Multiple drivers/version for the same Controller
 
  This is a really contentious point for us at HP: If we allow say
  drivers or even different versions of the same driver, e.g. A, B, C to
  run in parallel, testing will involve testing all the possible
  (version) combinations to avoid potential side effects. That can get
  extensive really quick. So HP is proposing, given that we will have
  100s of controllers anyway, to limit the number of drivers per
  controller to 1 to aid testing. We can revisit that at a future time
  when our testing capabilities have improved but for now I believe we
  should choose that to speed things up. I personally don’t see the need
  for multiple drivers per controller – in an operator grade environment
  we likely don’t need to “save” on the number of controllers ;-) The
  only reason we might need two drivers on the same controller is if an
  Amphora for whatever reason needs to be talked to by two drivers.
  (e.g. you install nginx and haproxy  and have a driver for each). This
  use case scares me so we should not allow it.
 
  We also see some operational simplifications from supporting only one
  driver per controller: If we have an update for driver A we don’t need
  to touch any controller running Driver B. Furthermore we can keep the
  old version running but make sure no new Amphora gets scheduled there
  to let it wind down with attrition and then stop that controller when
  it doesn’t have any more Amphora to serve.

 I also agree that we should, for now, only allow 1 driver at a time and
 revisit it after we've got a solid grasp on everything.  I honestly
 don't think we will have multiple drivers for a while anyway, so by the
 time we have a solid grasp on it we will know the complexities multiple
 drivers would introduce and can either make the single-driver rule permanent
 or implement multi-driver support.


This sounds reasonable to me. When it comes time to support 

Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-06 Thread Eichberger, German
Hi Steven,

Thanks for taking the time to lay out the components clearly. I think we are 
pretty much on the same page ☺

Driver vs. Driver-less
I strongly believe that REST is a cleaner interface/integration point – but  if 
even Brandon believes that drivers are the better approach (having suffered 
through the LBaaS v1 driver world which is not an advertisement for this 
approach) I will concede on that front. Let’s hope nobody makes an asynchronous 
driver and/or writes straight to the DB ☺ That said I still believe that adding 
the driver interface now will lead to some more complexity and I am not sure we 
will get the interface right in the first version: so let’s agree to develop 
with a driver in mind but don’t allow third party drivers before the interface 
has matured. I think that is something we already sort of agreed to, but I just 
want to make that explicit.

Multiple drivers/version for the same Controller
This is a really contentious point for us at HP: If we allow say drivers or 
even different versions of the same driver, e.g. A, B, C to run in parallel, 
testing will involve testing all the possible (version) combinations to avoid 
potential side effects. That can get extensive really quick. So HP is 
proposing, given that we will have 100s of controllers anyway, to limit the 
number of drivers per controller to 1 to aid testing. We can revisit that at a 
future time when our testing capabilities have improved but for now I believe 
we should choose that to speed things up. I personally don’t see the need for 
multiple drivers per controller – in an operator grade environment we likely 
don’t need to “save” on the number of controllers ;-) The only reason we might 
need two drivers on the same controller is if an Amphora for whatever reason 
needs to be talked to by two drivers. (e.g. you install nginx and haproxy  and 
have a driver for each). This use case scares me so we should not allow it.
We also see some operational simplifications from supporting only one driver 
per controller: If we have an update for driver A we don’t need to touch any 
controller running Driver B. Furthermore we can keep the old version running 
but make sure no new Amphora gets scheduled there to let it wind down with 
attrition and then stop that controller when it doesn’t have any more Amphora 
to serve.

Lastly, I interpreted the word “VM driver” in the spec along the lines of what 
we have in libra: a driver interface on the Amphora agent that abstracts 
starting/stopping haproxy if we end up on something different, and abstracts 
writing the haproxy file. But that is for the agent on the Amphora. I am sorry 
I got confused that way when reading the 0.5 spec and I am therefore happy we 
can have that discussion to make things more clear.
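
(To make that concrete, a minimal sketch of such an agent-side interface is
below. Class names, methods, paths and commands are all illustrative
assumptions, not anything taken from the 0.5 spec.)

    # Libra-style "VM driver" on the amphora agent: hides which backend
    # (haproxy, nginx, ...) actually serves traffic. Everything here is a
    # placeholder sketch.
    import subprocess

    class AmphoraBackendDriver(object):
        """Interface the amphora agent calls; one implementation per backend."""

        def write_config(self, listener_id, config_text):
            raise NotImplementedError

        def start(self, listener_id):
            raise NotImplementedError

        def stop(self, listener_id):
            raise NotImplementedError

    class HaproxyBackendDriver(AmphoraBackendDriver):
        CONF_PATH = "/var/lib/octavia/%s/haproxy.cfg"  # placeholder path

        def write_config(self, listener_id, config_text):
            with open(self.CONF_PATH % listener_id, "w") as f:
                f.write(config_text)

        def start(self, listener_id):
            subprocess.check_call(["haproxy", "-f", self.CONF_PATH % listener_id])

        def stop(self, listener_id):
            # Crude placeholder for stopping just this listener's process.
            subprocess.check_call(
                ["pkill", "-f", "haproxy -f " + self.CONF_PATH % listener_id])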

German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, September 05, 2014 6:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Octavia] Question about where to render haproxy 
configurations

Hi German,

Responses in-line:

On Fri, Sep 5, 2014 at 2:31 PM, Eichberger, German 
german.eichber...@hp.com wrote:
Hi Stephen,

I think this is a good discussion to have and will make it more clear why we 
chose a specific design. I also believe by having this discussion we will make 
the design stronger.  I am still a little bit confused what the 
driver/controller/amphora agent roles are. In my driver-less design we don’t 
have to worry about the driver which most likely in haproxy’s case will be 
split to some degree between controller and amphora device.

Yep, I agree that a good technical debate like this can help both to get many 
people's points of view and can help determine the technical merit of one 
design over another. I appreciate your vigorous participation in this process. 
:)

So, the purpose of the controller / driver / amphora and the responsibilities 
they have are somewhat laid out in the Octavia v0.5 component design document, 
but it's also possible that there weren't enough specifics in that document to 
answer the concerns brought up in this thread. So, to that end in my mind, I 
see things like the following:

The controller:
* Is responsible for concerns of the Octavia system as a whole, including the 
intelligence around interfacing with the networking, virtualization, and other 
layers necessary to set up the amphorae on the network and getting them 
configured.
* Will rarely, if ever, talk directly to the end-systems or -services (like 
Neutron, Nova, etc.). Instead it goes through a clean driver interface for 
each of these.
* The controller has direct access to the database where state is stored.
* Must load at least one driver, may load several drivers and choose between 
them based on configuration logic (ex. flavors, config file, etc.)

The driver:
* Handles all communication to or from the amphorae
* Is loaded by the controller (ie. its interface

Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-05 Thread Stephen Balukoff
Hi German,

Thanks for your reply! My responses are in-line below, and of course you
should feel free to counter my counter-points. :)

For anyone else paying attention and interested in expressing a voice here,
we'll probably be voting on this subject at next week's Octavia meeting.


On Thu, Sep 4, 2014 at 9:13 PM, Eichberger, German german.eichber...@hp.com
 wrote:

  Hi,



 Stephen visited us today (the joy of spending some days in Seattle ☺) and
 we discussed that  further (and sorry for using VM – not sure what won):


Looks like Amphora won, so I'll start using that terminology below.


  1.   We will only support one driver per controller, e.g. if you
 upgrade a driver you deploy a new controller with the new driver and either
 make him take over existing VMs (minor change) or spin  up new ones (major
 change) but keep the “old” controller in place until it doesn’t serve any
 VMs any longer

Why? I agree with the idea of one back-end type per driver, but why
shouldn't we support more than one driver per controller?

I agree that you probably only want to support one version of each driver
per controller, but it seems to me it shouldn't be that difficult to write
a driver that knows how to speak different versions of back-end amphorae.
Off the top of my head I can think of two ways of doing this:

1. For each new feature or bugfix added, keep track of the minimal version
of the amphora required to use that feature/bugfix. Then, when building
your configuration, as various features are activated in the configuration,
keep a running track of the minimal amphora version required to meet that
configuration. If the configuration version is higher than the version of
the amphora you're going to update, you can pre-emptively return an error
detailing an unsupported configuration due to the back-end amphora being
too old. (What you do with this error-- fail, recycle the amphora,
whatever-- is up to the operator's policy at this point, though I would
probably recommend just recycling the amphora.) If a given user's
configuration never makes use of advanced features later on, there's no
rush to upgrade their amphoras, and new controllers can push configs that
work with the old amphoras indefinitely.
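
(A rough sketch of what option 1 could look like; the feature names and
version numbers below are made up purely to show the bookkeeping.)

    # Minimum amphora image version that supports each feature/bugfix.
    FEATURE_MIN_AMP_VERSION = {
        "basic_http": (0, 5),
        "udp_health_heartbeat": (0, 6),
        "tls_termination": (0, 7),
    }

    def required_amp_version(enabled_features):
        """Highest minimum version demanded by any feature in this config."""
        return max(FEATURE_MIN_AMP_VERSION[f] for f in enabled_features)

    def check_amp_compatibility(amp_version, enabled_features):
        needed = required_amp_version(enabled_features)
        if amp_version < needed:
            # What to do here (fail, recycle the amphora, ...) is operator policy.
            raise ValueError("config needs amphora >= %r, amphora is %r"
                             % (needed, amp_version))

For example, check_amp_compatibility((0, 5), ["tls_termination"]) would raise
before anything is pushed, which is the pre-emptive error described above.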

2. If the above sounds too complicated, you can forego that and simply
build the config, try to push it to the amphora, and see if you get an
error returned.  If you do, depending on the nature of the error you may
decide to recycle the amphora or take other actions. As there should never
be a case where you deploy a controller that generates configs with
features that no amphora image can satisfy, re-deploying the amphora with
the latest image should correct this problem.
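
(And a correspondingly small sketch of option 2; UnsupportedConfigError and
recycle_amphora are hypothetical names for whatever the driver and controller
end up exposing.)

    class UnsupportedConfigError(Exception):
        """Hypothetical error a driver would raise for a too-old amphora."""

    def apply_config(driver, amphora, config, recycle_amphora):
        """Push the config; if the amp can't satisfy it, recycle and retry once."""
        try:
            driver.push_config(amphora, config)
        except UnsupportedConfigError:
            # Re-deploying the amphora with the latest image should fix this,
            # since no controller should generate configs no image can satisfy.
            amphora = recycle_amphora(amphora)
            driver.push_config(amphora, config)
        return amphora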

There are probably other ways to manage this that I'm not thinking of as
well-- these are just the two that occurred to me immediately.

Also, your statement above implies some process around controller upgrades
which hasn't been actually decided yet. It may be that we recommend a
different upgrade path for controllers.


  2.   If we render configuration files on the VM we only support one
 upgrade model (replacing the VM) which might simplify development as
 opposed to the driver model where we need to write code to push out
 configuration changes to all VMs for minor changes + write code to failover
 VMs for major changes

So, you're saying it's a *good* thing that you're forced into upgrading all
your amphoras for even minor changes because having only one upgrade path
should make the code simpler.

For large deployments, I heartily disagree.


  3.   I am afraid that half baked drivers will break the controller
 and I feel it’s easier to shoot VMs with half baked renderers  than the
 controllers.


I defer to Doug's statement on this, and will add the following:

Breaking a controller temporarily does not cause a visible service
interruption for end-users. Amphorae keep processing load-balancer
requests. All it means is that tenants can't make changes to existing load
balanced services until the controllers are repaired.

But blowing away an amphora does create a visible service interruption for
end-users. This is especially bad if you don't notice this until after
you've gone through and updated your fleet of 10,000+ amphorae because your
upgrade process requires you to do so.

Given the choice of scrambling to repair a few hundred broken controllers
while almost all end-users are oblivious to the problem, or scrambling to
repair 10's of thousands of amphorae while service stops for almost all
end-users, I'll take the former.  (The former is a relatively minor note on
a service status page. The latter is an article about your cloud outage on
major tech blogs and a damage-control press-release from your PR
department.)


  4.   The main advantage by using an Octavia format to talk to VMs is
 that we can mix and match VMs with different properties (e.g. nginx,
 haproxy) on the same controller because the implementation 

Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-05 Thread Eichberger, German
Hi Stephen,

I think this is a good discussion to have and will make it more clear why we 
chose a specific design. I also believe by having this discussion we will make 
the design stronger.  I am still a little bit confused what the 
driver/controller/amphora agent roles are. In my driver-less design we don’t 
have to worry about the driver which most likely in haproxy’s case will be 
split to some degree between controller and amphora device.

So let’s try to sum up what we want a controller to do:

-  Provision new amphora devices

-  Monitor/Manage health

-  Gather stats

-  Manage/Perform configuration changes

The driver as described would be:

-  Render configuration changes in a specific format, e.g. haproxy

Amphora Device:

-  Communicate with the driver/controller to make things happen

So as Doug pointed out I can make a very thin driver which basically passes 
everything through to the Amphora Device, or at the other end of the spectrum I 
can make a very thick driver which manages all aspects from the amphora life 
cycle to whatever (aka kitchen sink). I know we are going for utmost 
flexibility but I believe:

-  With building an haproxy centric controller we don’t really know 
which things should be controller/which thing should be driver. So my shortcut 
is not to build a driver at all ☺

-  More flexibility increases complexity and makes it confusing for 
people to develop components. Should this concern go into the controller, the 
driver, or the amphora VM? Two of them? Three of them? Limiting choices makes 
it simpler to achieve that.

HP's worry is that creating the potential to run multiple drivers (and versions 
of drivers), on multiple versions of controllers, on multiple versions of 
amphora devices creates a headache for testing. For example, does the version 
4.1 haproxy driver work with the version 4.2 controller on a 4.0 amphora 
device? Which compatibility matrix do we need to build/test? Limiting one 
driver to one controller can help with making that manageable.
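
(Operationally, "one driver per controller" could be as simple as each
controller process pinning a single driver in its config file; the option
name and group below are invented for illustration.)

    # Sketch only: a controller instance declares exactly one amphora driver.
    from oslo_config import cfg

    OPTS = [
        cfg.StrOpt("amphora_driver",
                   default="haproxy_rest",
                   help="The single amphora driver this controller loads."),
    ]
    cfg.CONF.register_opts(OPTS, group="controller")

A controller running driver A and one running driver B are then just two
processes with different config files, which also fits the wind-down-by-
attrition upgrade model described above.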

Thanks,
German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, September 05, 2014 10:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Octavia] Question about where to render haproxy 
configurations

Hi German,

Thanks for your reply! My responses are in-line below, and of course you should 
feel free to counter my counter-points. :)

For anyone else paying attention and interested in expressing a voice here, 
we'll probably be voting on this subject at next week's Octavia meeting.

On Thu, Sep 4, 2014 at 9:13 PM, Eichberger, German 
german.eichber...@hp.com wrote:
Hi,

Stephen visited us today (the joy of spending some days in Seattle☺) and we 
discussed that  further (and sorry for using VM – not sure what won):

Looks like Amphora won, so I'll start using that terminology below.


1.   We will only support one driver per controller, e.g. if you upgrade a 
driver you deploy a new controller with the new driver and either make him take 
over existing VMs (minor change) or spin  up new ones (major change) but keep 
the “old” controller in place until it doesn’t serve any VMs any longer
Why? I agree with the idea of one back-end type per driver, but why shouldn't 
we support more than one driver per controller?

I agree that you probably only want to support one version of each driver per 
controller, but it seems to me it shouldn't be that difficult to write a driver 
that knows how to speak different versions of back-end amphorae. Off the top of 
my head I can think of two ways of doing this:

1. For each new feature or bugfix added, keep track of the minimal version of 
the amphora required to use that feature/bugfix. Then, when building your 
configuration, as various features are activated in the configuration, keep a 
running track of the minimal amphora version required to meet that 
configuration. If the configuration version is higher than the version of the 
amphora you're going to update, you can pre-emptively return an error detailing 
an unsupported configuration due to the back-end amphora being too old. (What 
you do with this error-- fail, recycle the amphora, whatever-- is up to the 
operator's policy at this point, though I would probably recommend just 
recycling the amphora.) If a given user's configuration never makes use of 
advanced features later on, there's no rush to upgrade their amphoras, and new 
controllers can push configs that work with the old amphoras indefinitely.

2. If the above sounds too complicated, you can forego that and simply build 
the config, try to push it to the amphora, and see if you get an error 
returned.  If you do, depending on the nature of the error you may decide to 
recycle the amphora or take other actions. As there should never be a case 
where you deploy a controller that generates

Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-05 Thread Stephen Balukoff
Hi German,

Responses in-line:


On Fri, Sep 5, 2014 at 2:31 PM, Eichberger, German german.eichber...@hp.com
 wrote:

  Hi Stephen,



 I think this is a good discussion to have and will make it more clear why
 we chose a specific design. I also believe by having this discussion we
 will make the design stronger.  I am still a little bit confused what the
 driver/controller/amphora agent roles are. In my driver-less design we
 don’t have to worry about the driver which most likely in haproxy’s case
 will be split to some degree between controller and amphora device.


Yep, I agree that a good technical debate like this can help both to get
many people's points of view and can help determine the technical merit of
one design over another. I appreciate your vigorous participation in this
process. :)

So, the purpose of the controller / driver / amphora and the
responsibilities they have are somewhat laid out in the Octavia v0.5
component design document, but it's also possible that there weren't enough
specifics in that document to answer the concerns brought up in this
thread. So, to that end in my mind, I see things like the following:

The controller:
* Is responsible for concerns of the Octavia system as a whole, including
the intelligence around interfacing with the networking, virtualization,
and other layers necessary to set up the amphorae on the network and
getting them configured.
* Will rarely, if ever, talk directly to the end-systems or -services (like
Neutron, Nova, etc.). Instead it goes through a clean driver interface
for each of these.
* The controller has direct access to the database where state is stored.
* Must load at least one driver, may load several drivers and choose
between them based on configuration logic (ex. flavors, config file, etc.)

The driver:
* Handles all communication to or from the amphorae
* Is loaded by the controller (ie. its interface with the controller is a
base class, associated methods, etc. It's objects and code, not a RESTful
API.)
* Speaks amphora-specific protocols on the back-end. In the case of the
reference haproxy amphora, this will most likely be in the form of a
RESTful API with an agent on the amp, as well as (probably) HMAC-signed UDP
health, status and stats messages from the amp to the driver.
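
(For the HMAC-signed UDP part, a stdlib-only sketch is below; the payload
layout, port and key distribution are assumptions for illustration, and the
key is expected to be bytes shared between amp and driver.)

    import hashlib
    import hmac
    import json
    import socket

    def send_heartbeat(key, amp_id, dest=("controller.example.com", 5555)):
        payload = json.dumps({"amphora_id": amp_id, "status": "OK"}).encode("utf-8")
        digest = hmac.new(key, payload, hashlib.sha256).hexdigest().encode("utf-8")
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(payload + b"\n" + digest, dest)

    def verify_heartbeat(key, datagram):
        payload, _, digest = datagram.rpartition(b"\n")
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest().encode("utf-8")
        if not hmac.compare_digest(expected, digest):
            return None  # drop unauthenticated heartbeats
        return json.loads(payload.decode("utf-8"))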

The amphora:
* Does the actual load balancing
* Is managed by the controller through the driver.
* Should be as dumb as possible.
* Comes in different types, based on the software in the amphora image.
(Though all amps of a given type should be managed by the same driver.)
Types might include haproxy, nginx, haproxy + nginx, 3rd party
vendor X, etc.
* Should never have direct access to the Octavia database, and therefore
attempt to be as stateless as possible, as far as configuration is
concerned.

To be honest, our current product does not have a driver layer per se,
since we only interface with one type of back-end. However, we still render
our haproxy configs in the controller. :)




 So let’s try to sum up what we want a controller to do:

 -  Provision new amphora devices

 -  Monitor/Manage health

 -  Gather stats

 -  Manage/Perform configuration changes



 The driver as described would be:

 -  Render configuration changes in a specific format, e.g. haproxy



 Amphora Device:

 -  Communicate with the driver/controller to make things happen



 So as Doug pointed out I can make a very thin driver which basically
 passes everything through to the Amphora Device, or at the other end of the
 spectrum I can make a very thick driver which manages all aspects from the
 amphora life cycle to whatever (aka kitchen sink). I know we are going for
 utmost flexibility but I believe:


So, I'm not sure it's fair to characterize the driver I'm suggesting as
very thick. If you get right down to it, I'm pretty sure the only major
thing we disagree on here is where the haproxy configuration is rendered:
 just before it's sent over the wire to the amphora, or just after its
JSON-equivalent is received over the wire from the controller.
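
(To make the disagreement concrete: the same rendering step just moves across
the wire. A toy sketch, with a made-up template and listener dict that are
nothing like the real Octavia data model:)

    import json
    import jinja2

    HAPROXY_TEMPLATE = jinja2.Template(
        "frontend {{ listener.id }}\n"
        "    bind {{ listener.vip }}:{{ listener.port }}\n"
        "    default_backend {{ listener.id }}-pool\n")

    listener = {"id": "lb1-listener1", "vip": "10.0.0.5", "port": 80}

    # Option A: render in the controller/driver, ship haproxy.cfg text.
    wire_payload_a = HAPROXY_TEMPLATE.render(listener=listener)

    # Option B: ship the JSON equivalent, render on the amphora agent.
    wire_payload_b = json.dumps({"listener": listener})
    # ...and on the amp side:
    # HAPROXY_TEMPLATE.render(listener=json.loads(wire_payload_b)["listener"])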


  -  With building an haproxy centric controller we don’t really
 know which things should be controller/which thing should be driver. So my
  shortcut is not to build a driver at all ☺

So, I've become more convinced that having a driver layer there is going to
be important if we want to support 3rd party vendors creating their own
amphorae at all (which I think we do). It's also going to be important if
we want to be able to support other versions of open-source amphorae (or
experimental versions prior to pushing out to a wider user-base, etc.)

Also, I think: Making ourselves use a driver here also helps keep
interfaces clean. This helps us avoid spaghetti code and makes things more
maintainable in the long run.

  -  More flexibility increases complexity and makes it
 confusing for people to develop components. Should this concern go into the
 

Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-04 Thread Doug Wiegley
Hi all,

As the resident non-octavia-VM person, here are my two pennies.

“All problems in computer science can be solved by another level of indirection”

That’s all the driver layer is.


 1.   We will only support one driver per controller, e.g. if you upgrade 
 a driver you deploy a new controller with the new driver and either make him 
 take over existing VMs (minor change) or spin  up new ones (major change) but 
 keep the “old” controller in place until it doesn’t serve any VMs any longer

Err, what?


 3.   I am afraid that half baked drivers will break the controller and I 
 feel it’s easier to shoot VMs with half baked renderers  than the controllers.

You think the controller can’t just try: wrap the driver calls?  Further, why 
are we designing assuming that fundamental components will be so awful that 
they never deserve to exist?


 4.   The main advantage by using an Octavia format to talk to VMs is that 
 we can mix and match VMs with different properties (e.g. nginx, haproxy) on 
 the same controller because the implementation detail (which file to render) 
 is hidden

This advantage exists with drivers, you just negated it with point #1, and then 
use that as an advantage for not having drivers?


 5.   The major difference in The API between Stephen and me would be that 
 I would send json files which get rendered on the VM into a haproxy file 
 whereas he would send an haproxy file. We still need to develop an interface 
 on the VM to report stats and health in Octavia format. It is conceivable 
 with Stephen’s design that drivers would exist which would translate stats 
 and health from a proprietary format into the Octavia one. I am not sure how 
 we would get the proprietary VMs to emit the UDP health packets… In any case 
 a lot of logic could end up in a driver – and fanning that processing out to 
 the VMs might allow for less controllers.

That’s the driver author’s problem, and I can think of 3 ways to do the UDP 
heartbeat outside the VM just off the top of my head.

In the driver scheme, you can write a pass-through driver that sends json to 
the VM (i.e. effectively your proposal). In your proposal, with the “custom 
controllers”, the controller becomes the driver.  Are you telling me that you 
really see no commonality in the controller, to the point where it’s worth 
writing a new one for every backend implementation that might be used?

Even if we sent json all the way to the backend vm, I’d still want to see a 
driver interface in the middle, to help separate concerns over time.
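
(A sketch of that indirection, with both ends of the spectrum behind the same
interface; the class and method names, the render_haproxy_cfg() helper and the
amphora.put() call are all made up for illustration:)

    import json

    def render_haproxy_cfg(listener_dict):
        # Hypothetical helper; real rendering omitted.
        return "# haproxy.cfg for %s\n" % listener_dict["id"]

    class AmphoraDriverBase(object):
        def update_listener(self, amphora, listener_dict):
            raise NotImplementedError

    class HaproxyRenderingDriver(AmphoraDriverBase):
        """Renders haproxy.cfg controller-side and ships the text."""
        def update_listener(self, amphora, listener_dict):
            cfg_text = render_haproxy_cfg(listener_dict)
            amphora.put("/listener/haproxy", cfg_text)   # hypothetical amp API

    class PassThroughJsonDriver(AmphoraDriverBase):
        """Ships JSON and lets the amp agent render it (effectively the
        driver-less proposal, expressed as a driver)."""
        def update_listener(self, amphora, listener_dict):
            amphora.put("/listener", json.dumps(listener_dict))

Either way the controller only ever calls update_listener(), which is the
separation of concerns being argued for here.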

Thanks,
doug



From: Eichberger, German 
german.eichber...@hp.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, September 4, 2014 at 10:13 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Octavia] Question about where to render haproxy 
configurations

Hi,

Stephen visited us today (the joy of spending some days in Seattle☺) and we 
discussed that  further (and sorry for using VM – not sure what won):

1.   We will only support one driver per controller, e.g. if you upgrade a 
driver you deploy a new controller with the new driver and either make him take 
over existing VMs (minor change) or spin  up new ones (major change) but keep 
the “old” controller in place until it doesn’t serve any VMs any longer

2.   If we render configuration files on the VM we only support one upgrade 
model (replacing the VM) which might simplify development as opposed to the 
driver model where we need to write code to push out configuration changes to 
all VMs for minor changes + write code to failover VMs for major changes

3.   I am afraid that half baked drivers will break the controller and I 
feel it’s easier to shoot VMs with half baked renderers  than the controllers.

4.   The main advantage by using an Octavia format to talk to VMs is that 
we can mix and match VMs with different properties (e.g. nginx, haproxy) on the 
same controller because the implementation detail (which file to render) is 
hidden

5.   The major difference in The API between Stephen and me would be that I 
would send json files which get rendered on the VM into a haproxy file whereas 
he would send an haproxy file. We still need to develop an interface on the VM 
to report stats and health in Octavia format. It is conceivable with Stephen’s 
design that drivers would exist which would translate stats and health from a 
proprietary format into the Octavia one. I am not sure how we would get the 
proprietary VMs to emit the UDP health packets… In any case a lot of logic 
could end up in a driver – and fanning that processing out to the VMs might 
allow for less controllers.

Overall, if I don’t like to take advantage