Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-10 Thread Steven Hardy
On Wed, Jul 09, 2014 at 10:33:26PM +0000, Randall Burt wrote:
 On Jul 9, 2014, at 4:38 PM, Zane Bitter zbit...@redhat.com
  wrote:
  On 08/07/14 17:17, Steven Hardy wrote:
  
  Regarding forcing deployers to make a one-time decision, I have a question
  re cost (money and performance) of the Swift approach vs just hitting the
  Heat API
  
  - If folks use the Swift resource and it stores data associated with the
signal in Swift, does that incur cost to the user in a public cloud
scenario?
  
  Good question. I believe the way WaitConditions work in AWS is that it sets 
  up a pre-signed URL in a bucket owned by CloudFormation. If we went with 
  that approach we would probably want some sort of quota, I imagine.
 
 Just to clarify, you suggest that the swift-based signal mechanism use 
 containers that Heat owns rather than ones owned by the user?

I guess it's probably best to just make this configurable, so the swift
data can be either owned by the stack owner (same as all other resources,
probably the default), or put in a container owned by the heat service
user.

  The other approach is to set up a new container, owned by the user, every 
  time. In that case, a provider selecting this implementation would need to 
  make it clear to customers if they would be billed for a WaitCondition 
  resource. I'd prefer to avoid this scenario though (regardless of the 
  plug-point).
 
 Why? If we won't let the user choose, then why wouldn't we let the provider 
 make this choice? I don't think it's wise of us to make decisions based on 
 what a theoretical operator may theoretically do. If the same theoretical 
 provider were to also charge users to create a trust, would we then be 
 concerned about that implementation as well? What if said provider decides 
 to charge the user per resource in a stack regardless of what they are? Having 
 Heat own the container(s) as suggested above doesn't preclude that operator 
 from charging the stack owner for those either.
 
 While I agree that these examples are totally silly, I'm just trying to 
 illustrate that we shouldn't deny an operator an option so long as it's 
 understood what that option entails from a technical/usage perspective.

I don't really get why this question is totally silly - I made a genuine
request for education based on near-zero knowledge of public cloud provider
pricing models.

The reason for the question was simply that we're discussing the two
alternate WaitCondition implementations which may, possibly, have very
different implications from a cost perspective.

I'm not saying we shouldn't let the provider make that decision on behalf
of the user, just pointing out that if their particular use-case demands
sending a gadzillion signals, they might like the option to choose the
lightweight API signal approach.

  - What sort of overhead are we adding, with the signals going to swift,
then in the current implementation being copied back into the heat DB[1]?
  
  I wasn't aware we were doing that, and I'm a bit unsure about it myself. I 
  don't think it's a big overhead, though.
 
 In the current implementation, I think it is minor as well, just a few extra 
 Swift API calls which should be pretty minor overhead considering the stack 
 as a whole. Plus, it minimizes the above concern around potentially costly 
 user containers in that it gets rid of them as soon as it's done.
 
  It seems to me at the moment that the swift notification method is good if
  you have significant data associated with the signals, but there are
  advantages to the simple API signal approach I've been working on when you
  just need a simple one shot low overhead way to get data back from an
  instance.
  
  FWIW, the reason I revived these patches was I found that
  SoftwareDeployments did not meet my needs for a really simple signalling
  mechanism when writing tempest tests:
  
  https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml
  
  These tests currently use the AWS WaitCondition resources, and I wanted a
  native alternative, without the complexity of using SoftwareDeployments
  (which also won't work with minimal cirros images without some pretty hacky
  workarounds[2])
  
  Yep, I am all for this. I think that Swift is the best way when we have it, 
  but not every cloud has Swift (and the latest rumours from DefCore are that 
  it's likely to stay that way), so we need operators (& developers!) to be 
  able to plug in an alternative implementation.
 
 Very true, but not every cloud has trusts either. Many may have trusts, but 
 they don't employ the EC2 extensions to Keystone and therefore can't use the 
 native signals either (as I understand them anyway). Point being that 
 either way, we already impose requirements on a cloud you want to run Heat 
 against. I think it is in our interest to make the effort to provide choices 
 with obvious trade-offs.

The whole point of the resources I've 

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-10 Thread Zane Bitter

On 10/07/14 05:34, Steven Hardy wrote:

 The other approach is to set up a new container, owned by the user, every 
time. In that case, a provider selecting this implementation would need to make it 
clear to customers if they would be billed for a WaitCondition resource. I'd prefer 
to avoid this scenario though (regardless of the plug-point).


Why? If we won't let the user choose, then why wouldn't we let the provider 
make this choice? I don't think it's wise of us to make decisions based on what a 
theoretical operator may theoretically do. If the same theoretical provider were 
to also charge users to create a trust, would we then be concerned about that 
implementation as well? What if said provider decides to charge the user per 
resource in a stack regardless of what they are? Having Heat own the container(s) 
as suggested above doesn't preclude that operator from charging the stack owner 
for those either.

While I agree that these examples are totally silly, I'm just trying to 
illustrate that we shouldn't deny an operator an option so long as it's understood 
what that option entails from a technical/usage perspective.

I don't really get why this question is totally silly - I made a genuine
request for education based on near-zero knowledge of public cloud provider
pricing models.


The way I read it, Randall was not saying that the question was silly; he 
was acknowledging that his own examples (like charging per-resource) 
were contrived (to the point of absurdity) to illustrate his argument.


- ZB



Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-10 Thread Randall Burt
On Jul 10, 2014, at 9:21 AM, Zane Bitter zbit...@redhat.com
 wrote:

 On 10/07/14 05:34, Steven Hardy wrote:
  The other approach is to set up a new container, owned by the user, 
  every time. In that case, a provider selecting this implementation 
  would need to make it clear to customers if they would be billed for a 
  WaitCondition resource. I'd prefer to avoid this scenario though 
  (regardless of the plug-point).
 
 Why? If we won't let the user choose, then why wouldn't we let the 
 provider make this choice? I don't think it's wise of us to make decisions 
 based on what a theoretical operator may theoretically do. If the same 
 theoretical provider were to also charge users to create a trust, would we 
 then be concerned about that implementation as well? What if said provider 
 decides to charge the user per resource in a stack regardless of what they 
 are? Having Heat own the container(s) as suggested above doesn't preclude 
 that operator from charging the stack owner for those either.
 
 While I agree that these examples are totally silly, I'm just trying to 
 illustrate that we shouldn't deny an operator an option so long as it's 
 understood what that option entails from a technical/usage perspective.
 I don't really get why this question is totally silly - I made a genuine
 request for education based on near-zero knowledge of public cloud provider
 pricing models.
 
 The way I read it, Randall was not saying that the question was silly; he was 
 acknowledging that his own examples (like charging per-resource) were 
 contrived (to the point of absurdity) to illustrate his argument.

Yes. I didn't mean to imply the questions or any of the responses were silly, 
only my contrived examples.




Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Zane Bitter

On 08/07/14 17:13, Angus Salkeld wrote:


On 08/07/14 09:14, Zane Bitter wrote:

I see that the new client plugins are loaded using stevedore, which is
great and IMO absolutely the right tool for that job. Thanks to Angus &
Steve B for implementing it.

Now that we have done that work, I think there are more places we can
take advantage of it too - for example, we currently have competing
native wait condition resource types being implemented by Jason[1] and
Steve H[2] respectively, and IMHO that is a mistake. We should have
*one* native wait condition resource type, one AWS-compatible one,
software deployments and any custom plugins that require signalling; and
they should all use a common SignalResponder implementation that would
call an API that is pluggable using stevedore. (In summary, what we're


what's wrong with using the environment for that? Just have two resources
and you do something like this:
https://github.com/openstack/heat/blob/master/etc/heat/environment.d/default.yaml#L7


It doesn't cover other things that need signals, like software 
deployments (third-party plugin authors are also on their own). We only 
want n implementations not n*(number of resources that use signals) 
implementations.
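
To make that concrete, something like the following rough sketch is what I
have in mind - the namespace, option name and wiring are all invented for
illustration, not existing Heat code:

    # Hypothetical sketch only - illustrative names, not actual Heat code.
    from oslo.config import cfg
    from stevedore import driver

    cfg.CONF.register_opts([
        cfg.StrOpt('signal_transport', default='heat-api',
                   help='Backend used to deliver wait condition signals.'),
    ])

    def get_signal_backend():
        # DriverManager picks exactly one named implementation from a
        # well-known entry-point namespace - the "small number of well
        # known plug points" case that stevedore handles well.
        mgr = driver.DriverManager(
            namespace='heat.signal_transports',  # assumed namespace
            name=cfg.CONF.signal_transport,
            invoke_on_load=True)
        return mgr.driver

The SignalResponder implementation (and hence the native, AWS and software
deployment resources) would just call whichever backend that returns, so the
choice stays invisible to template authors.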



trying to make configurable is an implementation that should be
invisible to the user, not an interface that is visible to the user, and
therefore the correct unit of abstraction is an API, not a resource.)



Totally depends if we want this to be operator configurable (config file or 
plugin)
or end user configurable (use their environment to choose the implementation).



I just noticed, however, that there is an already-partially-implemented
blueprint[3] and further pending patches[4] to use stevedore for *all*
types of plugins - particularly resource plugins[5] - in Heat. I feel
very strongly that stevedore is _not_ a good fit for all of those use
cases. (Disclaimer: obviously I _would_ think that, since I implemented
the current system instead of using stevedore for precisely that reason.)


haha.



The stated benefit of switching to stevedore is that it solves issues
like https://launchpad.net/bugs/1292655 that are caused by the current
convoluted layout of /contrib. I think the layout stems at least in part


I think another great reason is consistency with how all other plugins in 
openstack are written (stevedore).


Sure, consistency is nice, sometimes even at the expense of being not 
quite the right tool for the job. But there are limits to that trade-off.



Also I *really* don't think we should optimize for our contrib plugins
but for:
1) our built in plugins
2) out of tree plugins


I completely agree, which is why I was surprised by this change. It 
seems to be deprecating a system that is working well for built-in and 
out-of-tree plugins in order to make minor improvements to how we handle 
contrib.



from a misunderstanding of how the current plugin_manager works. The
point of the plugin_manager is that each plugin directory does *not*
have to be a Python package - it can be any directory. Modules in the
directory then appear in the package heat.engine.plugins once imported.
So there is no need to do what we are currently doing, creating a
resources package, and then a parent package that contains the tests
package as well, and then in the tests doing:

from ..resources import docker_container  ## noqa

All we really need to do is throw the resources in any old directory,
add that directory to the plugin_dirs list, stick the tests in any old
package, and from the tests do

from heat.engine.plugins import docker_container

The main reason we haven't done this seems to be to avoid having to list
the various contrib plugin dirs separately. Stevedore solves this by
forcing us to list not only each directory but each class in each module
in each directory separately. The tricky part of fixing the current
layout is ensuring the contrib plugin directories get added to the
plugin_dirs list during the unit tests and only during the unit tests.
However, I'm confident that could be fixed with no more difficulty than
the stevedore changes and with far less disruption to existing operators
using custom plugins.

Stevedore is ideal for configuring an implementation for a small number
of well known plug points. It does not appear to be ideal for managing
an application like Heat that comprises a vast collection of
implementations of the same interface, each bound to its own plug point.


I wouldn't call our resources vast.


I count 73 in your patch, not including contrib and assuming you didn't 
miss any ;). It seems clear to me that we're well past the point of 
what the Extensions API was designed for. When everything is an 
extension you need different tools to manage it. Quantity has a quality 
all its own ;)



Really, I think it works great.


As discussed on IRC yesterday, we could potentially make the plugin 
the existing 

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Zane Bitter

I think my reply to Angus covers most of your points, except this one:

On 08/07/14 17:39, Steve Baker wrote:

On 09/07/14 07:08, Zane Bitter wrote:

Constraints, I feel, are very similar to resources in this respect. I
am less concerned about template formats, since there are so few of
them... although it would be really nice to be able to install these
as subpackages too, and using stevedore appears to eliminate that as
an option :(


To me constraints are more like client plugins. They make API calls and
need specific knowledge of client exceptions. That is why I have been
moving them into the client plugin modules. It would be preferable if
they used the same plugin loading mechanism too.


OK, cool, I'm persuaded by that. So they are like client plugin plugins? 
Should we consider doing something like we do with the intrinsic functions, 
where we just tie them to the client plugin?


cheers,
Zane.



Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Randall Burt
On Jul 9, 2014, at 3:15 PM, Zane Bitter zbit...@redhat.com
 wrote:

 On 08/07/14 17:13, Angus Salkeld wrote:
 
 On 08/07/14 09:14, Zane Bitter wrote:
 I see that the new client plugins are loaded using stevedore, which is
 great and IMO absolutely the right tool for that job. Thanks to Angus &
 Steve B for implementing it.
 
 Now that we have done that work, I think there are more places we can
 take advantage of it too - for example, we currently have competing
 native wait condition resource types being implemented by Jason[1] and
 Steve H[2] respectively, and IMHO that is a mistake. We should have
 *one* native wait condition resource type, one AWS-compatible one,
 software deployments and any custom plugins that require signalling; and
 they should all use a common SignalResponder implementation that would
 call an API that is pluggable using stevedore. (In summary, what we're
 
 what's wrong with using the environment for that? Just have two resources
 and you do something like this:
 https://github.com/openstack/heat/blob/master/etc/heat/environment.d/default.yaml#L7
 
 It doesn't cover other things that need signals, like software deployments 
 (third-party plugin authors are also on their own). We only want n 
 implementations not n*(number of resources that use signals) implementations.
 
 trying to make configurable is an implementation that should be
 invisible to the user, not an interface that is visible to the user, and
 therefore the correct unit of abstraction is an API, not a resource.)
 
 
 Totally depends if we want this to be operator configurable (config file or 
 plugin)
 or end user configurable (use their environment to choose the 
 implementation).
 
 
 I just noticed, however, that there is an already-partially-implemented
 blueprint[3] and further pending patches[4] to use stevedore for *all*
 types of plugins - particularly resource plugins[5] - in Heat. I feel
 very strongly that stevedore is _not_ a good fit for all of those use
 cases. (Disclaimer: obviously I _would_ think that, since I implemented
 the current system instead of using stevedore for precisely that reason.)
 
 haha.
 
 
 The stated benefit of switching to stevedore is that it solves issues
 like https://launchpad.net/bugs/1292655 that are caused by the current
 convoluted layout of /contrib. I think the layout stems at least in part
 
 I think another great reason is consistency with how all other plugins in 
 openstack are written (stevedore).
 
 Sure, consistency is nice, sometimes even at the expense of being not quite 
 the right tool for the job. But there are limits to that trade-off.
 
 Also I *really* don't think we should optimize for our contrib plugins
 but for:
 1) our built in plugins
 2) out of tree plugins
 
 I completely agree, which is why I was surprised by this change. It seems to 
 be deprecating a system that is working well for built-in and out-of-tree 
 plugins in order to make minor improvements to how we handle contrib.

FWIW, when it comes to deploying Heat with non-built-in plugins, there's no substantive 
difference in the experience between contrib and out-of-tree plugins, so 
neither system is more or less optimized for either. However, with the current 
system, there's no easy way to get rid of the built-in ones you don't want.

 
 from a misunderstanding of how the current plugin_manager works. The
 point of the plugin_manager is that each plugin directory does *not*
 have to be a Python package - it can be any directory. Modules in the
 directory then appear in the package heat.engine.plugins once imported.
 So there is no need to do what we are currently doing, creating a
 resources package, and then a parent package that contains the tests
 package as well, and then in the tests doing:
 
from ..resources import docker_container  ## noqa
 
 All we really need to do is throw the resources in any old directory,
 add that directory to the plugin_dirs list, stick the tests in any old
 package, and from the tests do
 
from heat.engine.plugins import docker_container
 
 The main reason we haven't done this seems to be to avoid having to list
 the various contrib plugin dirs separately. Stevedore solves this by
 forcing us to list not only each directory but each class in each module
 in each directory separately. The tricky part of fixing the current
 layout is ensuring the contrib plugin directories get added to the
 plugin_dirs list during the unit tests and only during the unit tests.
 However, I'm confident that could be fixed with no more difficulty than
 the stevedore changes and with far less disruption to existing operators
 using custom plugins.
 
 Stevedore is ideal for configuring an implementation for a small number
 of well known plug points. It does not appear to be ideal for managing
 an application like Heat that comprises a vast collection of
 implementations of the same interface, each bound to its own plug point.
 
 

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Angus Salkeld

On 09/07/14 10:17, Zane Bitter wrote:
 On 08/07/14 17:13, Angus Salkeld wrote:

 On 08/07/14 09:14, Zane Bitter wrote:
 I see that the new client plugins are loaded using stevedore, which is
 great and IMO absolutely the right tool for that job. Thanks to Angus &
 Steve B for implementing it.

 Now that we have done that work, I think there are more places we can
 take advantage of it too - for example, we currently have competing
 native wait condition resource types being implemented by Jason[1] and
 Steve H[2] respectively, and IMHO that is a mistake. We should have
 *one* native wait condition resource type, one AWS-compatible one,
 software deployments and any custom plugins that require signalling; and
 they should all use a common SignalResponder implementation that would
 call an API that is pluggable using stevedore. (In summary, what we're

 what's wrong with using the environment for that? Just have two resources
 and you do something like this:
 https://github.com/openstack/heat/blob/master/etc/heat/environment.d/default.yaml#L7
 
 It doesn't cover other things that need signals, like software 
 deployments (third-party plugin authors are also on their own). We only 
 want n implementations not n*(number of resources that use signals) 
 implementations.
 
 trying to make configurable is an implementation that should be
 invisible to the user, not an interface that is visible to the user, and
 therefore the correct unit of abstraction is an API, not a resource.)


 Totally depends if we want this to be operator configurable (config file or 
 plugin)
 or end user configurable (use their environment to choose the 
 implementation).


 I just noticed, however, that there is an already-partially-implemented
 blueprint[3] and further pending patches[4] to use stevedore for *all*
 types of plugins - particularly resource plugins[5] - in Heat. I feel
 very strongly that stevedore is _not_ a good fit for all of those use
 cases. (Disclaimer: obviously I _would_ think that, since I implemented
 the current system instead of using stevedore for precisely that reason.)

 haha.


 The stated benefit of switching to stevedore is that it solves issues
 like https://launchpad.net/bugs/1292655 that are caused by the current
 convoluted layout of /contrib. I think the layout stems at least in part

 I think another great reason is consistency with how all other plugins in 
 openstack are written (stevedore).
 
 Sure, consistency is nice, sometimes even at the expense of being not 
 quite the right tool for the job. But there are limits to that trade-off.
 
 Also I *really* don't think we should optimize for our contrib plugins
 but for:
 1) our built in plugins
 2) out of tree plugins
 
 I completely agree, which is why I was surprised by this change. It 
 seems to be deprecating a system that is working well for built-in and 
 out-of-tree plugins in order to make minor improvements to how we handle 
 contrib.
 
 from a misunderstanding of how the current plugin_manager works. The
 point of the plugin_manager is that each plugin directory does *not*
 have to be a Python package - it can be any directory. Modules in the
 directory then appear in the package heat.engine.plugins once imported.
 So there is no need to do what we are currently doing, creating a
 resources package, and then a parent package that contains the tests
 package as well, and then in the tests doing:

 from ..resources import docker_container  ## noqa

 All we really need to do is throw the resources in any old directory,
 add that directory to the plugin_dirs list, stick the tests in any old
 package, and from the tests do

 from heat.engine.plugins import docker_container

 The main reason we haven't done this seems to be to avoid having to list
 the various contrib plugin dirs separately. Stevedore solves this by
 forcing us to list not only each directory but each class in each module
 in each directory separately. The tricky part of fixing the current
 layout is ensuring the contrib plugin directories get added to the
 plugin_dirs list during the unit tests and only during the unit tests.
 However, I'm confident that could be fixed with no more difficulty than
 the stevedore changes and with far less disruption to existing operators
 using custom plugins.

 Stevedore is ideal for configuring an implementation for a small number
 of well known plug points. It does not appear to be ideal for managing
 an application like Heat that comprises a vast collection of
 implementations of the same interface, each bound to its own plug point.

 I wouldn't call our resources vast.
 
 I count 73 in your patch, not including contrib and assuming you didn't 
 miss any ;). It seems clear to me that we're well past the point of 
 what the Extensions API was designed for. When everything is an 
 extension you need different tools to manage it. 

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Angus Salkeld

On 09/07/14 11:03, Randall Burt wrote:
 On Jul 9, 2014, at 3:15 PM, Zane Bitter zbit...@redhat.com
  wrote:
 
 On 08/07/14 17:13, Angus Salkeld wrote:

 On 08/07/14 09:14, Zane Bitter wrote:
 I see that the new client plugins are loaded using stevedore, which is
 great and IMO absolutely the right tool for that job. Thanks to Angus &
 Steve B for implementing it.

 Now that we have done that work, I think there are more places we can
 take advantage of it too - for example, we currently have competing
 native wait condition resource types being implemented by Jason[1] and
 Steve H[2] respectively, and IMHO that is a mistake. We should have
 *one* native wait condition resource type, one AWS-compatible one,
 software deployments and any custom plugins that require signalling; and
 they should all use a common SignalResponder implementation that would
 call an API that is pluggable using stevedore. (In summary, what we're

 what's wrong with using the environment for that? Just have two resources
 and you do something like this:
 https://github.com/openstack/heat/blob/master/etc/heat/environment.d/default.yaml#L7

 It doesn't cover other things that need signals, like software deployments 
 (third-party plugin authors are also on their own). We only want n 
 implementations not n*(number of resources that use signals) implementations.

 trying to make configurable is an implementation that should be
 invisible to the user, not an interface that is visible to the user, and
 therefore the correct unit of abstraction is an API, not a resource.)


 Totally depends if we want this to be operator configurable (config file or 
 plugin)
 or end user configurable (use their environment to choose the 
 implementation).


 I just noticed, however, that there is an already-partially-implemented
 blueprint[3] and further pending patches[4] to use stevedore for *all*
 types of plugins - particularly resource plugins[5] - in Heat. I feel
 very strongly that stevedore is _not_ a good fit for all of those use
 cases. (Disclaimer: obviously I _would_ think that, since I implemented
 the current system instead of using stevedore for precisely that reason.)

 haha.


 The stated benefit of switching to stevedore is that it solves issues
 like https://launchpad.net/bugs/1292655 that are caused by the current
 convoluted layout of /contrib. I think the layout stems at least in part

 I think another great reason is consistency with how all other plugins in 
 openstack are written (stevedore).

 Sure, consistency is nice, sometimes even at the expense of being not quite 
 the right tool for the job. But there are limits to that trade-off.

 Also I *really* don't think we should optimize for our contrib plugins
 but for:
 1) our built in plugins
 2) out of tree plugins

 I completely agree, which is why I was surprised by this change. It seems to 
 be deprecating a system that is working well for built-in and out-of-tree 
 plugins in order to make minor improvements to how we handle contrib.
 
 FWIW, when it comes to deploying Heat with non-built-in plugins, there's no 
 substantive difference in the experience between contrib and out-of-tree 
 plugins, so neither system is more or less optimized for either. However, 
 with the current system, there's no easy way to get rid of the built-in 
 ones you don't want.

drop a file in /etc/environment.d/ that has this in:

 OS::one_I_dont_want:

You are saying that the resource should have no implementation.

This disables the resource here.
https://github.com/openstack/heat/blob/master/heat/engine/environment.py#L186-L202
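
i.e. the file just maps the resource type to null under resource_registry, so
after parsing, the registry ends up with something roughly like this (sketch
only - the type name is made up and the real file is YAML):

    # What Heat effectively gets after parsing such an environment file:
    # the type is mapped to None ("no implementation"), which removes it
    # from what users can see.
    env = {
        'resource_registry': {
            'OS::one_I_dont_want': None,
        }
    }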

- -A

 

 from a misunderstanding of how the current plugin_manager works. The
 point of the plugin_manager is that each plugin directory does *not*
 have to be a Python package - it can be any directory. Modules in the
 directory then appear in the package heat.engine.plugins once imported.
 So there is no need to do what we are currently doing, creating a
 resources package, and then a parent package that contains the tests
 package as well, and then in the tests doing:

from ..resources import docker_container  ## noqa

 All we really need to do is throw the resources in any old directory,
 add that directory to the plugin_dirs list, stick the tests in any old
 package, and from the tests do

from heat.engine.plugins import docker_container

 The main reason we haven't done this seems to be to avoid having to list
 the various contrib plugin dirs separately. Stevedore solves this by
 forcing us to list not only each directory but each class in each module
 in each directory separately. The tricky part of fixing the current
 layout is ensuring the contrib plugin directories get added to the
 plugin_dirs list during the unit tests and only during the unit tests.
 However, I'm confident that could be fixed with no more difficulty than
 the stevedore changes and with far less 

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Zane Bitter

On 08/07/14 17:17, Steven Hardy wrote:

On Tue, Jul 08, 2014 at 03:08:32PM -0400, Zane Bitter wrote:

I see that the new client plugins are loaded using stevedore, which is great
and IMO absolutely the right tool for that job. Thanks to Angus & Steve B
for implementing it.

Now that we have done that work, I think there are more places we can take
advantage of it too - for example, we currently have competing native wait
condition resource types being implemented by Jason[1] and Steve H[2]
respectively, and IMHO that is a mistake. We should have *one* native wait
condition resource type, one AWS-compatible one, software deployments and
any custom plugins that require signalling; and they should all use a common
SignalResponder implementation that would call an API that is pluggable
using stevedore. (In summary, what we're trying to make configurable is an
implementation that should be invisible to the user, not an interface that
is visible to the user, and therefore the correct unit of abstraction is an
API, not a resource.)


To clarify, they're not competing as such - Jason and I have chatted about
the two approaches and have been working to maintain a common interface,
such that they would be easily substituted based on deployer or user
preferences.


Yes, poor choice of words on my part :)


My initial assumption was that this substitution would happen via resource
mappings in the global environment, but I now see that you are proposing
the configurable part to be at a lower level, substituting the transport
behind a common resource implementation.


Yeah, exactly.


Regarding forcing deployers to make a one-time decision, I have a question
re cost (money and performance) of the Swift approach vs just hitting the
Heat API

- If folks use the Swift resource and it stores data associated with the
   signal in Swift, does that incur cost to the user in a public cloud
   scenario?


Good question. I believe the way WaitConditions work in AWS is that it 
sets up a pre-signed URL in a bucket owned by CloudFormation. If we went 
with that approach we would probably want some sort of quota, I imagine.
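
(For anyone not familiar with the mechanics, a pre-signed URL in Swift is just
a TempURL: an HMAC over the method, expiry time and object path, computed with
a secret the account owner holds. Rough sketch, all values invented:)

    # Standard Swift TempURL signing - sketch only, not Heat code.
    import hmac
    from hashlib import sha1
    from time import time

    def presigned_signal_url(key, account_path, container, obj, ttl=3600):
        # account_path is e.g. '/v1/AUTH_<tenant>'; key is the account's
        # X-Account-Meta-Temp-URL-Key secret.
        path = '%s/%s/%s' % (account_path, container, obj)
        expires = int(time()) + ttl
        body = 'PUT\n%d\n%s' % (expires, path)
        sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
        return '%s?temp_url_sig=%s&temp_url_expires=%d' % (path, sig, expires)

The instance can then PUT its signal data to that URL without holding any
credentials, which is why the interesting question is who owns the container
it points at.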


The other approach is to set up a new container, owned by the user, 
every time. In that case, a provider selecting this implementation would 
need to make it clear to customers if they would be billed for a 
WaitCondition resource. I'd prefer to avoid this scenario though 
(regardless of the plug-point).



- What sort of overhead are we adding, with the signals going to swift,
   then in the current implementation being copied back into the heat DB[1]?


I wasn't aware we were doing that, and I'm a bit unsure about it myself. 
I don't think it's a big overhead, though.



It seems to me at the moment that the swift notification method is good if
you have significant data associated with the signals, but there are
advantages to the simple API signal approach I've been working on when you
just need a simple one shot low overhead way to get data back from an
instance.

FWIW, the reason I revived these patches was I found that
SoftwareDeployments did not meet my needs for a really simple signalling
mechanism when writing tempest tests:

https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml

These tests currently use the AWS WaitCondition resources, and I wanted a
native alternative, without the complexity of using SoftwareDeployments
(which also won't work with minimal cirros images without some pretty hacky
workarounds[2])


Yep, I am all for this. I think that Swift is the best way when we have 
it, but not every cloud has Swift (and the latest rumours from DefCore 
are that it's likely to stay that way), so we need operators (& 
developers!) to be able to plug in an alternative implementation.



I'm all for making things simple, avoiding duplication and confusion for
users, but I'd like to ensure that making this a one-time deployer level
decision definitely makes sense, vs giving users some choice over what
method is used.


Agree, this is an important question to ask. The downside to leaving the 
choice to the user is that it reduces interoperability between clouds. 
(In fact, it's unclear whether operators _would_ give users a choice, or 
just deploy one implementation anyway.) It's not insurmountable (thanks 
to environments), but it does add friction to the ecosystem so we have 
to weigh up the trade-offs.


cheers,
Zane.


[1] https://review.openstack.org/#/c/96947/
[2] https://review.openstack.org/#/c/91475/

Steve







Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Zane Bitter

On 09/07/14 17:10, Angus Salkeld wrote:

If we could make them separate Python packages within a single Git repo,
I would be +2 on that. I don't know if that's feasible with our current
tooling (though I guess it's not dissimilar to what you're doing with
the contrib stuff in this patch series?).


Another option is we could move away from contrib and have:
heat/resources/builtin/all native resources here
heat/resources/optional/{aws,contrib}

The ones under optional have their own setup.cfg and need to be packaged
(by the distro) separately.


I'm not familiar enough with Python packaging to know if that works, but 
I'm fine with it if it does ;) Given the choice I'd probably marginally 
prefer to move the plugins to a separate tree (i.e. keep them in the 
same repo but not have them under heat/) though.



We can default to resources being loaded but not enabled by default;
then each distro package can drop a file into /etc/environment.d/ that
enables its resources.


Sounds like a recipe for distro bugs.


What I was getting at was that we should separate the mechanism for loading the
plugins and what is enabled/visible to the user. And that logic should
really live in the resource registry in the environment.


+1


One of the main design goals of the current resource plugins was to move
the mapping from resource names to classes away from one central point
(where all of the modules were imported) and place the configuration
alongside the code it applies to. I am definitely not looking forward to
having to go look up a config file to find out what each resource is
every time I open the autoscaling module (and I do need to remind myself
_every_  time I open it), to say nothing of the constant merge conflicts
that we used to have to deal with when there was a central registry.


They are grouped by name, so you will only run into an issue when someone
else is working on the same area as you.



A central registry is also problematic for operators that modify it, who
will have a difficult, manual and potentially error-prone merge task to
perform on the config file every time they upgrade.


I don't see why an operator will be editing this, they should be using
the environment to disable plugins/rename things. You don't have to
touch this if you are adding your own plugin.



Constraints, I feel, are very similar to resources in this respect. I am
less concerned about template formats, since there are so few of them...
although it would be really nice to be able to install these as
subpackages too, and using stevedore appears to eliminate that as an
option :(

Do we want to move constraints to Hooks as well? Guessing yes, to make
it consistent.


I'm not too familiar with the constraints stuff, but I think yes - I 
doubt we want to be listing all of these in setup.cfg either. We can 
probably have one hook plugin for each client plugin to cover the 
built-in stuff, while still allowing out-of-tree plugins to join at the 
same hook point.


cheers,
Zane.



Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Randall Burt
On Jul 9, 2014, at 4:38 PM, Zane Bitter zbit...@redhat.com
 wrote:
 On 08/07/14 17:17, Steven Hardy wrote:
 
 Regarding forcing deployers to make a one-time decision, I have a question
 re cost (money and performance) of the Swift approach vs just hitting the
 Heat API
 
 - If folks use the Swift resource and it stores data associated with the
    signal in Swift, does that incur cost to the user in a public cloud
   scenario?
 
 Good question. I believe the way WaitConditions work in AWS is that it sets 
 up a pre-signed URL in a bucket owned by CloudFormation. If we went with that 
 approach we would probably want some sort of quota, I imagine.

Just to clarify, you suggest that the swift-based signal mechanism use 
containers that Heat owns rather than ones owned by the user?

 The other approach is to set up a new container, owned by the user, every 
 time. In that case, a provider selecting this implementation would need to 
 make it clear to customers if they would be billed for a WaitCondition 
 resource. I'd prefer to avoid this scenario though (regardless of the 
 plug-point).

Why? If we won't let the user choose, then why wouldn't we let the provider 
make this choice? I don't think it's wise of us to make decisions based on what 
a theoretical operator may theoretically do. If the same theoretical provider 
were to also charge users to create a trust, would we then be concerned about 
that implementation as well? What if said provider decides to charge the user per 
resource in a stack regardless of what they are? Having Heat own the 
container(s) as suggested above doesn't preclude that operator from charging 
the stack owner for those either.

While I agree that these examples are totally silly, I'm just trying to 
illustrate that we shouldn't deny an operator an option so long as it's 
understood what that option entails from a technical/usage perspective.

 - What sort of overhead are we adding, with the signals going to swift,
   then in the current implementation being copied back into the heat DB[1]?
 
 I wasn't aware we were doing that, and I'm a bit unsure about it myself. I 
 don't think it's a big overhead, though.

In the current implementation, I think it is minor as well, just a few extra 
Swift API calls which should be pretty minor overhead considering the stack as 
a whole. Plus, it minimizes the above concern around potentially costly user 
containers in that it gets rid of them as soon as it's done.

 It seems to me at the moment that the swift notification method is good if
 you have significant data associated with the signals, but there are
 advantages to the simple API signal approach I've been working on when you
 just need a simple one shot low overhead way to get data back from an
 instance.
 
 FWIW, the reason I revived these patches was I found that
 SoftwareDeployments did not meet my needs for a really simple signalling
 mechanism when writing tempest tests:
 
 https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml
 
 These tests currently use the AWS WaitCondition resources, and I wanted a
 native alternative, without the complexity of using SoftwareDeployments
 (which also won't work with minimal cirros images without some pretty hacky
 workarounds[2])
 
 Yep, I am all for this. I think that Swift is the best way when we have it, 
 but not every cloud has Swift (and the latest rumours from DefCore are that 
 it's likely to stay that way), so we need operators (& developers!) to be 
 able to plug in an alternative implementation.

Very true, but not every cloud has trusts either. Many may have trusts, but 
they don't employ the EC2 extensions to Keystone and therefore can't use the 
native signals either (as I understand them anyway). Point being that either 
way, we already impose requirements on a cloud you want to run Heat against. I 
think it is in our interest to make the effort to provide choices with obvious 
trade-offs.

 I'm all for making things simple, avoiding duplication and confusion for
 users, but I'd like to ensure that making this a one-time deployer level
 decision definitely makes sense, vs giving users some choice over what
 method is used.
 
 Agree, this is an important question to ask. The downside to leaving the 
 choice to the user is that it reduces interoperability between clouds. (In 
 fact, it's unclear whether operators _would_ give users a choice, or just 
 deploy one implementation anyway.) It's not insurmountable (thanks to 
 environments), but it does add friction to the ecosystem so we have to weigh 
 up the trade-offs.

Agreed that this is an important concern, but one of mine is that no other 
resource has selectable back-ends. The way an operator controls this today is 
via the global environment where they have the option to disable one or more of 
these resources or even alias one to the other. Seems a large change for 
something an operator already has the ability to deal with. 

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Clint Byrum
Excerpts from Randall Burt's message of 2014-07-09 15:33:26 -0700:
 On Jul 9, 2014, at 4:38 PM, Zane Bitter zbit...@redhat.com
  wrote:
  On 08/07/14 17:17, Steven Hardy wrote:
  
  Regarding forcing deployers to make a one-time decision, I have a question
  re cost (money and performance) of the Swift approach vs just hitting the
  Heat API
  
  - If folks use the Swift resource and it stores data associated with the
    signal in Swift, does that incur cost to the user in a public cloud
scenario?
  
  Good question. I believe the way WaitConditions work in AWS is that it sets 
  up a pre-signed URL in a bucket owned by CloudFormation. If we went with 
  that approach we would probably want some sort of quota, I imagine.
 
 Just to clarify, you suggest that the swift-based signal mechanism use 
 containers that Heat owns rather than ones owned by the user?
 

+1, don't hide it.

  The other approach is to set up a new container, owned by the user, every 
  time. In that case, a provider selecting this implementation would need to 
  make it clear to customers if they would be billed for a WaitCondition 
  resource. I'd prefer to avoid this scenario though (regardless of the 
  plug-point).
 
 Why? If we won't let the user choose, then why wouldn't we let the provider 
 make this choice? I don't think it's wise of us to make decisions based on 
 what a theoretical operator may theoretically do. If the same theoretical 
 provider were to also charge users to create a trust, would we then be 
 concerned about that implementation as well? What if said provider decides 
 to charge the user per resource in a stack regardless of what they are? Having 
 Heat own the container(s) as suggested above doesn't preclude that operator 
 from charging the stack owner for those either.


This is a nice use case for preview. A user should be able to preview a
stack and know what will be consumed. Wait conditions will show a swift
container if preview is worth anything.



Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Zane Bitter

On 09/07/14 18:33, Randall Burt wrote:

On Jul 9, 2014, at 4:38 PM, Zane Bitter zbit...@redhat.com
  wrote:

On 08/07/14 17:17, Steven Hardy wrote:


Regarding forcing deployers to make a one-time decision, I have a question
re cost (money and performance) of the Swift approach vs just hitting the
Heat API

- If folks use the Swift resource and it stores data associated with the
   signal in Swift, does that incur cost to the user in a public cloud
   scenario?


Good question. I believe the way WaitConditions work in AWS is that it sets up 
a pre-signed URL in a bucket owned by CloudFormation. If we went with that 
approach we would probably want some sort of quota, I imagine.


Just to clarify, you suggest that the swift-based signal mechanism use 
containers that Heat owns rather than ones owned by the user?


I'm suggesting that's one possible implementation, yes.


The other approach is to set up a new container, owned by the user, every time. 
In that case, a provider selecting this implementation would need to make it 
clear to customers if they would be billed for a WaitCondition resource. I'd 
prefer to avoid this scenario though (regardless of the plug-point).


Why? If we won't let the user choose, then why wouldn't we let the provider 
make this choice? I don't think it's wise of us to make decisions based on what 
a theoretical operator may theoretically do. If the same theoretical provider 
were to also charge users to create a trust, would we then be concerned about 
that implementation as well? What if said provider decides to charge the user per 
resource in a stack regardless of what they are? Having Heat own the 
container(s) as suggested above doesn't preclude that operator from charging 
the stack owner for those either.

While I agree that these examples are totally silly, I'm just trying to 
illustrate that we shouldn't deny an operator an option so long as it's 
understood what that option entails from a technical/usage perspective.


Fair enough, I'm not suggesting that I want to deny the operator the 
option of charging, more that I'd prefer to avoid pushing them into a 
corner where they feel like they'd _have_ to charge.


In fact, if we adopt the plugin system I am suggesting, we could in 
theory implement _both_ of the above options ;)



- What sort of overhead are we adding, with the signals going to swift,
   then in the current implementation being copied back into the heat DB[1]?


I wasn't aware we were doing that, and I'm a bit unsure about it myself. I 
don't think it's a big overhead, though.


In the current implementation, I think it is minor as well, just a few extra 
Swift API calls which should be pretty minor overhead considering the stack as 
a whole. Plus, it minimizes the above concern around potentially costly user 
containers in that it gets rid of them as soon as it's done.


One of the nice things about this is that it largely negates the need 
for charging, by making the containers fairly useless for nefarious 
purposes. Unfortunately there are some problems with it too, which I've 
noted in the review.



It seems to me at the moment that the swift notification method is good if
you have significant data associated with the signals, but there are
advantages to the simple API signal approach I've been working on when you
just need a simple one shot low overhead way to get data back from an
instance.

FWIW, the reason I revived these patches was I found that
SoftwareDeployments did not meet my needs for a really simple signalling
mechanism when writing tempest tests:

https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml

These tests currently use the AWS WaitCondition resources, and I wanted a
native alternative, without the complexity of using SoftwareDeployments
(which also won't work with minimal cirros images without some pretty hacky
workarounds[2])


Yep, I am all for this. I think that Swift is the best way when we have it, but not 
every cloud has Swift (and the latest rumours from DefCore are that it's likely to 
stay that way), so we need operators (& developers!) to be able to plug in an 
alternative implementation.


Very true, but not every cloud has trusts either. Many may have trusts, but they don't 
employ the EC2 extensions to Keystone and therefore can't use the native 
signals either (as I understand them anyway). Point being that either way, we already 
impose requirements on a cloud you want to run Heat against. I think it is in our interest 
to make the effort to provide choices with obvious trade-offs.


The AWS resources require EC2 extensions; Steve is making the native 
resources so that we won't require EC2 extensions any longer. When that 
is done, we should make the AWS resources use the same implementation 
(the implementation in AWS is closer to the Swift thing that Jason is 
working on than anything we have ever done).


So part of what I'm suggesting here is that both 

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Angus Salkeld

 

 I wouldn't call our resources vast.
 
 I count 73 in your patch, not including contrib and assuming you didn't miss 
 any ;). It seems
 clear to me that we're well past the point of what the Extensions API was 
 designed for. When
 everything is an extension you need different tools to manage it. Quantity 
 has a quality all
 its own ;)

I think you might be on your own here:
https://github.com/openstack/python-openstackclient/blob/master/setup.cfg
https://github.com/openstack/nova/blob/master/setup.cfg
https://github.com/openstack/ceilometer/blob/master/setup.cfg
https://github.com/openstack/neutron/blob/master/setup.cfg

- -A





Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-08 Thread Angus Salkeld

On 08/07/14 09:14, Zane Bitter wrote:
 I see that the new client plugins are loaded using stevedore, which is 
 great and IMO absolutely the right tool for that job. Thanks to Angus & 
 Steve B for implementing it.
 
 Now that we have done that work, I think there are more places we can 
 take advantage of it too - for example, we currently have competing 
 native wait condition resource types being implemented by Jason[1] and 
 Steve H[2] respectively, and IMHO that is a mistake. We should have 
 *one* native wait condition resource type, one AWS-compatible one, 
 software deployments and any custom plugins that require signalling; and 
 they should all use a common SignalResponder implementation that would 
 call an API that is pluggable using stevedore. (In summary, what we're 

what's wrong with using the environment for that? Just have two resources
and you do something like this:
https://github.com/openstack/heat/blob/master/etc/heat/environment.d/default.yaml#L7

 trying to make configurable is an implementation that should be 
 invisible to the user, not an interface that is visible to the user, and 
 therefore the correct unit of abstraction is an API, not a resource.)
 

Totally depends if we want this to be operator configurable (config file or 
plugin)
or end user configurable (use their environment to choose the implementation).

 
 I just noticed, however, that there is an already-partially-implemented 
 blueprint[3] and further pending patches[4] to use stevedore for *all* 
 types of plugins - particularly resource plugins[5] - in Heat. I feel 
 very strongly that stevedore is _not_ a good fit for all of those use 
 cases. (Disclaimer: obviously I _would_ think that, since I implemented 
 the current system instead of using stevedore for precisely that reason.)

haha.

 
 The stated benefit of switching to stevedore is that it solves issues 
 like https://launchpad.net/bugs/1292655 that are caused by the current 
 convoluted layout of /contrib. I think the layout stems at least in part 

I think another great reason is consistency with how all other plugins in 
openstack are written (stevedore).

Also I *really* don't think we should optimize for our contrib plugins
but for:
1) our built in plugins
2) out of tree plugins


 from a misunderstanding of how the current plugin_manager works. The 
 point of the plugin_manager is that each plugin directory does *not* 
 have to be a Python package - it can be any directory. Modules in the 
 directory then appear in the package heat.engine.plugins once imported. 
 So there is no need to do what we are currently doing, creating a 
 resources package, and then a parent package that contains the tests 
 package as well, and then in the tests doing:
 
from ..resources import docker_container  ## noqa
 
 All we really need to do is throw the resources in any old directory, 
 add that directory to the plugin_dirs list, stick the tests in any old 
 package, and from the tests do
 
from heat.engine.plugins import docker_container
 
 The main reason we haven't done this seems to be to avoid having to list 
 the various contrib plugin dirs separately. Stevedore solves this by 
 forcing us to list not only each directory but each class in each module 
 in each directory separately. The tricky part of fixing the current 
 layout is ensuring the contrib plugin directories get added to the 
 plugin_dirs list during the unit tests and only during the unit tests. 
 However, I'm confident that could be fixed with no more difficulty than 
 the stevedore changes and with far less disruption to existing operators 
 using custom plugins.
 
 Stevedore is ideal for configuring an implementation for a small number 
 of well known plug points. It does not appear to be ideal for managing 
 an application like Heat that comprises a vast collection of 
 implementations of the same interface, each bound to its own plug point.
 
I wouldn't call our resources vast.

Really, I think it works great.

 For example, there's a subtle difference in how plugin_manager loads 
 external modules - by searching a list of plugin directories for Python 
 modules - and how stevedore does it, by loading a specified module 
 already in the Python path. The latter is great for selecting one of a 
 number of implementations that already exist in the code, but not so 
 great for dropping in an additional external module, which now needs to 
 be wrapped in a package that has to be installed in the path *and* 
 there's still a configuration file to edit. This is way harder for a 
 packager and/or operator to set up.

I think you have this the wrong way around.
With stevedore you don't need to edit a config file; with the plugin_manager
you do, if that directory isn't already in the plugin_dirs list.

stevedore relies on namespaces, so you add your plugins into the
heat.resources namespace and then heat will load them (no editing of config
files).
You do *not* need 
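
As a rough sketch of the stevedore route (the package name, module and entry
point below are made up, and heat.resources is assumed to be whatever
namespace Heat registers), an out-of-tree plugin just declares an entry point
in its own packaging; with the current plugin_manager you would instead drop
the module into a directory listed in plugin_dirs in heat.conf:

    # setup.py for a hypothetical third-party plugin package
    from setuptools import setup

    setup(
        name='my-heat-plugins',
        version='0.1',
        py_modules=['my_resource'],
        entry_points={
            # assumed namespace; maps an entry point name to the plugin class
            'heat.resources': [
                'my_custom_resource = my_resource:MyCustomResource',
            ],
        },
    )

Once that package is pip-installed, stevedore can discover it by scanning the
namespace, with nothing for the operator to edit.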

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-08 Thread Steven Hardy
On Tue, Jul 08, 2014 at 03:08:32PM -0400, Zane Bitter wrote:
 I see that the new client plugins are loaded using stevedore, which is great
 and IMO absolutely the right tool for that job. Thanks to Angus & Steve B
 for implementing it.
 
 Now that we have done that work, I think there are more places we can take
 advantage of it too - for example, we currently have competing native wait
 condition resource types being implemented by Jason[1] and Steve H[2]
 respectively, and IMHO that is a mistake. We should have *one* native wait
 condition resource type, one AWS-compatible one, software deployments and
 any custom plugin that require signalling; and they should all use a common
 SignalResponder implementation that would call an API that is pluggable
 using stevedore. (In summary, what we're trying to make configurable is an
 implementation that should be invisible to the user, not an interface that
 is visible to the user, and therefore the correct unit of abstraction is an
 API, not a resource.)

To clarify, they're not competing as such - Jason and I have chatted about
the two approaches and have been working to maintain a common interface,
such that they would be easily substituted based on deployer or user
preferences.

My initial assumption was that this substitution would happen via resource
mappings in the global environment, but I now see that you are proposing
the configurable part to be at a lower level, substituting the transport
behind a common resource implementation.
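
A rough sketch of what that lower-level substitution could look like with
stevedore (the namespace, transport names and loader function below are
illustrative assumptions, not existing Heat code):

    # load exactly one signal transport implementation, chosen by the deployer
    from stevedore import driver

    def load_signal_transport(name):
        # name would come from heat.conf, e.g. 'swift' or 'heat_api'
        mgr = driver.DriverManager(
            namespace='heat.signal_transports',  # hypothetical namespace
            name=name,
            invoke_on_load=True,
        )
        return mgr.driver

The common SignalResponder implementation would then call whatever transport
was loaded, without the user-visible resource type changing.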

Regarding forcing deployers to make a one-time decision, I have a question
re cost (money and performance) of the Swift approach vs just hitting the
Heat API:

- If folks use the Swift resource and it stores data associated with the
  signal in Swift, does that incur cost to the user in a public cloud
  scenario?
- What sort of overhead are we adding, with the signals going to swift,
  then in the current implementation being copied back into the heat DB[1]?

It seems to me at the moment that the swift notification method is good if
you have significant data associated with the signals, but there are
advantages to the simple API signal approach I've been working on when you
just need a simple one-shot, low-overhead way to get data back from an
instance.

FWIW, the reason I revived these patches was that I found
SoftwareDeployments did not meet my needs for a really simple signalling
mechanism when writing tempest tests:

https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml

These tests currently use the AWS WaitCondition resources, and I wanted a
native alternative, without the complexity of using SoftwareDeployments
(which also won't work with minimal cirros images without some pretty hacky
workarounds[2]).

I'm all for making things simple, avoiding duplication and confusion for
users, but I'd like to ensure that making this a one-time deployer level
decision definitely makes sense, vs giving users some choice over what
method is used.

[1] https://review.openstack.org/#/c/96947/
[2] https://review.openstack.org/#/c/91475/

Steve



Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-08 Thread Steve Baker
On 09/07/14 07:08, Zane Bitter wrote:
 I see that the new client plugins are loaded using stevedore, which is
 great and IMO absolutely the right tool for that job. Thanks to Angus &
 Steve B for implementing it.

 Now that we have done that work, I think there are more places we can
 take advantage of it too - for example, we currently have competing
 native wait condition resource types being implemented by Jason[1] and
 Steve H[2] respectively, and IMHO that is a mistake. We should have
 *one* native wait condition resource type, one AWS-compatible one,
 software deployments and any custom plugins that require signalling;
 and they should all use a common SignalResponder implementation that
 would call an API that is pluggable using stevedore. (In summary, what
 we're trying to make configurable is an implementation that should be
 invisible to the user, not an interface that is visible to the user,
 and therefore the correct unit of abstraction is an API, not a resource.)


 I just noticed, however, that there is an
 already-partially-implemented blueprint[3] and further pending
 patches[4] to use stevedore for *all* types of plugins - particularly
 resource plugins[5] - in Heat. I feel very strongly that stevedore is
 _not_ a good fit for all of those use cases. (Disclaimer: obviously I
 _would_ think that, since I implemented the current system instead of
 using stevedore for precisely that reason.)

 The stated benefit of switching to stevedore is that it solves issues
 like https://launchpad.net/bugs/1292655 that are caused by the current
 convoluted layout of /contrib. I think the layout stems at least in
 part from a misunderstanding of how the current plugin_manager works.
 The point of the plugin_manager is that each plugin directory does
 *not* have to be a Python package - it can be any directory. Modules
 in the directory then appear in the package heat.engine.plugins once
 imported. So there is no need to do what we are currently doing,
 creating a resources package, and then a parent package that contains
 the tests package as well, and then in the tests doing:

   from ..resources import docker_container  ## noqa

 All we really need to do is throw the resources in any old directory,
 add that directory to the plugin_dirs list, stick the tests in any old
 package, and from the tests do

   from heat.engine.plugins import docker_container

 The main reason we haven't done this seems to be to avoid having to
 list the various contrib plugin dirs separately. Stevedore solves
 this by forcing us to list not only each directory but each class in
 each module in each directory separately. The tricky part of fixing
 the current layout is ensuring the contrib plugin directories get
 added to the plugin_dirs list during the unit tests and only during
 the unit tests. However, I'm confident that could be fixed with no
 more difficulty than the stevedore changes and with far less
 disruption to existing operators using custom plugins.

There is a design document for stevedore which does a good job of
covering all the options for designing a plugin system:
http://stevedore.readthedocs.org/en/latest/essays/pycon2013.html

 Stevedore is ideal for configuring an implementation for a small
 number of well known plug points. It does not appear to be ideal for
 managing an application like Heat that comprises a vast collection of
 implementations of the same interface, each bound to its own plug point.

Resource plugins seem to match stevedore's Extensions pattern
reasonably well:
http://stevedore.readthedocs.org/en/latest/patterns_loading.html
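
For what it's worth, the Extensions pattern amounts to something like this
minimal sketch (the namespace and the name-to-class mapping are assumptions
for illustration):

    # load every resource plugin registered in the namespace, not just one
    from stevedore import extension

    def load_resource_plugins():
        mgr = extension.ExtensionManager(
            namespace='heat.resources',  # assumed namespace
            invoke_on_load=False,
        )
        # map entry point name -> plugin class, for registration in the engine
        return {ext.name: ext.plugin for ext in mgr}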

 For example, there's a subtle difference in how plugin_manager loads
 external modules - by searching a list of plugin directories for
 Python modules - and how stevedore does it, by loading a specified
 module already in the Python path. The latter is great for selecting
 one of a number of implementations that already exist in the code, but
 not so great for dropping in an additional external module, which now
 needs to be wrapped in a package that has to be installed in the path
 *and* there's still a configuration file to edit. This is way harder
 for a packager and/or operator to set up.

Just dropping in a file is convenient, but maybe properly packaging
resource plugins is a discipline we should be encouraging third parties
to adopt.

 This approach actually precludes a number of things we know we want to
 do in the future - for example it would be great if the native and AWS
 resource plugins were distributed as separate subpackages so that yum
 install heat-engine installed only the native resources, and a
 separate yum install heat-cfn-plugins added the AWS-compatibility
 resources. You can't (safely) package things that way if the
 installation would involve editing a config file.

Yes, patching a single setup.cfg is a non-starter. We would need to do
something like move the AWS resources to contrib with their own
setup.cfg; maybe that wouldn't be such a