On 08/07/14 17:17, Steven Hardy wrote:
On Tue, Jul 08, 2014 at 03:08:32PM -0400, Zane Bitter wrote:
I see that the new client plugins are loaded using stevedore, which is great
and IMO absolutely the right tool for that job. Thanks to Angus & Steve B
for implementing it.

Now that we have done that work, I think there are more places we can take
advantage of it too - for example, we currently have competing native wait
condition resource types being implemented by Jason[1] and Steve H[2]
respectively, and IMHO that is a mistake. We should have *one* native wait
condition resource type, one AWS-compatible one, software deployments and
any custom plugins that require signalling; and they should all use a common
SignalResponder implementation that would call an API that is pluggable
using stevedore. (In summary, what we're trying to make configurable is an
implementation that should be invisible to the user, not an interface that
is visible to the user, and therefore the correct unit of abstraction is an
API, not a resource.)

To clarify, they're not competing as such - Jason and I have chatted about
the two approaches and have been working to maintain a common interface,
such that they would be easily substituted based on deployer or user
preferences.

Yes, poor choice of words on my part :)

My initial assumption was that this substitution would happen via resource
mappings in the global environment, but I now see that you are proposing
the configurable part to be at a lower level, substituting the transport
behind a common resource implementation.

Yeah, exactly.
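
To make it concrete, the sort of plug-point I have in mind would look something like this. This is a sketch only: the 'heat.signal_transports' namespace and the 'signal_transport' option are names I've invented for illustration; only the stevedore and oslo.config calls themselves are real.

# Sketch only: 'heat.signal_transports' and the 'signal_transport' option
# are invented names for illustration; the stevedore calls are real.
from oslo.config import cfg
from stevedore import driver

cfg.CONF.register_opts([
    cfg.StrOpt('signal_transport', default='heat-api',
               help='Implementation used to deliver resource signals.'),
])


def load_signal_transport():
    # DriverManager loads exactly one named plugin from the namespace,
    # so the deployer picks the transport once and users never see it.
    mgr = driver.DriverManager(
        namespace='heat.signal_transports',
        name=cfg.CONF.signal_transport,
        invoke_on_load=True)
    return mgr.driver

All of the signal-receiving resources (native and AWS wait conditions, software deployments, custom plugins) would then share a SignalResponder that just calls whatever driver is loaded, so the transport never leaks into the template.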

Regarding forcing deployers to make a one-time decision, I have a question
re cost (money and performance) of the Swift approach vs just hitting the
Heat API:

- If folks use the Swift resource and it stores data associated with the
   signal in Swift, does that incur cost to the user in a public cloud
   scenario?

Good question. I believe the way WaitConditions work in AWS is that CloudFormation sets up a pre-signed URL in a bucket it owns. If we went with that approach we would probably want some sort of quota.

The other approach is to set up a new container, owned by the user, every time. In that case, a provider selecting this implementation would need to make it clear to customers if they would be billed for a WaitCondition resource. I'd prefer to avoid this scenario though (regardless of the plug-point).
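
For reference, the pre-signed URL mechanism in Swift is TempURL, and the URL handed to the instance would be generated roughly like this. A sketch only, with a made-up account path and placeholder key, not the code under review:

# Sketch of the standard Swift TempURL signing recipe (not the code under
# review); the account path and key below are placeholders.
import hmac
import time
from hashlib import sha1

def make_signal_url(swift_endpoint, container, obj, temp_url_key,
                    ttl=3600, method='PUT'):
    path = '/v1/AUTH_demo/%s/%s' % (container, obj)
    expires = int(time.time() + ttl)
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(temp_url_key.encode(), hmac_body.encode(),
                   sha1).hexdigest()
    return '%s%s?temp_url_sig=%s&temp_url_expires=%s' % (
        swift_endpoint, path, sig, expires)

The instance can then PUT its signal data to that URL without any credentials, which is what makes the Swift transport attractive when it's available.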

- What sort of overhead are we adding, with the signals going to swift,
   then in the current implementation being copied back into the heat DB[1]?

I wasn't aware we were doing that, and I'm a bit unsure about it myself. I don't think it's a big overhead, though.

It seems to me at the moment that the Swift notification method is good if
you have significant data associated with the signals, but there are
advantages to the simple API signal approach I've been working on when you
just need a simple, low-overhead "one shot" way to get data back from an
instance.

FWIW, the reason I revived these patches was I found that
SoftwareDeployments did not meet my needs for a really simple signalling
mechanism when writing tempest tests:

https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml

These tests currently use the AWS WaitCondition resources, and I wanted a
native alternative, without the complexity of using SoftwareDeployments
(which also won't work with minimal cirros images without some pretty hacky
workarounds[2])

Yep, I am all for this. I think that Swift is the best way when we have it, but not every cloud has Swift (and the latest rumours from DefCore are that it's likely to stay that way), so we need operators (& developers!) to be able to plug in an alternative implementation.
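
And for the instance side, the "one shot" case Steve describes really can be trivial. Something along these lines (a sketch only; how the URL and token get to the instance, or whether it's an EC2-signed URL instead, depends on which patch lands):

# Sketch of the instance side of a "one shot" signal: just POST a small
# JSON payload back to a URL handed to the instance in user-data. The
# signal_url/token plumbing here is a placeholder; the real mechanics
# (EC2 signing, pre-authenticated URLs, etc.) depend on which patch lands.
import json
import requests

def send_signal(signal_url, token, status='SUCCESS', data=None):
    resp = requests.post(signal_url,
                         data=json.dumps({'status': status,
                                          'data': data or {}}),
                         headers={'X-Auth-Token': token,
                                  'Content-Type': 'application/json'})
    resp.raise_for_status()

Whichever transport the deployer has configured, that's all the guest should ever need to know about.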

I'm all for making things simple, avoiding duplication and confusion for
users, but I'd like to ensure that making this a one-time deployer level
decision definitely makes sense, vs giving users some choice over what
method is used.

Agree, this is an important question to ask. The downside to leaving the choice to the user is that it reduces interoperability between clouds. (In fact, it's unclear whether operators _would_ give users a choice, or just deploy one implementation anyway.) It's not insurmountable (thanks to environments), but it does add friction to the ecosystem, so we have to weigh up the trade-offs.

cheers,
Zane.

[1] https://review.openstack.org/#/c/96947/
[2] https://review.openstack.org/#/c/91475/

Steve
