On 09/07/14 18:33, Randall Burt wrote:
On Jul 9, 2014, at 4:38 PM, Zane Bitter <zbit...@redhat.com> wrote:
On 08/07/14 17:17, Steven Hardy wrote:

Regarding forcing deployers to make a one-time decision, I have a question
re cost (money and performance) of the Swift approach vs just hitting the
Heat API:

- If folks use the Swift resource and it stores data associated with the
   signal in Swift, does that incur a cost to the user in a public cloud
   scenario?

Good question. I believe the way WaitConditions work in AWS is that it sets up 
a pre-signed URL in a bucket owned by CloudFormation. If we went with that 
approach we would probably want some sort of quota, I imagine.

Just to clarify, you suggest that the swift-based signal mechanism use 
containers that Heat owns rather than ones owned by the user?

I'm suggesting that's one possible implementation, yes.

The other approach is to set up a new container, owned by the user, every time. 
In that case, a provider selecting this implementation would need to make it 
clear to customers if they would be billed for a WaitCondition resource. I'd 
prefer to avoid this scenario though (regardless of the plug-point).

Why? If we won't let the user choose, then why wouldn't we let the provider 
make this choice? I don't think it's wise of us to make decisions based on what 
a theoretical operator may theoretically do. If the same theoretical provider 
were to also charge users to create a trust, would we then be concerned about 
that implementation as well? What if said provider decided to charge the user per 
resource in a stack, regardless of what they are? Having Heat own the 
container(s) as suggested above doesn't preclude that operator from charging 
the stack owner for those either.

While I agree that these examples are totally silly, I'm just trying to 
illustrate that we shouldn't deny an operator an option so long as it's 
understood what that option entails from a technical/usage perspective.

Fair enough; I'm not suggesting that I want to deny the operator the option of charging, more that I'd prefer to avoid pushing them into a corner where they feel like they'd _have_ to charge.

In fact, if we adopt the plugin system I am suggesting, we could in theory implement _both_ of the above options ;)
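
To make that concrete, here's a minimal sketch of what the plug-point might look like. 
The class, method and option names are invented for the example, not actual Heat code:

    import abc

    from oslo.config import cfg  # the 2014-era oslo.config import path
    import six

    # A hypothetical deployer-level option; the name is illustrative only.
    cfg.CONF.register_opts([
        cfg.StrOpt('signal_backend', default='heat_api',
                   help='Signalling implementation: swift, heat_api or cfn'),
    ])

    BACKENDS = {}


    def register_backend(name):
        """Let each implementation register itself under a short name."""
        def wrapper(cls):
            BACKENDS[name] = cls
            return cls
        return wrapper


    @six.add_metaclass(abc.ABCMeta)
    class SignalBackend(object):
        """The internal API that both the AWS and native wait condition
        resources (and software deployments) would call into."""

        @abc.abstractmethod
        def create_endpoint(self, stack, resource_name):
            """Set up and return something an instance can signal to."""

        @abc.abstractmethod
        def get_signals(self, stack, resource_name):
            """Return whatever signal data has been received so far."""

        @abc.abstractmethod
        def delete_endpoint(self, stack, resource_name):
            """Clean up whatever create_endpoint() created."""


    def load_backend(conf=cfg.CONF):
        """Chosen once per deployment; a Swift-based and a Heat-API-based
        implementation could happily coexist behind this."""
        return BACKENDS[conf.signal_backend]()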

- What sort of overhead are we adding, with the signals going to swift,
   then in the current implementation being copied back into the heat DB[1]?

I wasn't aware we were doing that, and I'm a bit unsure about it myself. I 
don't think it's a big overhead, though.

In the current implementation, I think it is minor as well: just a few extra 
Swift API calls, which should be pretty minor overhead considering the stack as 
a whole. Plus, it minimizes the above concern around potentially costly user 
containers in that it gets rid of them as soon as it's done.

One of the nice things about this is that it largely negates the need for charging, by making the containers fairly useless for nefarious purposes. Unfortunately there are some problems with it too, which I've noted in the review.

It seems to me at the moment that the swift notification method is good if
you have significant data associated with the signals, but there are
advantages to the simple API signal approach I've been working on when you
just need a simple "one shot" low overhead way to get data back from an
instance.

FWIW, the reason I revived these patches was I found that
SoftwareDeployments did not meet my needs for a really simple signalling
mechanism when writing tempest tests:

https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml

These tests currently use the AWS WaitCondition resources, and I wanted a
native alternative, without the complexity of using SoftwareDeployments
(which also won't work with minimal cirros images without some pretty hacky
workarounds[2])

Yep, I am all for this. I think that Swift is the best way when we have it, but not 
every cloud has Swift (and the latest rumours from DefCore are that it's likely to 
stay that way), so we need operators (& developers!) to be able to plug in an 
alternative implementation.

Very true, but not every cloud has trusts either. Many may have trusts, but they don't 
employ the EC2 extensions to Keystone and therefore can't use the "native" 
signals either (as I understand them, anyway). The point being that either way, we already 
impose requirements on a cloud you want to run Heat against. I think it is in our interest 
to make the effort to provide choices with obvious trade-offs.

The AWS resources require EC2 extensions; Steve is making the native resources so that we won't require EC2 extensions any longer. When that is done, we should make the AWS resources use the same implementation (the implementation in AWS is closer to the Swift thing that Jason is working on than anything we have ever done).

So part of what I'm suggesting here is that both types of resources should use the same method:
 * If the cloud has Swift, both use the Swift method
 * If the cloud has an OpenStack API, both use the native method
 * If the cloud has EC2 extensions and a cfn API, both use the existing method
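
In pseudo-ish code, the decision amounts to something like this. The service-type 
strings follow the usual Keystone catalogue conventions; the function itself is just a 
sketch of the logic, not anything that exists in Heat:

    def pick_signal_method(catalog_service_types, has_ec2_extension):
        """Choose one signalling implementation per deployment, based on
        what the cloud actually offers.  Inputs would come from the
        Keystone service catalogue and deployer configuration."""
        if 'object-store' in catalog_service_types:         # Swift is present
            return 'swift'
        if 'orchestration' in catalog_service_types:        # native Heat ReST API
            return 'heat_api'
        if has_ec2_extension and 'cloudformation' in catalog_service_types:
            return 'cfn'                                    # existing heat-api-cfn path
        raise RuntimeError('No usable signalling mechanism available')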

I'm all for making things simple, avoiding duplication and confusion for
users, but I'd like to ensure that making this a one-time deployer level
decision definitely makes sense, vs giving users some choice over what
method is used.

Agree, this is an important question to ask. The downside to leaving the choice 
to the user is that it reduces interoperability between clouds. (In fact, it's 
unclear whether operators _would_ give users a choice, or just deploy one 
implementation anyway.) It's not insurmountable (thanks to environments), but 
it does add friction to the ecosystem so we have to weigh up the trade-offs.

Agreed that this is an important concern, but one of mine is that no other resource has 
"selectable" back-ends.

That's true at a certain level. (I'll overlook the networking resources that support both nova-network and Neutron.)

If you want to get technical, the openstack clients are now pluggable, so arguably all non-OS::Heat resources now have selectable back-ends ;)

But another way to look at it is that virtually all of the resources have selectable back-ends; it's just that those back-ends are isolated from Heat by the various (ReST) APIs of the OpenStack services.

We need to get away from the idea that resources are just a place to hang code in Heat (this is what led to our autoscaling implementation); they should represent some _other_ object in OpenStack. Setting up a place to receive signals and a thing that reads them is not a resource; it's just something that Heat needs to be able to do (e.g. for software deployments) that can be exposed to the user as a resource as well. In that context, this doesn't seem so different to anything else, except that the implementation is behind an internal API in Heat instead of a public ReST API.

The way an operator controls this today is via the global environment where 
they have the option to disable one or more of these resources or even alias 
one to the other. Seems a large change for something an operator already has 
the ability to deal with. The level of interoperability is at least partly an 
operator choice already and out of our hands.

Well, we can't control interoperability, but we can certainly influence it. We can make it easy to achieve or we can make it hard.

cheers,
Zane.
