On 16/11/13 11:15, Angus Salkeld wrote:
On 15/11/13 08:46 -0600, Christopher Armstrong wrote:
On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter <[email protected]> wrote:

On 15/11/13 02:48, Christopher Armstrong wrote:

On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld <[email protected]> wrote:

    On 14/11/13 10:19 -0600, Christopher Armstrong wrote:

        http://docs.heatautoscale.apiary.io/

        I've thrown together a rough sketch of the proposed API for
        autoscaling. It's written in API-Blueprint format (which is a
        simple subset of Markdown) and provides schemas for inputs and
        outputs using JSON-Schema. The source document is currently at
        https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


        Things we still need to figure out:

        - how to scope projects/domains: put them in the URL? get them
          from the token?
        - how webhooks are done (though this shouldn't affect the API
          too much; they're basically just opaque)

        Please read and comment :)


    Hi Christopher

    In the group create object you have 'resources'.
    Can you explain what you expect in there? I thought we talked at
    summit about having a unit of scaling as a nested stack.

    The thinking here was:
    - this makes the new config stuff easier to scale (config gets
      applied per scaling stack)

    - you can potentially place notification resources in the scaling
      stack (think Marconi message resource - on-create it sends a
      message)

    - no need for a launchconfig
    - you can place a LoadbalancerMember resource in the scaling stack
      that triggers the loadbalancer to add/remove it from the lb.


    I guess what I am saying is I'd expect an API to a nested stack.
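
    For example, the unit of scaling could be a small nested-stack
    template along these lines (just a sketch; the LB member resource
    type and its properties are illustrative, not real names):

        heat_template_version: 2013-05-23
        resources:
          server:
            type: OS::Nova::Server
            properties:
              image: my-image
              flavor: m1.small
          lb_member:
            type: OS::Neutron::PoolMember   # illustrative type name
            properties:
              pool_id: my-pool
              address: {get_attr: [server, first_address]}
              protocol_port: 80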


Well, what I'm thinking now is that instead of "resources" (a mapping of resources), just have "resource", which can be the template definition for a single resource. This would then allow the user to specify a Stack resource if they want to provide multiple resources. How does that sound?
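
To make that concrete, a group create body might then look something like this (field names purely illustrative, not a proposal for the final schema):

    group:
      name: my_group
      resource:
        type: OS::Nova::Server
        properties:
          image: my-image
          flavor: m1.small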


My thought was this (digging into the implementation here a bit):

- Basically, the autoscaling code works as it does now: it creates a template containing OS::Nova::Server resources (changed from AWS::EC2::Instance), with the properties obtained from the LaunchConfig, and creates a stack in Heat.
- LaunchConfig can now contain any properties you like (I'm not 100% sure about this one*).
- The user optionally supplies a template. If the template is supplied, it is passed to Heat and set in the environment as the provider for the OS::Nova::Server resource.
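
In environment terms, that last step amounts to something like this (a sketch; the URL is just a placeholder for wherever the user's template lives):

    resource_registry:
      "OS::Nova::Server": "http://example.com/scaled_unit.yaml"

The generated scaling template itself stays the same regardless of what the user supplies.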


I don't like the idea of binding to OS::Nova::Server specifically for
autoscaling. I'd rather have the ability to scale *any* resource,
including
nested stacks or custom resources. It seems like jumping through hoops to

Big +1 here - autoscaling should not even know what it is scaling, just
some resource. Solum might want to scale all sorts of non-server
resources, as might other users.

I'm surprised by the negative reaction to what I suggested, which is a completely standard use of provider templates. Allowing a user-defined stack of resources to stand in for an unrelated resource type is the entire point of providers. Everyone says that it's a great feature, but as soon as you try to use it for something, they call it a "hack". Strange.

So, allow me to make a slight modification to my proposal:

- The autoscaling service manages a template containing OS::Heat::ScaledResource resources. This is an imaginary resource type that is not backed by a plugin in Heat.
- If no template is supplied by the user, the environment declares another resource plugin as the provider for OS::Heat::ScaledResource (by default it would be OS::Nova::Server, but this should probably be configurable by the deployer... so if you had a region full of Docker containers and no Nova servers, you could set it to OS::Docker::Container or something).
- If a provider template is supplied by the user, it would be specified as the provider in the environment file.
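
Concretely, the environment the autoscaling service generates might look like this in the default case (a sketch):

    resource_registry:
      "OS::Heat::ScaledResource": "OS::Nova::Server"

or, if the user supplied a provider template (URL is a placeholder):

    resource_registry:
      "OS::Heat::ScaledResource": "http://example.com/user_template.yaml"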

This, I hope, demonstrates that autoscaling needs no knowledge whatsoever about what it is scaling to use this approach.

The only way that it would require some knowledge is if we restricted the properties that can be passed to the launch config to match some particular interface, but I believe we already have a consensus that we don't want to do that.


This assumes that we need a default resource type, though it would be substantially unchanged if we didn't have one (we'd just make supplying the template mandatory). In my reply to your other post I put forward an argument for why we should have a default. If your objection is that the default is of a different type (Server vs. provider stack) to the general case, then let's consider the different ways we could handle this:

1) As proposed above, just use OS::Nova::Server (or whatever type is configured in heat.conf) as the provider.
 - The autoscaling code won't need to know anything about it; everything is handled internally in Heat.
 - The default (most common) case avoids the overhead of a stack for every scaled resource.
2) Grab the default template from http://heat.example.com/<tenant_id>/resource_types/OS::Nova::Server/template (or whatever type is configured in heat.conf) and use it as the provider.
 - The composition of all scaling groups is consistent.
 - Requires an extra ReST call.
3) Embed the default template in the autoscaling configuration.
 - The composition of all scaling groups is consistent.
 - No extra ReST API call.
 - The template could get out of date; there's no guarantee that it matches the plugin in Heat that we're talking to.

Those are in order of my personal preference. I wouldn't support (3), but I am fine with (2) if you think that consistency is worth the extra overhead. At the end of the day this is an implementation detail that is not visible to the user at all.

support custom resources by overriding OS::Nova::Server instead of just
allowing users to specify the resource that they really want directly.

How about we offer two "types" of configuration, one which supports arbitrary resources and one which supports OS::Nova::Server-specific launch configurations? We could just add a type="server" / type="resource" parameter which specifies which type of scaling unit to use.


How about just one "nested-stack".
Keep it simple.

+1

Why would we have two configurations:

    type: server
    resource_type: this is ignored

and:

   type: resource
   resource_type: OS::Nova::Server

that do exactly the same thing? This increases complexity and adds zero value to the user.

This should require no substantive changes to the code since it uses existing abstractions, it makes the common case the default, and it avoids the overhead of nested stacks in the default case.

-1

As I said in reply to your other message, I think we're really only disagreeing about implementation. "Everything is a nested stack" and "Everything is a provider stack" are not really different ideas, just subtly different implementations. In both cases you pass a template to the API as the thing to scale.

IMO the provider implementation is far better for the user though, because it enables them to use the tools we have already built to support that - e.g. they can grab the pass-through provider template for a resource from http://heat.example.com/<tenant_id>/resource_types/<resource_type>/template and modify it. Any other tools we build around providers will make autoscaling easier for free too.
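
For illustration, the pass-through provider template served from that URL might look roughly like this (heavily abbreviated sketch; a real one would expose every property and attribute of the resource):

    heat_template_version: 2013-05-23
    parameters:
      image:
        type: string
      flavor:
        type: string
    resources:
      the_server:
        type: OS::Nova::Server
        properties:
          image: {get_param: image}
          flavor: {get_param: flavor}

The user can take that, add whatever extra resources they want, and pass the result back as the thing to scale.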

Doing things like passing complex JSON properties to a template through the parameters is tricky, and we may need to change it over time. It's much better to encapsulate as much of this as possible inside Heat - and we have an existing mechanism to do so.

cheers,
Zane.
