Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Christopher Armstrong
On Mon, Feb 24, 2014 at 4:20 PM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Hi Keith,
>
> Thank you for bringing up this question. We think that it could be done
> inside Heat. This is a part of our future roadmap to bring more stuff to
> Heat and pass all actual work to the heat engine. However, it will require
> collaboration between the Heat and Murano teams, which is why we want
> incubated status: to start better integration with other projects that are
> part of the OpenStack community. I would understand if the Heat team refused
> to change Heat templates to satisfy the requirements of a project which
> does not officially belong to OpenStack. With incubation status it will be
> much easier.
> As for the actual work, backups and snapshots are processes. It will be
> hard to express them in a good way in the current HOT template. We expect to
> use Mistral resources defined in Heat which will trigger the events for
> backup, and the backup workflow associated with the application can be
> defined outside of Heat. I don't think that the Heat team will include
> workflow definitions as a part of the template format, but they could allow
> us to use resources which reference such workflows stored in a catalog. It
> could be an extension of HOT Software config, for example, but we need to
> validate this approach with the heat team.
>
>
For what it's worth, there's already precedent for including non-OpenStack
resource plugins in Heat, in a "contrib" directory (which is still tested
with the CI infrastructure).




-- 
IRC: radix
Christopher Armstrong


Re: [openstack-dev] [Heat] lazy translation is breaking Heat

2014-02-18 Thread Christopher Armstrong
On Tue, Feb 18, 2014 at 11:14 AM, Jay S Bryant  wrote:

> All,
>
> Jim Carey and I have been working on getting the right solution for
> making lazy_translation work through Nova and Cinder.
>
> The patch should have also had changes to remove the use of str() in any
> LOG or exception messages as well as the removal of any places where
> strings were being '+' ed together.  In the case of Cinder we are doing it
> as two separate patches that are dependent.  I am surprised that this
> change got past Jenkins.  In the case of Cinder and Nova, unit tests caught a
> number of problems.
>
> We will make sure to work with Liang Chen to avoid this happening 
> again. <https://review.openstack.org/#/dashboard/7135>
>


fwiw Thomas Hervé has posted a patch to revert the introduction of lazy
translation:

https://review.openstack.org/#/c/74454/


-- 
IRC: radix
Christopher Armstrong


Re: [openstack-dev] [Heat] lazy translation is breaking Heat

2014-02-18 Thread Christopher Armstrong
I've filed a bug about this, https://bugs.launchpad.net/heat/+bug/1281644


On Tue, Feb 18, 2014 at 9:15 AM, Christopher Armstrong <
chris.armstr...@rackspace.com> wrote:

> This change was recently merged:
>
> https://review.openstack.org/#/c/69133/
>
> Unfortunately it didn't enable lazy translations for the unit tests, so it
> didn't catch the many places in Heat that won't work when lazy translations
> are enabled. Notably there are a lot of cases where the code adds the
> result of a call to _() with another string, and Message objects (which are
> returned from _ when lazy translations are enabled) can't be added,
> resulting in an exception being raised.
>
> I think the best course of action would be to revert this change, and then
> reintroduce it along with patches to fix all the problems, while enabling
> it for the unit tests so bugs won't be reintroduced in the future.
>
> Interestingly, it also didn't fail any of the tempest tests; I'm not sure
> why.
>
> --
> IRC: radix
> Christopher Armstrong
>



-- 
IRC: radix
Christopher Armstrong


[openstack-dev] [Heat] lazy translation is breaking Heat

2014-02-18 Thread Christopher Armstrong
This change was recently merged:

https://review.openstack.org/#/c/69133/

Unfortunately it didn't enable lazy translations for the unit tests, so it
didn't catch the many places in Heat that won't work when lazy translations
are enabled. Notably there are a lot of cases where the code adds the
result of a call to _() with another string, and Message objects (which are
returned from _ when lazy translations are enabled) can't be added,
resulting in an exception being raised.

I think the best course of action would be to revert this change, and then
reintroduce it along with patches to fix all the problems, while enabling
it for the unit tests so bugs won't be reintroduced in the future.

Interestingly, it also didn't fail any of the tempest tests; I'm not sure
why.

-- 
IRC: radix
Christopher Armstrong


Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-04 Thread Christopher Armstrong
On Tue, Feb 4, 2014 at 7:34 PM, Robert Collins wrote:

> On 5 February 2014 13:14, Zane Bitter  wrote:
>
>
> > That's not a great example, because one DB server depends on the other,
> > forcing them into updating serially anyway.
> >
> > I have to say that even in general, this whole idea about applying update
> > policies to non-grouped resources doesn't make a whole lot of sense to
> me.
> > For non-grouped resources you control the resource definitions
> individually
> > - if you don't want them to update at a particular time, you have the
> option
> > of just not updating them.
>
> Well, I don't particularly like the idea of doing thousands of
> discrete heat stack-update calls, which would seem to be what you're
> proposing.
>
> On groups: autoscale groups are a problem for secure minded
> deployments because every server has identical resources (today) and
> we very much want discrete credentials per server - at least this is
> my understanding of the reason we're not using scaling groups in
> TripleO.
>
> > Where you _do_ need it is for scaling groups where every server is based
> on
> > the same launch config, so you need a way to control the members
> > individually - by batching up operations (done), adding delays (done) or,
> > even better, notifications and callbacks.
> >
> > So it seems like doing 'rolling' updates for any random subset of
> resources
> > is effectively turning Heat into something of a poor-man's workflow
> service,
> > and IMHO that is probably a mistake.
>
> I meant to reply to the other thread, but here is just as good :) -
> heat as a way to describe the intended state, and heat takes care of
> transitions, is a brilliant model. It absolutely implies a bunch of
> workflows - the AWS update policy is probably the key example.
>
> Being able to gracefully, *automatically* work through a transition
> between two defined states, allowing the nodes in question to take
> care of their own needs along the way seems like a pretty core
> function to fit inside Heat itself. It's not at all the same as 'allow
> users to define arbitrary workflows'.
>
> -Rob
>
>
Agreed. I have been assuming that the autoscaling service outside of the
Heat engine would need to send several pre-calculated template changes in
sequence in order to implement rolling updates for resource groups, but I
think it would be much much better if Heat could take care of this as a
core feature.



-- 
Christopher Armstrong
http://twitter.com/radix/
http://github.com/radix/
http://radix.twistedmatrix.com/
http://planet-if.com/


Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-04 Thread Christopher Armstrong
'd like to model though. This is quite
>> concrete and would be entirely hidden from template authors, though not
>> from resource plugin authors. Attributes sound like something where you
>> want the template authors to get involved in specifying, but maybe that
>> was just an overloaded term.
>>
>> So perhaps we can replace this interface with the generic one when your
>> use case is more clear?
>>
>
> I'm not sure about the implementation Thomas proposed, but I believe the
> use case he has in mind is the third of the four I listed above (replace a
> server in a scaling group).
>
>

I think another use case is temporarily removing a server from a load
balancer when it's being e.g. resized.


-- 
IRC: radix
http://twitter.com/radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-03 Thread Christopher Armstrong
Heya Clint, this BP looks really good - it should significantly simplify
the implementation of scaling if this becomes a core Heat feature. Comments
below.

On Mon, Feb 3, 2014 at 2:46 PM, Thomas Herve wrote:

> > So, I wrote the original rolling updates spec about a year ago, and the
> > time has come to get serious about implementation. I went through it and
> > basically rewrote the entire thing to reflect the knowledge I have
> > gained from a year of working with Heat.
> >
> > Any and all comments are welcome. I intend to start implementation very
> > soon, as this is an important component of the HA story for TripleO:
> >
> > https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates
>
> Hi Clint, thanks for pushing this.
>
> First, I don't think RollingUpdatePattern and CanaryUpdatePattern should
> be 2 different entities. The second just looks like a parametrization of
> the first (growth_factor=1?).
>
>
Agreed.



> I then feel that using (abusing?) depends_on for update pattern is a bit
> weird. Maybe I'm influenced by the CFN design, but the separate
> UpdatePolicy attribute feels better (although I would probably use a
> property). I guess my main question is around the meaning of using the
> update pattern on a server instance. I think I see what you want to do for
> the group, where child_updating would return a number, but I have no idea
> what it means for a single resource. Could you detail the operation a bit
> more in the document?
>
>

I agree that depends_on is weird and I think it should be avoided. I'm not
sure a property is the right decision, though, assuming that it's the heat
engine that's dealing with the rolling updates -- I think having the engine
reach into a resource's properties would set a strange precedent. The CFN
design does seem pretty reasonable to me, assuming an "update_policy" field
in a HOT resource, referring to the policy that the resource should use.


It also seems that the interface you're creating
> (child_creating/child_updating) is fairly specific to your use case. For
> autoscaling we have a need for a more generic notification system; it would
> be nice to find common ground. Maybe we can invert the relationship? Add a
> "notified_resources" attribute, which would call hooks on the "parent" when
> actions are happening.
>
>

Yeah, this would be really helpful for stuff like load balancer
notifications (and any of a number of different resource relationships).

-- 
IRC: radix
http://twitter.com/radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [heat] Stack convergence first steps

2013-12-05 Thread Christopher Armstrong
On Thu, Dec 5, 2013 at 7:25 PM, Randall Burt wrote:

>  On Dec 5, 2013, at 6:25 PM, Christopher Armstrong <
> chris.armstr...@rackspace.com>
>  wrote:
>
>   On Thu, Dec 5, 2013 at 3:50 PM, Anderson Mesquita <
> anderson...@thoughtworks.com> wrote:
>
>> Hey stackers,
>>
>> We've been working towards making stack convergence (
>> https://blueprints.launchpad.net/heat/+spec/stack-convergence) one step
>> closer to being ready at a time.  After the first patch was submitted we
>> got positive feedback on it as well as some good suggestions as to how to
>> move it forward.
>>
>> The first step (https://blueprints.launchpad.net/heat/+spec/stack-check)
>> is to get all the statuses back from the real world resources and update
>> our stacks accordingly so that we'll be able to move on to the next step:
>> converge it to the desired state, fixing any errors that may have happened.
>>
>> We just submitted another WiP for review, and as we were doing it, a few
>> questions were raised and we'd like to get everybody's input on them. Our
>> main concern is around the use and purpose of the `status` of a
>> stack/resource.  `status` currently appears to represent the status of the
>> last action taken, and it seems that we may need to repurpose it or
>> possibly create something else to represent a stack's "health" (i.e.
>> everything is up and running as expected, something smells fishy, something
>> broke, the stack is doomed).  We described this thoroughly here:
>> https://etherpad.openstack.org/p/heat-convergence
>>
>> Any thoughts?
>>
>> Cheers,
>>
>>
>  I think a lot of OpenStack projects use "status" fields as "status of
> the most recent operation", and I think it's totally wrong. "status" should
> be a known state of the _object_, not an action, and if we need statuses
> for operations, then those operations should be addressable REST objects.
> Of course there are cases where object status should be updated to reflect
> an operating status if it's a completely exclusive operation (BUILDING and
> DELETING make sense, for example).
>
>
>  Actually, I think most projects are the opposite where "status" means
> "what's the state of the resource" (Nova, Trove, Cinder, etc), whereas Heat
> uses status as the state of the last operation. Probably wouldn't be too
> terrible to have a new "state" for stacks and their resources then perhaps
> deprecate and use "status" in the accepted way in the v2 API?
>
>

Well, my point is that it's done inconsistently. Yes, it's mostly used as
an object status, but nova for example uses it as an operation status for
things like resize.


Re: [openstack-dev] [heat] Stack convergence first steps

2013-12-05 Thread Christopher Armstrong
On Thu, Dec 5, 2013 at 3:50 PM, Anderson Mesquita <
anderson...@thoughtworks.com> wrote:

> Hey stackers,
>
> We've been working towards making stack convergence (
> https://blueprints.launchpad.net/heat/+spec/stack-convergence) one step
> closer to being ready at a time.  After the first patch was submitted we
> got positive feedback on it as well as some good suggestions as to how to
> move it forward.
>
> The first step (https://blueprints.launchpad.net/heat/+spec/stack-check)
> is to get all the statuses back from the real world resources and update
> our stacks accordingly so that we'll be able to move on to the next step:
> converge it to the desired state, fixing any errors that may have happened.
>
> We just submitted another WiP for review, and as we were doing it, a few
> questions were raised and we'd like to get everybody's input on them. Our
> main concern is around the use and purpose of the `status` of a
> stack/resource.  `status` currently appears to represent the status of the
> last action taken, and it seems that we may need to repurpose it or
> possibly create something else to represent a stack's "health" (i.e.
> everything is up and running as expected, something smells fishy, something
> broke, the stack is doomed).  We described this thoroughly here:
> https://etherpad.openstack.org/p/heat-convergence
>
> Any thoughts?
>
> Cheers,
>
>
I think a lot of OpenStack projects use "status" fields as "status of the
most recent operation", and I think it's totally wrong. "status" should be
a known state of the _object_, not an action, and if we need statuses for
operations, then those operations should be addressable REST objects. Of
course there are cases where object status should be updated to reflect an
operating status if it's a completely exclusive operation (BUILDING and
DELETING make sense, for example).
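
As a purely hypothetical sketch of that separation (none of these endpoints
or field names are real OpenStack APIs):

    # Hypothetical payloads only -- not actual Heat/Nova responses.
    # The object carries a state; each operation is its own addressable
    # resource with its own status.

    # GET /stacks/web-tier
    stack = {
        "name": "web-tier",
        "state": "ACTIVE",          # known state of the object itself
    }

    # GET /stacks/web-tier/operations/42
    operation = {
        "id": 42,
        "action": "RESIZE",
        "status": "IN_PROGRESS",    # status of this one operation
    }

    print("%s / %s" % (stack["state"], operation["status"]))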

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Solum] Unicode strings in Python3

2013-12-05 Thread Christopher Armstrong
On Thu, Dec 5, 2013 at 3:26 AM, Julien Danjou  wrote:

> On Wed, Dec 04 2013, Georgy Okrokvertskhov wrote:
>
> > Quick summary: you can't use the unicode() function and u' ' strings in
> > Python 3.
>
> Not that it's advised, but you can use u' ' back again with Python 3.3.
>
>
And this is a very useful feature for projects that want to have a single
codebase that runs on both python 2 and python 3, so it's worth taking
advantage of.
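
For example, a small sketch that runs unchanged on Python 2 and Python 3.3+
(PEP 414 reinstated the u'' prefix in 3.3):

    # The u'' prefix is a syntax error only on Python 3.0-3.2; on 2.x and
    # 3.3+ the same literal parses, so no unicode()/wrapper calls are needed.
    import sys

    greeting = u'Hello, world'

    if sys.version_info[0] == 2:
        assert isinstance(greeting, unicode)  # noqa: F821 - py2-only name
    else:
        assert isinstance(greeting, str)

    print(greeting)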

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [heat][horizon]Heat UI related requirements & roadmap

2013-11-26 Thread Christopher Armstrong
On Tue, Nov 26, 2013 at 4:35 PM, Tim Schnell wrote:

>
>From: Christopher Armstrong 
>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, November 26, 2013 4:02 PM
>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements
> & roadmap
>
>On Tue, Nov 26, 2013 at 3:24 PM, Tim Schnell wrote:
>
>> So the original question that I attempted to pose was, "Can we add a
>> schema-less metadata section to the template that can be used for a
>> variety of purposes?". It looks like the answer is no, we need to discuss
>> the features that would go in the metadata section and add them to the HOT
>> specification if they are viable. I don't necessarily agree with this
>> answer but I accept it as viable and take responsibility for the
>> long-winded process that it took to get to this point.
>>
>> I think some valid points have been made and I have re-focused my efforts
>> into the following proposed solution.
>>
>> I am fine with getting rid of the concept of a schema-less metadata
>> section. If we can arrive at a workable design for a few use cases then I
>> think that we won't need to discuss any of the options that Zane mentioned
>> for handling the metadata section, comments, separate file, or in the
>> template body.
>>
>> Use Case #1
>> I see valid value in being able to group templates based on a type or
>> keyword. This would allow any client, Horizon or a Template Catalog
>> service, to better organize and handle display options for an end-user.
>>
>> I believe that Ladislav initially proposed a solution that will work here.
>> So I will second a proposal that we add a new top-level field to the HOT
>> specification called "keywords" that contains this template type.
>>
>> keywords: wordpress, mysql, etc
>>
>>
>>
>  My immediate inclination would be to just make keywords/tags out-of-band
> metadata managed by the template repository. I imagine this would be
> something that would be very useful to change without having to edit the
> template anyway.
>
>  *I'm not exactly sure what you are suggesting here, but I think that
> adding these keywords to the template will be less error prone than
> attempting to derive them some other way.*
>
>
Basically, I'm just suggesting putting the tags outside of template. Not
deriving them -- I still think they should be explicitly specified, but
just putting them in e.g. the database instead of directly in the template.

Basically, in a public repository of templates, I can imagine tags being
based on third-party or moderator input, instead of just based on what the
template author says.  Keeping them outside of the template would allow
content moderators to do that without posting a new version of the template.

Anyway, I don't feel that strongly about this - if there's a strong enough
desire to see tags in the template, then I won't argue against it.



>
>
>> Use Case #2
>> The template author should also be able to explicitly define a help string
>> that is distinct and separate from the description of an individual
>> parameter. An example where this use case originated was with Nova
>> Keypairs. The description of a keypair parameter might be something like,
>> "This is the name of a nova key pair that will be used to ssh to the
>> compute instance." A help string for this same parameter would be, "To
>> learn more about nova keypairs click on this help article."
>>
>> I propose adding an additional field to the parameter definition:
>>
>> Parameters:
>>   <parameter name>:
>>     description: This is the name of a nova key pair that will be used to
>>       ssh to the compute instance.
>>     help: To learn more about nova key pairs click on
>>       this <a href="/some/url/">help article</a>.
>>
>>
>  This one seems a bit weirder. I don't really understand what's wrong
> with just adding this content to the description field. However, if there
> are currently any objects in HOT that don't have any mechanism for
> providing a description, we should definitely add them where they're
> missing. Do you think we need to extend the semantics of the "description"
> field to allow HTML?
>
>  *Description and help are separate things from a UI perspective. A
> descr

Re: [openstack-dev] [heat][horizon]Heat UI related requirements & roadmap

2013-11-26 Thread Christopher Armstrong
On Tue, Nov 26, 2013 at 3:24 PM, Tim Schnell wrote:

> So the original question that I attempted to pose was, "Can we add a
> schema-less metadata section to the template that can be used for a
> variety of purposes?". It looks like the answer is no, we need to discuss
> the features that would go in the metadata section and add them to the HOT
> specification if they are viable. I don't necessarily agree with this
> answer but I accept it as viable and take responsibility for the
> long-winded process that it took to get to this point.
>
> I think some valid points have been made and I have re-focused my efforts
> into the following proposed solution.
>
> I am fine with getting rid of the concept of a schema-less metadata
> section. If we can arrive at a workable design for a few use cases then I
> think that we won't need to discuss any of the options that Zane mentioned
> for handling the metadata section, comments, separate file, or in the
> template body.
>
> Use Case #1
> I see valid value in being able to group templates based on a type or
> keyword. This would allow any client, Horizon or a Template Catalog
> service, to better organize and handle display options for an end-user.
>
> I believe that Ladislav initially proposed a solution that will work here.
> So I will second a proposal that we add a new top-level field to the HOT
> specification called "keywords" that contains this template type.
>
> keywords: wordpress, mysql, etc
>
>
>
My immediate inclination would be to just make keywords/tags out-of-band
metadata managed by the template repository. I imagine this would be
something that would be very useful to change without having to edit the
template anyway.



> Use Case #2
> The template author should also be able to explicitly define a help string
> that is distinct and separate from the description of an individual
> parameter. An example where this use case originated was with Nova
> Keypairs. The description of a keypair parameter might be something like,
> "This is the name of a nova key pair that will be used to ssh to the
> compute instance." A help string for this same parameter would be, "To
> learn more about nova keypairs click on this help article."
>
> I propose adding an additional field to the parameter definition:
>
> Parameters:
>   <parameter name>:
>     description: This is the name of a nova key pair that will be used to
>       ssh to the compute instance.
>     help: To learn more about nova key pairs click on
>       this <a href="/some/url/">help article</a>.
>
>
This one seems a bit weirder. I don't really understand what's wrong with
just adding this content to the description field. However, if there are
currently any objects in HOT that don't have any mechanism for providing a
description, we should definitely add them where they're missing. Do you
think we need to extend the semantics of the "description" field to allow
HTML?



> Use Case #3
> Grouping parameters would help the client make smarter decisions about how
> to display the parameters for input to the end-user. This is so that all
> parameters related to some database resource can be intelligently grouped
> together. In addition to grouping these parameters together, there should
> be a method to ensuring that the order within the group of parameters can
> be explicitly stated. This way, the client can return a group of database
> parameters and the template author can indicate that the database instance
> name should be first, then the username, then the password, instead of
> that group being returned in a random order.
>
> Parameters:
>   db_name:
>     group: db
>     order: 0
>   db_username:
>     group: db
>     order: 1
>   db_password:
>     group: db
>     order: 2
>   web_node_name:
>     group: web_node
>     order: 0
>   keypair:
>     group: web_node
>     order: 1
>
>
>
>
Have you considered just rendering them in the order that they appear in
the template? I realize it's not the same (since you don't have any group
names that you could use as a title for "boxes" around groups of
parameters), but it might be a good enough compromise. If you think it's
absolutely mandatory to be able to group them in named groups, then I would
actually propose a prettier syntax:

ParameterGroups:
    db:
    name

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Christopher Armstrong
On Thu, Nov 21, 2013 at 12:31 PM, Zane Bitter  wrote:

> On 21/11/13 18:44, Christopher Armstrong wrote:
>
>>
>> 2) It relies on a plugin being present for any type of thing you
>> might want to notify.
>>
>>
>> I don't understand this point. What do you mean by a plugin? I was
>> assuming OS::Neutron::PoolMember (not LoadBalancerMember -- I went and
>> looked up the actual name) would become a standard Heat resource, not a
>> third-party thing (though third parties could provide their own through
>> the usual heat extension mechanisms).
>>
>
> I mean it requires a resource type plugin written in Python. So cloud
> operators could provide their own implementations, but ordinary users could
> not.
>
>
Okay, but that sounds like a general problem to solve (custom third-party
plugins supplied by the user instead of cloud operators, which is an idea I
really love btw), and I don't see why it should be a point against the idea
of simply using a Neutron::PoolMember in a scaling unit.

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Christopher Armstrong
On Thu, Nov 21, 2013 at 5:18 AM, Zane Bitter  wrote:

> On 20/11/13 23:49, Christopher Armstrong wrote:
>
>> On Wed, Nov 20, 2013 at 2:07 PM, Zane Bitter wrote:
>>
>> On 20/11/13 16:07, Christopher Armstrong wrote:
>>
>> On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter wrote:
>>
>>  On 19/11/13 19:14, Christopher Armstrong wrote:
>>
>> thought we had a workable solution with the "LoadBalancerMember"
>> idea,
>> which you would use in a way somewhat similar to
>> CinderVolumeAttachment
>> in the above example, to hook servers up to load balancers.
>>
>>
>> I haven't seen this proposal at all. Do you have a link? How does it
>> handle the problem of wanting to notify an arbitrary service (i.e.
>> not necessarily a load balancer)?
>>
>>
>> It's been described in the autoscaling wiki page for a while, and I
>> thought the LBMember idea was discussed at the summit, but I wasn't
>> there to verify that :)
>>
>> https://wiki.openstack.org/wiki/Heat/AutoScaling#LBMember.3F
>>
>> Basically, the LoadBalancerMember resource (which is very similar to the
>> CinderVolumeAttachment) would be responsible for removing and adding IPs
>> from/to the load balancer (which is actually a direct mapping to the way
>> the various LB APIs work). Since this resource lives with the server
>> resource inside the scaling unit, we don't really need to get anything
>> _out_ of that stack, only pass _in_ the load balancer ID.
>>
>
> I see a couple of problems with this approach:
>
> 1) It makes the default case hard. There's no way to just specify a server
> and hook it up to a load balancer like you can at the moment. Instead, you
> _have_ to create a template (or template snippet - not really any better)
> to add this extra resource in, even for what should be the most basic,
> default case (scale servers behind a load balancer).
>

We can provide a standard resource/template for this, LoadBalancedServer,
to make the common case trivial and only require the user to pass
parameters, not a whole template.


> 2) It relies on a plugin being present for any type of thing you might
> want to notify.


I don't understand this point. What do you mean by a plugin? I was assuming
OS::Neutron::PoolMember (not LoadBalancerMember -- I went and looked up the
actual name) would become a standard Heat resource, not a third-party thing
(though third parties could provide their own through the usual heat
extension mechanisms).

(fwiw the rackspace load balancer API works identically, so it seems a
pretty standard design).


>
> At summit and - to the best of my recollection - before, we talked about
> scaling a generic group of resources and passing notifications to a generic
> controller, with the types of both defined by the user. I was expecting you
> to propose something based on webhooks, which is why I was surprised not to
> see anything about it in the API. (I'm not prejudging that that is the way
> to go... I'm actually wondering if Marconi has a role to play here.)
>
>
I think the main benefit of PoolMember is:

1) it matches with the Neutron LBaaS API perfectly, just like all the rest
of our resources, which represent individual REST objects.

2) it's already understandable. I don't understand the idea behind
notifications or how they would work to solve our problems. You can keep
saying that the notifications idea will solve our problems, but I can't
figure out how it would solve our problem unless someone actually explains
it :)


-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-20 Thread Christopher Armstrong
On Wed, Nov 20, 2013 at 2:07 PM, Zane Bitter  wrote:

> On 20/11/13 16:07, Christopher Armstrong wrote:
>
>> On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter wrote:
>>
>> On 19/11/13 19:14, Christopher Armstrong wrote:
>>
>>
>>
>> [snip]
>>
>>
>>
>>
>> There are a number of advantages to including the whole template,
>> rather than a resource snippet:
>>   - Templates are versioned!
>>   - Templates accept parameters
>>   - Templates can provide outputs - we'll need these when we go to
>> do notifications (e.g. to load balancers).
>>
>> The obvious downside is there's a lot of fiddly stuff to include in
>> the template (hooking up the parameters and outputs), but this is
>> almost entirely mitigated by the fact that the user can get a
>> template, ready built with the server hooked up, from the API by
>> hitting /resource_types/OS::Nova::Server/template and just edit in
>>
>> the Volume and VolumeAttachment. (For a different example, they
>> could of course begin with a different resource type - the launch
>> config accepts any keys for parameters.) To the extent that this
>> encourages people to write templates where the outputs are actually
>> supplied, it will help reduce the number of people complaining their
>> load balancers aren't forwarding any traffic because they didn't
>> surface the IP addresses.
>>
>>
>>
>> My immediate reaction is to counter-propose just specifying an entire
>> template instead of parameters and template separately, but I think the
>>
>
> As an API, I think that would be fine, though inconsistent between the
> default (no template provided) and non-default cases. When it comes to
> implementing Heat resources to represent those, however, it would make the
> templates much less composable. If you wanted to reference anything from
> the surrounding template (including parameters), you would have to define
> the template inline and resolve references there. Whereas if you can pass
> parameters, then you only need to include the template from a separate
> file, or to reference a URL.


Yeah, that's a good point, but I could also imagine if you're *not*
actually trying to dynamically parameterize the "flavor" and "image" in the
above example, you wouldn't need to use parameters at all, so the example
could get a bit shorter.

(to diverge from the topic momentarily) I've been getting a little bit
concerned about how we'll deal with templates-within-templates... It seems
a *bit* unfortunate that users will be forced to use separate files for
their scaled and outer templates, instead of having the option to specify
them inline, but I can't think of a very satisfying way to solve that
problem. Maybe an "escape" function that prevents heat from evaluating any
of the function calls inside?


>
>  crux will be this point you mentioned:
>>
>>   - Templates can provide outputs - we'll need these when we go to do
>> notifications (e.g. to load balancers).
>>
>> Can you explain this in a bit more depth? It seems like whatever it is
>> may be the real deciding factor that means that your proposal can do
>> something that a "resources" or a "template" parameter can't do.  I
>>
>
> What I'm proposing _is_ a "template" parameter... I don't see any
> difference. A "resources" parameter couldn't do this though, because the
> resources section obviously doesn't contain outputs.
>
> In any event, when we notify a Load Balancer, or _any_ type of thing that
> needs a notification, we need to pass it some data. At the moment, for load
> balancers, we pass the IDs of the servers (I originally thought we passed
> IP addresses directly, hence possibly misleading comments earlier). But our
> scaling unit is a template which may contain multiple servers, or no
> servers. And the thing that gets notified may not even be a load balancer.
> So there is no way to infer what the right data to send is, we will need
> the user to tell us. The outputs section of the template seems like a good
> mechanism to do it.
>
>
Hmm, okay. I still don't think I understand entirely how you expect outputs
to be used, especially in context of the AS API. Can you give an example of
how they would actually be used? I guess I don't yet understand all the
implications of "notification" -- is that a new idea for icehouse?

For what it's worth, I'm coming around to the idea of specifying the 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-20 Thread Christopher Armstrong
On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter  wrote:

> On 19/11/13 19:14, Christopher Armstrong wrote:
>
>>
>>
[snip]



>> It'd be interesting to see some examples, I think. I'll provide some
>> examples of my proposals, with the following caveats:
>>
>
> Excellent idea, thanks :)
>
>
>  - I'm assuming a separation of launch configuration from scaling group,
>> as you proposed -- I don't really have a problem with this.
>> - I'm also writing these examples with the plural "resources" parameter,
>> which there has been some bikeshedding around - I believe the structure
>> can be the same whether we go with singular, plural, or even
>> whole-template-as-a-string.
>>
>> # trivial example: scaling a single server
>>
>> POST /launch_configs
>>
>> {
>>  "name": "my-launch-config",
>>  "resources": {
>>  "my-server": {
>>  "type": "OS::Nova::Server",
>>  "properties": {
>>  "image": "my-image",
>>  "flavor": "my-flavor", # etc...
>>  }
>>  }
>>  }
>> }
>>
>
> This case would be simpler with my proposal, assuming we allow a default:
>
>
>  POST /launch_configs
>
>  {
>   "name": "my-launch-config",
>   "parameters": {
>
>   "image": "my-image",
>   "flavor": "my-flavor", # etc...
>   }
>  }
>
> If we don't allow a default it might be something more like:
>
>
>
>  POST /launch_configs
>
>  {
>   "name": "my-launch-config",
>   "parameters": {
>
>   "image": "my-image",
>   "flavor": "my-flavor", # etc...
>   },
>   "provider_template_uri": "http://heat.example.com/<
> tenant_id>/resources_types/OS::Nova::Server/template"
>
>  }
>
>
>  POST /groups
>>
>> {
>>  "name": "group-name",
>>  "launch_config": "my-launch-config",
>>  "min_size": 0,
>>  "max_size": 0,
>> }
>>
>
> This would be the same.
>
>
>
>> (and then, the user would continue on to create a policy that scales the
>> group, etc)
>>
>> # complex example: scaling a server with an attached volume
>>
>> POST /launch_configs
>>
>> {
>>  "name": "my-launch-config",
>>  "resources": {
>>  "my-volume": {
>>  "type": "OS::Cinder::Volume",
>>  "properties": {
>>  # volume properties...
>>  }
>>  },
>>  "my-server": {
>>  "type": "OS::Nova::Server",
>>  "properties": {
>>  "image": "my-image",
>>  "flavor": "my-flavor", # etc...
>>  }
>>  },
>>  "my-volume-attachment": {
>>  "type": "OS::Cinder::VolumeAttachment",
>>  "properties": {
>>  "volume_id": {"get_resource": "my-volume"},
>>  "instance_uuid": {"get_resource": "my-server"},
>>  "mountpoint": "/mnt/volume"
>>  }
>>  }
>>  }
>> }
>>
>
> This appears slightly more complex on the surface; I'll explain why in a
> second.
>
>
>  POST /launch_configs
>
>  {
>   "name": "my-launch-config",
>   "parameters": {
>
>   "image": "my-image",
>   "flavor": "my-flavor", # etc...
>   }
>   "provider_template": {
>   "hot_format_version": "some random date",
>   "parameters" {
>   "image_name": {
>   "type": "string"
>   },
>   "flavor": {
>   "type": "string"
>   } # &c. ...
>
>   },
>   "resources" {
>  

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-19 Thread Christopher Armstrong
On Mon, Nov 18, 2013 at 5:57 AM, Zane Bitter  wrote:

> On 16/11/13 11:15, Angus Salkeld wrote:
>
>> On 15/11/13 08:46 -0600, Christopher Armstrong wrote:
>>
>>> On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter  wrote:
>>>
>>>  On 15/11/13 02:48, Christopher Armstrong wrote:
>>>>
>>>>  On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld >>>> <mailto:asalk...@redhat.com>> wrote:
>>>>>
>>>>> On 14/11/13 10:19 -0600, Christopher Armstrong wrote:
>>>>>
>>>>> http://docs.heatautoscale.apiary.io/
>>>>>
>>>>> I've thrown together a rough sketch of the proposed API for
>>>>> autoscaling.
>>>>> It's written in API-Blueprint format (which is a simple subset
>>>>> of Markdown)
>>>>> and provides schemas for inputs and outputs using JSON-Schema.
>>>>> The source
>>>>> document is currently at
>>>>> https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
>>>>>
>>>>>
>>>>> Things we still need to figure out:
>>>>>
>>>>> - how to scope projects/domains. put them in the URL? get them
>>>>> from the
>>>>> token?
>>>>> - how webhooks are done (though this shouldn't affect the API
>>>>> too much;
>>>>> they're basically just opaque)
>>>>>
>>>>> Please read and comment :)
>>>>>
>>>>>
>>>>> Hi Christopher
>>>>>
>>>>> In the group create object you have 'resources'.
>>>>> Can you explain what you expect in there? I thought we talked at
>>>>> summit about have a unit of scaling as a nested stack.
>>>>>
>>>>> The thinking here was:
>>>>> - this makes the new config stuff easier to scale (config get
>>>>> applied
>>>>>   per scaling stack)
>>>>>
>>>>> - you can potentially place notification resources in the scaling
>>>>>   stack (think marconi message resource - on-create it sends a
>>>>>   message)
>>>>>
>>>>> - no need for a launchconfig
>>>>> - you can place a LoadbalancerMember resource in the scaling stack
>>>>>   that triggers the loadbalancer to add/remove it from the lb.
>>>>>
>>>>>
>>>>> I guess what I am saying is I'd expect an api to a nested stack.
>>>>>
>>>>>
>>>>> Well, what I'm thinking now is that instead of "resources" (a
>>>>> mapping of
>>>>> resources), just have "resource", which can be the template definition
>>>>> for a single resource. This would then allow the user to specify a
>>>>> Stack
>>>>> resource if they want to provide multiple resources. How does that
>>>>> sound?
>>>>>
>>>>>
>>>> My thought was this (digging into the implementation here a bit):
>>>>
>>>> - Basically, the autoscaling code works as it does now: creates a
>>>> template
>>>> containing OS::Nova::Server resources (changed from AWS::EC2::Instance),
>>>> with the properties obtained from the LaunchConfig, and creates a
>>>> stack in
>>>> Heat.
>>>> - LaunchConfig can now contain any properties you like (I'm not 100%
>>>> sure
>>>> about this one*).
>>>> - The user optionally supplies a template. If the template is
>>>> supplied, it
>>>> is passed to Heat and set in the environment as the provider for the
>>>> OS::Nova::Server resource.
>>>>
>>>>
>>>>  I don't like the idea of binding to OS::Nova::Server specifically for
>>> autoscaling. I'd rather have the ability to scale *any* resource,
>>> including
>>> nested stacks or custom resources. It seems like jumping through hoops to
>>>

Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-18 Thread Christopher Armstrong
On Mon, Nov 18, 2013 at 3:00 PM, Dan Smith  wrote:

> Sorry for the delay in responding to this...
>
> >   * Moved the _obj_classes registry magic out of ObjectMetaClass and into
> > its own method for easier use.  Since this is a subclass based
> implementation,
> > having a separate method feels more appropriate for a
> factory/registry
> > pattern.
>
> This is actually how I had it in my initial design because I like
> explicit registration. We went off on this MetaClass tangent, which buys
> us certain things, but which also makes certain things quite difficult.
>
> Pros for metaclass approach:
>  - Avoids having to decorate things (meh)
>  - Automatic to the point of not being able to create an object type
>without registering it even if you wanted to
>
> Cons for metaclass approach:
>  - Maybe a bit too magical
>  - Can make testing hard (see where we save/restore the registry
>between each test)
>  - I think it might make subclass implementations harder
>  - Definitely more complicated to understand
>
> Chris much preferred the metaclass approach, so I'm including him here.
> He had some reasoning that won out in the original discussion, although
> I don't really remember what that was.
>
>
It's almost always possible to go without metaclasses without losing much
relevant brevity, and improving clarity. I strongly recommend against their
use.

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-17 Thread Christopher Armstrong
On Sun, Nov 17, 2013 at 2:57 PM, Steve Baker  wrote:

> On 11/15/2013 05:19 AM, Christopher Armstrong wrote:
> > http://docs.heatautoscale.apiary.io/
> >
> > I've thrown together a rough sketch of the proposed API for
> > autoscaling. It's written in API-Blueprint format (which is a simple
> > subset of Markdown) and provides schemas for inputs and outputs using
> > JSON-Schema. The source document is currently
> > at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
> >
> Apologies if I'm about to re-litigate an old argument, but...
>
> At summit we discussed creating a new endpoint (and new pythonclient)
> for autoscaling. Instead I think the autoscaling API could just be added
> to the existing heat-api endpoint.
>
> Arguments for just making auto scaling part of heat api include:
> * Significantly less development, packaging and deployment configuration
> of not creating a heat-autoscaling-api and python-autoscalingclient
> * Autoscaling is orchestration (for some definition of orchestration) so
> belongs in the orchestration service endpoint
> * The autoscaling API includes heat template snippets, so a heat service
> is a required dependency for deployers anyway
> * End-users are still free to use the autoscaling portion of the heat
> API without necessarily being aware of (or directly using) heat
> templates and stacks
> * It seems acceptable for single endpoints to manage many resources (eg,
> the increasingly disparate list of resources available via the neutron API)
>
> Arguments for making a new auto scaling api include:
> * Autoscaling is not orchestration (for some narrower definition of
> orchestration)
> * Autoscaling implementation will be handled by something other than
> heat engine (I have assumed the opposite)
> (no doubt this list will be added to in this thread)
>
> What do you think?
>
>
I would be fine with this. Putting the API at the same endpoint as Heat's
API can be done whether we decide to document the API as a separate thing
or not. Would you prefer to see it as literally just more features added to
the Heat API, or an "autoscaling API" that just happens to live at the same
endpoint?

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-15 Thread Christopher Armstrong
On Fri, Nov 15, 2013 at 4:16 AM, Zane Bitter  wrote:

> On 14/11/13 19:58, Christopher Armstrong wrote:
>
>> On Thu, Nov 14, 2013 at 10:44 AM, Zane Bitter wrote:
>>
>> On 14/11/13 18:51, Randall Burt wrote:
>>
>> Perhaps, but I also miss important information as a legitimate
>> caller as
>> to whether or not my scaling action actually happened or I've
>> been a
>> little too aggressive with my curl commands. The fact that I get
>> anything other than 404 (which the spec returns if its not a
>> legit hook)
>> means I've found *something* and can simply call it endlessly in
>> a loop
>> causing havoc. Perhaps the web hooks *should* be authenticated?
>> This
>> seems like a pretty large hole to me, especially if I can max
>> someone's
>> resources by guessing the right url.
>>
>>
>> Web hooks MUST be authenticated.
>>
>>
>>
>> Do you mean they should have an X-Auth-Token passed? Or an X-Trust-ID?
>>
>
> Maybe an X-Auth-Token, though in many cases I imagine it would be derived
> from a Trust. In any event, it should be something provided by Keystone
> because that is where authentication implementations belong in OpenStack.
>
>
>  The idea was that webhooks are secret (and should generally only be
>> passed around through automated systems, not with human interaction).
>> This is usually how webhooks work, and it's actually how they work now
>> in Heat -- even though there's a lot of posturing about signed requests
>> and so forth, in the end they are literally just secret URLs that give
>> you the capability to perform some operation (if you have the URL, you
>> don't need anything else to execute them). I think we should simplify
>> this to to just be a random revokable blob.
>>
>
> This is the weakest possible form of security - the whole secret gets
> passed on the wire for every request and logged in innumerable places.
> There's no protection at all against replay attacks (other than, hopefully,
> SSL).
>
> A signature, a timestamp and a nonce all seem like prudent precautions to
> add.
>
>
I can get behind the idea of adding timestamp and nonce + signature for the
webhooks, as long as they're handled better than they are now :) i.e., the
webhook handler should assert that the timestamp is recent and
non-repeated. This probably means storing stuff in MySQL (or a centralized
in-memory DB). My understanding is that even though we have signed URLs for
webhooks in the current Heat autoscaling system, they're effectively just
static blobs.

My original proposal for simple webhooks was based entirely around the idea
that the current stuff is too complex, and offers no additional security
over a random string jammed into a URL. (signing a static random string
doesn't make it any harder to guess than the original random string...)
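
As a rough sketch of what verification on the receiving side could look like
(purely illustrative, not the existing Heat signed-URL code; a real service
would keep seen nonces in a shared store such as the database mentioned
above, not an in-process set):

    import hashlib
    import hmac
    import time

    SECRET = b'per-webhook-shared-secret'   # illustrative value
    WINDOW_SECONDS = 300
    _seen_nonces = set()                    # stand-in for a shared store

    def sign(payload, timestamp, nonce, secret=SECRET):
        msg = b'|'.join([payload, str(timestamp).encode('ascii'), nonce])
        return hmac.new(secret, msg, hashlib.sha256).hexdigest()

    def verify(payload, timestamp, nonce, signature, secret=SECRET):
        if abs(time.time() - timestamp) > WINDOW_SECONDS:
            return False                    # stale timestamp
        if nonce in _seen_nonces:
            return False                    # replayed nonce
        expected = sign(payload, timestamp, nonce, secret)
        if not hmac.compare_digest(expected, signature):
            return False                    # bad signature
        _seen_nonces.add(nonce)
        return True

    now = int(time.time())
    sig = sign(b'scale-up', now, b'nonce-1')
    print(verify(b'scale-up', now, b'nonce-1', sig))   # True
    print(verify(b'scale-up', now, b'nonce-1', sig))   # False: replay rejected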

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-15 Thread Christopher Armstrong
On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter  wrote:

> On 15/11/13 02:48, Christopher Armstrong wrote:
>
>> On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld wrote:
>>
>> On 14/11/13 10:19 -0600, Christopher Armstrong wrote:
>>
>> http://docs.heatautoscale.apiary.io/
>>
>> I've thrown together a rough sketch of the proposed API for
>> autoscaling.
>> It's written in API-Blueprint format (which is a simple subset
>> of Markdown)
>> and provides schemas for inputs and outputs using JSON-Schema.
>> The source
>> document is currently at
>> https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
>>
>>
>> Things we still need to figure out:
>>
>> - how to scope projects/domains. put them in the URL? get them
>> from the
>> token?
>> - how webhooks are done (though this shouldn't affect the API
>> too much;
>> they're basically just opaque)
>>
>> Please read and comment :)
>>
>>
>> Hi Christopher
>>
>> In the group create object you have 'resources'.
>> Can you explain what you expect in there? I thought we talked at
>> summit about have a unit of scaling as a nested stack.
>>
>> The thinking here was:
>> - this makes the new config stuff easier to scale (config get applied
>>   per scaling stack)
>>
>> - you can potentially place notification resources in the scaling
>>   stack (think marconi message resource - on-create it sends a
>>   message)
>>
>> - no need for a launchconfig
>> - you can place a LoadbalancerMember resource in the scaling stack
>>   that triggers the loadbalancer to add/remove it from the lb.
>>
>>
>> I guess what I am saying is I'd expect an api to a nested stack.
>>
>>
>> Well, what I'm thinking now is that instead of "resources" (a mapping of
>> resources), just have "resource", which can be the template definition
>> for a single resource. This would then allow the user to specify a Stack
>> resource if they want to provide multiple resources. How does that sound?
>>
>
> My thought was this (digging into the implementation here a bit):
>
> - Basically, the autoscaling code works as it does now: creates a template
> containing OS::Nova::Server resources (changed from AWS::EC2::Instance),
> with the properties obtained from the LaunchConfig, and creates a stack in
> Heat.
> - LaunchConfig can now contain any properties you like (I'm not 100% sure
> about this one*).
> - The user optionally supplies a template. If the template is supplied, it
> is passed to Heat and set in the environment as the provider for the
> OS::Nova::Server resource.
>
>
I don't like the idea of binding to OS::Nova::Server specifically for
autoscaling. I'd rather have the ability to scale *any* resource, including
nested stacks or custom resources. It seems like jumping through hoops to
support custom resources by overriding OS::Nova::Server instead of just
allowing users to specify the resource that they really want directly.

How about we offer two "types" of configuration, one which supports
arbitrary resources and one which supports OS::Nova::Server-specific launch
configurations? We could just add a type="server" / type="resource"
parameter which specifies which type of scaling unit to use.



> This should require no substantive changes to the code since it uses
> existing abstractions, it makes the common case the default, and it avoids
> the overhead of nested stacks in the default case.
>
> cheers,
> Zane.
>
> * One thing the existing LaunchConfig does is steer you in the direction
> of not doing things that won't work - e.g. you can't specify a volume to
> attach to the server, because you can't attach a single boot volume to
> multiple servers. The way to do that correctly will be to include the
> volume in the provider template. So maybe we should define a set of allowed
> properties for the LaunchConfig, and make people hard-code anything else
> they want to do in the provider template, just to make it harder to do
> wrong things. I'm worried that would make composition in general harder
> though.


If we offer a type="server" then the launch configuration can be restricted
to things that can automatically be scaled. I think if users want more
interesting scaling units they should use resources and specify both a
server and a volume as heat resources.

-- 
Christopher Armstrong
http://radix.twistedmatrix.com/
http://planet-if.com/


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld  wrote:

> On 14/11/13 10:19 -0600, Christopher Armstrong wrote:
>
>> http://docs.heatautoscale.apiary.io/
>>
>> I've thrown together a rough sketch of the proposed API for autoscaling.
>> It's written in API-Blueprint format (which is a simple subset of
>> Markdown)
>> and provides schemas for inputs and outputs using JSON-Schema. The source
>> document is currently at
>> https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
>>
>>
>> Things we still need to figure out:
>>
>> - how to scope projects/domains. put them in the URL? get them from the
>> token?
>> - how webhooks are done (though this shouldn't affect the API too much;
>> they're basically just opaque)
>>
>> Please read and comment :)
>>
>>
> Hi Christopher
>
> In the group create object you have 'resources'.
> Can you explain what you expect in there? I thought we talked at
> summit about have a unit of scaling as a nested stack.
>
> The thinking here was:
> - this makes the new config stuff easier to scale (config get applied
>   per scaling stack)
> - you can potentially place notification resources in the scaling
>   stack (think marconi message resource - on-create it sends a
>   message)
> - no need for a launchconfig
> - you can place a LoadbalancerMember resource in the scaling stack
>   that triggers the loadbalancer to add/remove it from the lb.
>
> I guess what I am saying is I'd expect an api to a nested stack.
>
>
Well, what I'm thinking now is that instead of "resources" (a mapping of
resources), just have "resource", which can be the template definition for
a single resource. This would then allow the user to specify a Stack
resource if they want to provide multiple resources. How does that sound?

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 12:52 PM, Randall Burt
wrote:

>
>  On Nov 14, 2013, at 1:05 PM, Christopher Armstrong <
> chris.armstr...@rackspace.com> wrote:
>
>  On Thu, Nov 14, 2013 at 11:00 AM, Randall Burt <
> randall.b...@rackspace.com> wrote:
>
>>
>> On Nov 14, 2013, at 12:44 PM, Zane Bitter 
>>  wrote:
>>
>> > On 14/11/13 18:51, Randall Burt wrote:
>> >>
>> >> On Nov 14, 2013, at 11:30 AM, Christopher Armstrong
>> >> mailto:chris.armstr...@rackspace.com>>
>> >>  wrote:
>> >>
>> >>> On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
>> >>> mailto:randall.b...@rackspace.com>>
>> wrote:
>> >>>Regarding web hook execution and cool down, I think the response
>> >>>should be something like 307 if the hook is on cool down with an
>> >>>appropriate retry-after header.
>> >
>> > I strongly disagree with this even ignoring the security issue
>> mentioned below. Being in the cooldown period is NOT an error, and the
>> caller should absolutely NOT try again later - the request has been
>> received and correctly acted upon (by doing nothing).
>>
>>  But how do I know nothing was done? I may have very good reasons to
>> re-scale outside of ceilometer or other mechanisms and absolutely SHOULD
>> try again later.  As it stands, I have no way of knowing that my scaling
>> action didn't happen without examining my physical resources. 307 is a
>> legitimate response in these cases, but I'm certainly open to other
>> suggestions.
>>
>>
>  I agree there should be a way to find out what happened, but in a way
> that requires a more strongly authenticated request. My preference would be
> to use an audit log system (I haven't been keeping up with the current
> thoughts on the design for Heat's event/log API) that can be inspected via
> API.
>
>
>  Fair enough. I'm just thinking of folks who want to set this up but use
> external tools/monitoring solutions for the actual eventing. Having those
> tools grep through event logs seems a tad cumbersome, but I do understand
> the desire to make these un-authenticated secrets makes that terribly
> difficult.
>
>
Calling it "unauthenticated" might be a bit misleading; it's authenticated
by the knowledge of the URL (which implies a trust and policy to execute).


-- 
Christopher Armstrong
http://radix.twistedmatrix.com/
http://planet-if.com/


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 11:00 AM, Randall Burt
wrote:

>
> On Nov 14, 2013, at 12:44 PM, Zane Bitter 
>  wrote:
>
> > On 14/11/13 18:51, Randall Burt wrote:
> >>
> >> On Nov 14, 2013, at 11:30 AM, Christopher Armstrong
> >> mailto:chris.armstr...@rackspace.com>>
> >>  wrote:
> >>
> >>> On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
> >>> mailto:randall.b...@rackspace.com>>
> wrote:
> >>>Regarding web hook execution and cool down, I think the response
> >>>should be something like 307 if the hook is on cool down with an
> >>>appropriate retry-after header.
> >
> > I strongly disagree with this even ignoring the security issue mentioned
> below. Being in the cooldown period is NOT an error, and the caller should
> absolutely NOT try again later - the request has been received and
> correctly acted upon (by doing nothing).
>
> But how do I know nothing was done? I may have very good reasons to
> re-scale outside of ceilometer or other mechanisms and absolutely SHOULD
> try again later.  As it stands, I have no way of knowing that my scaling
> action didn't happen without examining my physical resources. 307 is a
> legitimate response in these cases, but I'm certainly open to other
> suggestions.
>
>
I agree there should be a way to find out what happened, but in a way that
requires a more strongly authenticated request. My preference would be to
use an audit log system (I haven't been keeping up with the current
thoughts on the design for Heat's event/log API) that can be inspected via
API.


-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Solum/Heat] Is Solum really necessary?

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 11:04 AM, Sam Alba  wrote:

> Hi Jay,
>
> I think Heat is an ingredient for Solum. When you build a PaaS, you
> need to control the app at different levels:
>
> #1 describing your app (basically your stack)
> #2 Pushing your code
> #3 Deploying it
> #4 Controlling the runtime (restart, get logs, scale, changing
> resources allocation, etc...)
>
> I think Heat is a major component for step 3. But I think Heat's job
> ends at the end of the deployment (the status of the stack is
> "COMPLETED" in Heat after processing the template correctly). It's
> nice though to rely on Heat's template generation for describing the
> stack, it's one more thing to delegate to Heat.
>
> In other words, I see Heat as an engine for deployment (at least in
> the context of Solum) and have something on top to manage the other
> steps.
>

I'd say that Heat does (or should do) more than just the initial deployment
-- especially with recent discussion around healing / convergence.

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 10:44 AM, Zane Bitter  wrote:

> On 14/11/13 18:51, Randall Burt wrote:
>
>>
>> On Nov 14, 2013, at 11:30 AM, Christopher Armstrong
>> mailto:chris.armstr...@rackspace.com>>
>>   wrote:
>>
>>  On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
>>> mailto:randall.b...@rackspace.com>> wrote:
>>> Regarding web hook execution and cool down, I think the response
>>> should be something like 307 if the hook is on cool down with an
>>> appropriate retry-after header.
>>>
>>
> I strongly disagree with this even ignoring the security issue mentioned
> below. Being in the cooldown period is NOT an error, and the caller should
> absolutely NOT try again later - the request has been received and
> correctly acted upon (by doing nothing).
>
>
Yeah, I think it's fine to just let it always be 202. Also, they don't
actually return 404  when they don't exist -- I had that in an earlier
version of the spec but I thought I deleted it before posting it to this
list.



>
>  Indicating whether a webhook was found or whether it actually executed
>>> anything may be an information leak, since webhook URLs require no
>>> additional authentication other than knowledge of the URL itself.
>>> Responding with only 202 means that people won't be able to guess at
>>> random URLs and know when they've found one.
>>>
>>
>> Perhaps, but I also miss important information as a legitimate caller as
>> to whether or not my scaling action actually happened or I've been a
>> little too aggressive with my curl commands. The fact that I get
>> anything other than 404 (which the spec returns if its not a legit hook)
>> means I've found *something* and can simply call it endlessly in a loop
>> causing havoc. Perhaps the web hooks *should* be authenticated? This
>> seems like a pretty large hole to me, especially if I can max someone's
>> resources by guessing the right url.
>>
>
> Web hooks MUST be authenticated.
>


Do you mean they should have an X-Auth-Token passed? Or an X-Trust-ID?

The idea was that webhooks are secret (and should generally only be passed
around through automated systems, not with human interaction). This is
usually how webhooks work, and it's actually how they work now in Heat --
even though there's a lot of posturing about signed requests and so forth,
in the end they are literally just secret URLs that give you the capability
to perform some operation (if you have the URL, you don't need anything
else to execute them). I think we should simplify this to just be a
random revokable blob.
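
A minimal sketch of what I mean (this is not how Heat does it today; the
storage and URL layout below are made up):

    import binascii
    import os

    # token -> (group_id, policy_id); in practice this would be a DB table.
    webhooks = {}

    def create_webhook(group_id, policy_id):
        # The token is the whole secret: knowing it is the capability.
        token = binascii.hexlify(os.urandom(32)).decode()
        webhooks[token] = (group_id, policy_id)
        return "https://as.example.com/v1.0/execute/%s" % token

    def revoke_webhook(token):
        # Revocation is just forgetting the token.
        webhooks.pop(token, None)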

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 10:46 AM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Hi,
>
> It would be great if API specs contain a list of attributes\parameters one
> can pass during group creation. I believe Zane already asked about
> LaunchConfig, but I think new autoscaling API creation was specifically
> designed to move from limited AWS ElasticLB to something with more broad
> features. There is a BP I submitted while ago
> https://blueprints.launchpad.net/heat/+spec/autoscaling-instancse-typization.
> We discussed it in IRC chat with heat team and we got to the conclusion
> that this will be supported in new autoscaling API. Probably it is already
> supported, but it is quite hard to figure this out from the existing API
> specs without examples.
>
>

The API spec does contain a list of attributes/parameters that you can pass
during group creation (and all the other operations) -- see the Schema
sections under each. In case you didn't notice, you can click on each
action to expand details under it.



> Thanks
> Georgy
>
>
> On Thu, Nov 14, 2013 at 9:56 AM, Zane Bitter  wrote:
>
>> On 14/11/13 17:19, Christopher Armstrong wrote:
>>
>>> http://docs.heatautoscale.apiary.io/
>>>
>>> I've thrown together a rough sketch of the proposed API for autoscaling.
>>> It's written in API-Blueprint format (which is a simple subset of
>>> Markdown) and provides schemas for inputs and outputs using JSON-Schema.
>>> The source document is currently at
>>>
>>> https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
>>>
>>>
>>> Things we still need to figure out:
>>>
>>> - how to scope projects/domains. put them in the URL? get them from the
>>> token?
>>> - how webhooks are done (though this shouldn't affect the API too much;
>>> they're basically just opaque)
>>>
>>
>> My 2c: the way I designed the Heat API was such that extant stacks can be
>> addressed uniquely by name. Humans are pretty good with names, not so much
>> with 128 bit numbers. The consequences of this for the design were:
>>  - names must be unique per-tenant
>>  - the tenant-id appears in the endpoint URL
>>
>> However, the rest of OpenStack seems to have gone in a direction where
>> the "name" is really just a comment field, everything is addressed only by
>> UUID. A consequence of this is that it renders the tenant-id in the URL
>> pointless, so many projects are removing it.
>>
>> Unfortunately, one result is that if you create a resource and e.g. miss
>> the Created response for any reason and thus do not have the UUID, there is
>> now no safe, general automated way to delete it again. (There are obviously
>> heuristics you could try.) To solve this problem, there is a proposal
>> floating about for clients to provide another unique ID when making the
>> request, which would render a retry of the request idempotent. That's
>> insufficient, though, because if you decide to roll back instead of retry
>> you still need a way to delete using only this ID.
>>
>> So basically, that design sucks for both humans (who have to remember
>> UUIDs instead of names) and machines (Heat). However, it appears that I am
>> in a minority of one on this point, so take it with a grain of salt.
>>
>>
>>  Please read and comment :)
>>>
>>
>> A few comments...
>>
>> #1 thing is that the launch configuration needs to be somehow
>> represented. In general we want the launch configuration to be a provider
>> template, but we'll want to create a shortcut for the obvious case of just
>> scaling servers. Maybe we pass a provider template (or URL) as well as
>> parameters, and the former is optional.
>>
>> Successful creates should return 201 Created, not 200 OK.
>>
>> Responses from creates should include the UUID as well as the URI.
>> (Getting into minor details here.)
>>
>> Policies are scoped within groups, so do they need a unique id or would a
>> name do?
>>
>> I'm not sure I understand the webhooks part... webhook-exec is the thing
>> that e.g. Ceilometer will use to signal an alarm, right? Why is it not
>> called something like /groups/{group_id}/policies/{policy_id}/alarm ?
>> (Maybe because it requires different auth middleware? Or does it?)
>>
>> And the other ones are setting up the notification actions? Can we call
>> them notifications instead of webhooks? (After all, in the future we will
>> probably want to add Marconi support, and maybe 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
Thanks for the comments, Zane.


On Thu, Nov 14, 2013 at 9:56 AM, Zane Bitter  wrote:

> On 14/11/13 17:19, Christopher Armstrong wrote:
>
>> http://docs.heatautoscale.apiary.io/
>>
>> I've thrown together a rough sketch of the proposed API for autoscaling.
>> It's written in API-Blueprint format (which is a simple subset of
>> Markdown) and provides schemas for inputs and outputs using JSON-Schema.
>> The source document is currently at
>>
>> https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
>>
>>
>> Things we still need to figure out:
>>
>> - how to scope projects/domains. put them in the URL? get them from the
>> token?
>> - how webhooks are done (though this shouldn't affect the API too much;
>> they're basically just opaque)
>>
>
> My 2c: the way I designed the Heat API was such that extant stacks can be
> addressed uniquely by name. Humans are pretty good with names, not so much
> with 128 bit numbers. The consequences of this for the design were:
>  - names must be unique per-tenant
>  - the tenant-id appears in the endpoint URL
>
> However, the rest of OpenStack seems to have gone in a direction where the
> "name" is really just a comment field, everything is addressed only by
> UUID. A consequence of this is that it renders the tenant-id in the URL
> pointless, so many projects are removing it.
>
> Unfortunately, one result is that if you create a resource and e.g. miss
> the Created response for any reason and thus do not have the UUID, there is
> now no safe, general automated way to delete it again. (There are obviously
> heuristics you could try.) To solve this problem, there is a proposal
> floating about for clients to provide another unique ID when making the
> request, which would render a retry of the request idempotent. That's
> insufficient, though, because if you decide to roll back instead of retry
> you still need a way to delete using only this ID.
>
> So basically, that design sucks for both humans (who have to remember
> UUIDs instead of names) and machines (Heat). However, it appears that I am
> in a minority of one on this point, so take it with a grain of salt.
>
>
>  Please read and comment :)
>>
>
> A few comments...
>
> #1 thing is that the launch configuration needs to be somehow represented.
> In general we want the launch configuration to be a provider template, but
> we'll want to create a shortcut for the obvious case of just scaling
> servers. Maybe we pass a provider template (or URL) as well as parameters,
> and the former is optional.
>
>
I'm a little unclear as to what point you're making here. Right now, the
"launch configuration" is specified in the scaling group by the "resources"
property of the request json body. It's not a full template, but just a
"snippet" of a set of resources you want scaled.

As an aside, maybe we should replace this with a singular "resource" and
allow people to use a Stack resource if they want to represent multiple
resources.

I guess we can have a simpler API for using an old-style, server-specific
"launch configuration", but I am skeptical of the benefit, since specifying
a single Instance resource is pretty simple.
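
For reference, the "snippet" form is tiny even in the single-server case
(shown here as a Python dict; the values are obviously made up):

    # A one-resource snippet: just the thing to be multiplied, not a template.
    resources_snippet = {
        "web_node": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "fedora-20", "InstanceType": "m1.small"},
        }
    }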



> Successful creates should return 201 Created, not 200 OK.
>

Okay, I'll update that. I think I also forgot to specify some success
responses for things that need them.


>
> Responses from creates should include the UUID as well as the URI.
> (Getting into minor details here.)
>

Okay.


> Policies are scoped within groups, so do they need a unique id or would a
> name do?
>

I guess we could get rid of the ID and only have a name, what do other
people think?


>
> I'm not sure I understand the webhooks part... webhook-exec is the thing
> that e.g. Ceilometer will use to signal an alarm, right? Why is it not
> called something like /groups/{group_id}/policies/{policy_id}/alarm ?
> (Maybe because it requires different auth middleware? Or does it?)
>

Mostly because it's unnecessary. The idea was to generate a secret, opaque,
revokable ID that maps to the specific policy.


>
> And the other ones are setting up the notification actions? Can we call
> them notifications instead of webhooks? (After all, in the future we will
> probably want to add Marconi support, and maybe even Mistral support.) And
> why are these attached to the policy? Isn't the notification connected to
> changes in the group, rather than anything specific to the policy? Am I
> misunderstanding how this works? What is the difference between 'uri' and
> 'capability_uri'?
>


Policies represent way

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
wrote:

>  Good stuff! Some questions/comments:
>
>  If web hooks are associated with policies and policies are independent
> entities, how does a web hook specify the scaling group to act on? Does
> calling the web hook activate the policy on every associated scaling group?
>
>
Not sure what you mean by "policies are independent entities". You may have
missed that the policy resource lives hierarchically under the group
resource. Policies are strictly associated with one scaling group, so when
a policy is executed (via a webhook), it's acting on the scaling group that
the policy is associated with.



>  Regarding web hook execution and cool down, I think the response should
> be something like 307 if the hook is on cool down with an appropriate
> retry-after header.
>

Indicating whether a webhook was found or whether it actually executed
anything may be an information leak, since webhook URLs require no
additional authentication other than knowledge of the URL itself.
Responding with only 202 means that people won't be able to guess at random
URLs and know when they've found one.



>  On Nov 14, 2013, at 10:57 AM, Randall Burt 
>  wrote:
>
>
>  On Nov 14, 2013, at 10:19 AM, Christopher Armstrong <
> chris.armstr...@rackspace.com>
>  wrote:
>
>  http://docs.heatautoscale.apiary.io/
>
>  I've thrown together a rough sketch of the proposed API for autoscaling.
> It's written in API-Blueprint format (which is a simple subset of Markdown)
> and provides schemas for inputs and outputs using JSON-Schema. The source
> document is currently at
> https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
>
>
>  Things we still need to figure out:
>
>  - how to scope projects/domains. put them in the URL? get them from the
> token?
>
>
>  This may be moot considering the latest from the keystone devs regarding
> token scoping to domains/projects. Basically, a token is scoped to a single
> domain/project from what I understood, so domain/project is implicit. I'm
> still of the mind that the tenant doesn't belong so early in the URI, since
> we can already surmise the actual tenant from the authentication context,
> but that's something for Openstack at large to agree on.
>
>  - how webhooks are done (though this shouldn't affect the API too much;
> they're basically just opaque)
>
>  Please read and comment :)
>
>
>  --
>  IRC: radix
> Christopher Armstrong
> Rackspace
>   ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
IRC: radix
Christopher Armstrong
Rackspace


[openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling.
It's written in API-Blueprint format (which is a simple subset of Markdown)
and provides schemas for inputs and outputs using JSON-Schema. The source
document is currently at
https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the
token?
- how webhooks are done (though this shouldn't affect the API too much;
they're basically just opaque)

Please read and comment :)


-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [heat][mistral] EventScheduler vs Mistral scheduling

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 4:29 AM, Renat Akhmerov wrote:

> As for EventScheduler proposal, I think it actually fits Mistral model
> very well. What is described in EventScheduler is basically the ability to
> configure webhooks to be called periodically or at a certain time. First of
> all, from the very beginning the concept of scheduling has been considered
> a very important capability of Mistral. And from Mistral perspective
> calling a webhook is just a workflow consisting of one task. In order to
> simplify consumption of the service we can implement API methods to work
> specifically with webhooks in a convenient way (without providing any
> workflow definitions using DSL etc.). I have already suggested before that
> we can provide API shortcuts for scheduling individual tasks rather than
> complex workflows so it has an adjacent meaning.
>
> I other words, I now tend to think it doesn’t make sense to have
> EventScheduler a standalone service.
>
> What do you think?
>
>
I agree that I don't think it makes sense to have a whole new project just
for EventScheduler. Mistral seems like a pretty good fit. Convenience APIs
similar to the EventScheduler API for just saying "run this webhook on this
schedule" would be nice, too, but I wouldn't raise a fuss if they didn't
exist and I had to actually define a trivial workflow.

-- 
IRC: radix
Christopher Armstrong
Rackspace


[openstack-dev] [heat][mistral] EventScheduler vs Mistral scheduling

2013-11-12 Thread Christopher Armstrong
Given the recent discussion of scheduled autoscaling at the summit session
on autoscaling, I looked into the state of scheduling-as-a-service in and
around OpenStack. I found two relevant wiki pages:

https://wiki.openstack.org/wiki/EventScheduler

https://wiki.openstack.org/wiki/Mistral/Cloud_Cron_details

The first one proposes and describes in some detail a new service and API
strictly for scheduling the invocation of webhooks.

The second one describes a part of Mistral (in less detail) to basically do
the same, except executing taskflows directly.

Here's the first question: should scalable cloud scheduling exist strictly
as a feature of Mistral, or should it be a separate API that only does
event scheduling? Mistral could potentially make use of the event
scheduling API (or just rely on users using that API directly to get it to
execute their task flows).

Second question: if the proposed "EventScheduler" becomes a real project,
which OpenStack Program should it live under?

Third question: Is anyone actively working on this stuff? :)


-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-12 Thread Christopher Armstrong
On Tue, Nov 12, 2013 at 10:32 AM, Clint Byrum  wrote:

> Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:
> >
> > Hi all,
> >
> > I have just posted the following wiki page to reflect a refined proposal
> > for HOT software configuration based on discussions at the design summit
> > last week. Angus also put a sample up in an etherpad last week, but we
> did
> > not have enough time to go thru it in the design session. My write-up is
> > based on Angus' sample, actually a refinement, and on discussions we had
> in
> > breaks, plus it is trying to reflect all the good input from ML
> discussions
> > and Steve Baker's initial proposal.
> >
> > https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
> >
> > Please review and provide feedback.
>
> Hi Thomas, thanks for spelling this out clearly.
>
> I am still -1 on anything that specifies the place a configuration is
> hosted inside the configuration definition itself. Because configurations
> are encapsulated by servers, it makes more sense to me that the servers
> (or server groups) would specify their configurations. If changing to a
> more logical model is just too hard for TOSCA to adapt to, then I suggest
> this be an area that TOSCA differs from Heat. We don't need two models
> for communicating configurations to servers, and I'd prefer Heat stay
> focused on making HOT template authors' and users' lives better.
>
>

I agree that the specification of which configs go on which servers should
be separated from both. This is how good configuration tools like
Puppet work anyway: you specify your servers, you specify ways to configure
them, and you specify which servers get which configs, all in potentially
separate places for maximum flexibility.
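
A toy sketch of that separation (purely illustrative data, not any real
tool's syntax): the servers, the configs, and the mapping between them are
three independent pieces.

    servers = ["web-1", "web-2", "db-1"]

    configs = {
        "apache": "configs/apache.pp",
        "mysql": "configs/mysql.pp",
    }

    # The only place that ties the two together.
    assignments = {
        "web-1": ["apache"],
        "web-2": ["apache"],
        "db-1": ["mysql"],
    }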

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] Do we need to clean up resource_id after deletion?

2013-11-12 Thread Christopher Armstrong
On Tue, Nov 12, 2013 at 7:47 AM, Zane Bitter  wrote:

> On 02/11/13 05:30, Clint Byrum wrote:
>
>> Excerpts from Christopher Armstrong's message of 2013-11-01 11:34:56
>> -0700:
>>
>>> Vijendar and I are trying to figure out if we need to set the resource_id
>>> of a resource to None when it's being deleted.
>>>
>>> This is done in a few resources, but not everywhere. To me it seems
>>> either
>>>
>>> a) redundant, since the resource is going to be deleted anyway (thus
>>> deleting the row in the DB that has the resource_id column)
>>> b) actively harmful to useful debuggability, since if the resource is
>>> soft-deleted, you'll not be able to find out what physical resource it
>>> represented before it's cleaned up.
>>>
>>> Is there some specific reason we should be calling resource_id_set(None)
>>> in
>>> a check_delete_complete method?
>>>
>>>
>> I've often wondered why some do it, and some don't.
>>
>> Seems to me that it should be done not inside each resource plugin but
>> in the generic resource handling code.
>>
>> However, I have not given this much thought. Perhaps others can provide
>> insight into why it has been done that way.
>>
>
> There was a time in the very early days of Heat development when deleting
> something that had already disappeared usually resulted in an error (i.e.
> we mostly weren't catching NotFound exceptions). I expect this habit dates
> from that era.
>
> I can't think of any reason we still need this, and I agree that it seems
> unhelpful for debugging.
>
> cheers,
> Zane.
>
>
Thanks Zane and others who have responded. My recent patch (now already
merged) won't delete the resource_id.


-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [qa] Policy on spelling and grammar

2013-11-11 Thread Christopher Armstrong
On Mon, Nov 11, 2013 at 12:19 PM, Clint Byrum  wrote:

> Excerpts from David Kranz's message of 2013-11-11 09:58:59 -0800:
> > I have seen a wide variety of -1'ing (and in many cases approving)
> > patches for minor spelling or grammatical errors and think we need a
> > policy about this. Given the large number of contributors for whom
> > English is not their native language, I would be in favor of rejecting
> > spelling errors in variable or method names but being more lenient in
> > comments, commit messages, READMEs, etc. What do you all think?
> >
>
> The point of code review is to find defects. Misspelled words are defects
> in the English language encoded in the change. In fact, commit messages
> in particular are critical to get right as they cannot ever be fixed,
> and they are generally the most useful when under a stressful situation
> trying to determine the nature of a regression.
>
> Many of our contributors are also newbies to python, and we do not let
> them get away with obvious mistakes in python code. English is just a
> language with a different interpreter (a more forgiving one, for sure,
> but also one with many versions at various stages of implementation).
>
> In fact, our large percentage of non-native english speakers is a reason
> to be extremely careful about grammar and spelling so as not to confuse
> non-native speakers with incorrect grammar and spelling.
>
> I believe that if a -1 for a spelling mistake is causing more than an
> extremely short turn around time then either the submitter is not engaged
> with the project and thus not responsive to the -1, or the reviewers
> are over-taxed and the given project needs more reviewers.
>
>

It would be so much nicer if there were some easy way for the reviewer
himself to fix the typos directly (in a way that can trivially be accepted
by the submitter of the patch into his own patch -- with a click of a
button).


-- 
Christopher Armstrong
http://radix.twistedmatrix.com/
http://planet-if.com/


Re: [openstack-dev] Bad review patterns

2013-11-06 Thread Christopher Armstrong
On Wed, Nov 6, 2013 at 2:34 AM, Radomir Dopieralski
wrote:

> Hello,
>
> I'm quite new in the OpenStack project, but I love it already. What is
> especially nifty is the automated review system -- I'm really impressed.
> I'm coming from a project in which we also did reviews of every change
> -- although it was mostly manual, and just one review was enough to
> merge -- and at some point in that project I noticed that it is very
> easy to give reviews that are unhelpful, frustrating and just get in the
> way of the actual work. I started paying attention to how I am reviewing
> code, and I managed to come up with several patterns that are bad. Once
> I know the patterns, it's easier to recognize when I'm doing something
> wrong and rethink the review. I would like to share the patterns that I
> noticed.
>
>

Agreed on all points. I think Gerrit is nice in that it automates a lot of
stuff, but unfortunately the workflow has not encouraged the best behavior
for reviewers. This is a good list to follow -- but how can we be sure
people will? This mailing list thread will only be seen by a small number
of reviewers over the life of the project, I'm sure.


-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Solum] Create a source repo for the API specification?

2013-11-02 Thread Christopher Armstrong
On Sat, Nov 2, 2013 at 4:39 PM, Jay Pipes  wrote:

> Hi all,
>
> One of the most important aspects in the early stages of Solum development
> will be the consensus building and stabilization of the Solum API
> specification. A solid API spec aid in the speeding up the pace of
> innovation in the Solum contributor community.
>
> One of the aspects of the Keystone development process that I think is a
> big benefit is the separate source repository that stores the OpenStack
> Identity API specifications:
>
> https://github.com/openstack/identity-api
>
> When new versions of the API specification are debated or new extensions
> are proposed, patches are made to the specification markdown documents and
> reviewed in the exact same manner that regular code is on the
> https://review.openstack.org Gerrit site. Contributors are free to
> annotate the proposed changes to the API specification in the same way that
> they would make inline code comments on a regular code review. Here's an
> example for a proposed change that I recently made:
>
> https://review.openstack.org/#/c/54215/10
>
> I'd like to propose that Solum do the same: have a separate source
> repository for the API specification.
>
> Thoughts?
> -jay
>


I like this idea. I'd also propose that the format of the specification be
something machine-readable, such as API-Blueprint (a simple subset of
markdown, apiblueprint.org, also what apiary uses, if you've ever seen
that) or RAML (a more structured YAML-based syntax, raml.org).
API-Blueprint is closer to what the keystone document uses.

Making the documentation machine-readable means that it's much easier to
verify that, in practice, the implementation of an API matches its
specification and documentation, which is a problem that plagues many
OpenStack projects right now.
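
As a small sketch of the payoff (the endpoint and schema here are made up):
if the spec ships a JSON-Schema for each response, checking a live
deployment against its own documentation is only a few lines.

    import requests
    from jsonschema import validate  # pip install jsonschema

    assembly_schema = {
        "type": "object",
        "required": ["id", "links"],
        "properties": {
            "id": {"type": "string"},
            "links": {"type": "array"},
        },
    }

    resp = requests.get("https://solum.example.com/v1/assemblies/abc123")
    # Raises ValidationError if the implementation has drifted from the spec.
    validate(resp.json(), assembly_schema)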

--
IRC: radix
Christopher Armstrong
Rackspace


[openstack-dev] [Heat] Do we need to clean up resource_id after deletion?

2013-11-01 Thread Christopher Armstrong
Vijendar and I are trying to figure out if we need to set the resource_id
of a resource to None when it's being deleted.

This is done in a few resources, but not everywhere. To me it seems either

a) redundant, since the resource is going to be deleted anyway (thus
deleting the row in the DB that has the resource_id column)
b) actively harmful to useful debuggability, since if the resource is
soft-deleted, you'll not be able to find out what physical resource it
represented before it's cleaned up.

Is there some specific reason we should be calling resource_id_set(None) in
a check_delete_complete method?
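
For concreteness, here is a stripped-down sketch of the two variants (this
is not the real plugin base class, just an illustration of the pattern in
question):

    class ExampleResource(object):
        """Toy stand-in for a Heat resource plugin."""

        def __init__(self):
            self.resource_id = "phys-1234"

        def resource_id_set(self, value):
            self.resource_id = value

        def check_delete_complete(self, deleted):
            if not deleted:
                return False
            # Variant (a), seen in some plugins today:
            # self.resource_id_set(None)
            # Variant (b): leave resource_id alone, so a soft-deleted row
            # still shows which physical resource it represented.
            return True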

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] Curvature and Donabe repos are now public!

2013-10-03 Thread Christopher Armstrong
On Thu, Oct 3, 2013 at 1:43 PM, Debojyoti Dutta  wrote:

> Hi!
>
> We @Cisco just made the following repos public
> https://github.com/CiscoSystems/donabe
> https://github.com/CiscoSystems/curvature
>
> Donabe was pitched as a recursive container before Heat days.
> Curvature is an alternative interactive GUI front end to openstack
> that can handle virtual resources, templates and can instantiate
> Donabe workloads. The D3 + JS stuff was incorporated into Horizon. A
> short demo was shown last summit and can be found at
>
> http://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe
>
> Congrats to the primary developers: @CaffeinatedBrad @John_R_Davidge
> @Tehsmash_ @JackPeterFletch ... Special thanks to @lewtucker for
> supporting this.
>
> Hope this leads to more cool stuff for the Openstack community!
>
>

Congrats! I'm glad you guys finally released the code :)

Does Cisco (or anyone else) plan to continue to put development resources
into these projects, or should we basically view them as reference code for
solving particular problems?

Thanks,


-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-20 Thread Christopher Armstrong
Hi Mike,

I have a *slightly* better idea of the kind of stuff you're talking about,
but I think it would really help if you could include some concrete
real-world use cases and describe why a holistic scheduler inside of Heat
is necessary for solving them.


On Fri, Sep 20, 2013 at 2:13 AM, Mike Spreitzer  wrote:

> I have written a new outline of my thoughts, you can find it at
> https://docs.google.com/document/d/1RV_kN2Io4dotxZREGEks9DM0Ih_trFZ-PipVDdzxq_E
>
> It is intended to stand up better to independent study.  However, it is
> still just an outline.  I am still learning about stuff going on in
> OpenStack, and am learning and thinking faster than I can write.  Trying to
> figure out how to cope.
>
> Regards,
> Mike
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions

2013-09-20 Thread Christopher Armstrong
Hello Simon! I've put responses below.

On Tue, Sep 17, 2013 at 7:57 AM, Simon Pasquier 
wrote:
> Hello,
>
> I'm testing stack updates with instance group and wait conditions and I'd
> like to get feedback from the Heat community.
>
> My template declares an instance group resource with size = N and a wait
> condition resource with count = N (N being passed as a parameter of the
> template). Each group's instance is calling cfn-signal (with a different
> id!) at the end of the user data script and my stack creates with no
> error.
>
> Now when I update my stack to run N+X instances, the instance group gets
> updated with size=N+X but since the wait condition is deleted and
> recreated, the count value should either be updated to X or my existing
> instances should re-execute cfn-signal.

This is a pretty interesting scenario; I don't think we have a very good
solution for it yet.

> To cope with this situation, I've found 2 options:
> 1/ declare 2 parameters in my template: nb of instances (N for creation,
> N+X for update) and count of wait conditions (N for creation, X for
> update). See [1] for the details.
> 2/ declare only one parameter in my template (the size of the group) and
> leverage cfn-hup on the existing instances to re-execute cfn-signal. See
> [2] for the details.
>
> The solution 1 is not really user-friendly and I found that solution 2 is
> a bit complicated. Does anybody know a simpler way to achieve the same
> result?


I definitely think #1 is better than #2, but you're right, it's also not
very nice.

I'm kind of confused about your examples though, because you don't show
anything that depends on ComputeReady in your template. I guess I can
imagine some scenarios, but it's not very clear to me how this works. It'd
be nice to make sure the new autoscaling solution that we're working on
will support your case in a nice way, but I think we need some more
information about what you're doing. The only time this would have an
effect is if there's another resource depending on the ComputeReady *that's
also being updated at the same time*, because the only effect that a
dependency has is to wait until it is met before performing create, update,
or delete operations on other resources. So I think it would be nice to
understand your use case a little bit more before continuing discussion.

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-19 Thread Christopher Armstrong
Hi Michael! Thanks for this summary. There were some minor
inaccuracies, but I appreciate you at least trying when I should have
summarized it earlier. I'll give some feedback inline.

First, though, I have recently worked a lot on the wiki page for the
blueprint. It's available here:

https://wiki.openstack.org/wiki/Heat/AutoScaling

It still might need a little bit more cleaning up and probably a more
holistic example, but it should be pretty close now. I will say that I
changed it to specify the Heat resources for using autoscale instead
of the APIs of the AS API mostly for convenience because they're
easily specifiable. The AS API should be derived pretty obviously from
the resources.

On Thu, Sep 19, 2013 at 6:35 AM, Mike Spreitzer  wrote:
> I'd like to try to summarize this discussion, if nothing else than to see
> whether I have correctly understood it.  There is a lot of consensus, but I
> haven't heard from Adrian Otto since he wrote some objections.  I'll focus
> on trying to describe the consensus; Adrian's concerns are already collected
> in a single message.  Or maybe this is already written in some one place?

Yeah. Sorry I didn't link that wiki page earlier; it was in a pretty
raw and chaotic form.

> The consensus is that there should be an autoscaling (AS) service that is
> accessible via its own API.  This autoscaling service can scale anything
> describable by a snippet of Heat template (it's not clear to me exactly what
> sort of syntax this is; is it written up anywhere?).

Yes. See the wiki page above; it's basically just a mapping exactly
like the "Resources" section in a typical Heat template. e.g.

{..., "Resources": {"mywebserver": {"Type": "OS::Nova::Server"}, ...}}

> The autoscaling
> service is stimulated into action by a webhook call.  The user has the
> freedom to arrange calls on that webhook in any way she wants.  It is
> anticipated that a common case will be alarms raised by Ceilometer.  For
> more specialized or complicated logic, the user is free to wire up anything
> she wants to call the webhook.

This is accurate.

> An instance of the autoscaling service maintains an integer variable, which
> is the current number of copies of the thing being autoscaled.  Does the
> webhook call provide a new number, or +1/-1 signal, or ...?

The webhook provides no parameters. The amount of change is encoded
into the policy that the webhook is associated with. Policies can
change it the same way they can in current AWS-based autoscaling: +/-
fixed number, or +/- percent, or setting it to a specific number
directly.
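
Roughly (the field names here are only illustrative, not a final format),
the three kinds of policy look like this:

    # Illustrative policy bodies -- the webhook itself carries no parameters.
    policies = {
        "scale-up-one": {"change": 1},                  # +/- fixed number
        "scale-up-10-percent": {"change_percent": 10},  # +/- percent
        "scale-to-five": {"desired_capacity": 5},       # set size directly
    }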


> There was some discussion of a way to indicate which individuals to remove,
> in the case of decreasing the multiplier.  I suppose that would be an option
> in the webhook, and one that will not be exercised by Ceilometer alarms.

I don't think the webhook is the right place to do that. That should
probably be a specific thing in the AS API.

> (It seems to me that there is not much "auto" in this autoscaling service
> --- it is really a scaling service driven by an external controller.  This
> is not a criticism, I think this is a good factoring --- but maybe not the
> best naming.)

I think the policies are what qualify it for the "auto" term. You can
have webhook policies or schedule-based policies (and maybe more
policies in the future). The policies determine how to change the
group.

> The autoscaling service does its job by multiplying the heat template
> snippet (the thing to be autoscaled) by the current number of copies and
> passing this derived template to Heat to "make it so".  As the desired
> number of copies changes, the AS service changes the derived template that
> it hands to Heat.  Most commentators argue that the consistency and
> non-redundancy of making the AS service use Heat outweigh the extra
> path-length compared to a more direct solution.

Agreed.

> Heat will have a resource type, analogous to
> AWS::AutoScaling::AutoScalingGroup, through which the template author can
> request usage of the AS service.

Yes.

> OpenStack in general, and Heat in particular, need to be much better at
> traceability and debuggability; the AS service should be good at these too.

Agreed.

> Have I got this right?

Pretty much! Thanks for the summary :-)

-- 
IRC: radix
Christopher Armstrong
Rackspace



Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-12 Thread Christopher Armstrong
I apologize that this mail will appear at the incorrect position in
the thread, but I somehow got unsubscribed from openstack-dev due to
bounces and didn't receive the original email.

On 9/11/13 03:15 UTC, Adrian Otto  wrote:
> So, I don't intend to argue the technical minutia of each design point, but I 
> challenge you to make sure that we
> (1) arrive at a simple system that any OpenStack user can comprehend,

I think there is tension between simplicity of the stack and
simplicity of the components in that stack. We're making sure that the
components will be simple, self-contained, and easy to understand, and
the stack will need to plug them together in an interesting way.

> (2) responds quickly to alarm stimulus,

Like Zane, I don't really buy the argument that the API calls to Heat
will make any significant impact on the speed of autoscaling. There
are MUCH bigger wins in e.g. improving the ability for people to use
cached, pre-configured images vs a couple of API calls. Even once
you've optimized all of that, booting an instance still takes much,
much longer than running the control code.

> (3) is unlikely to fail,

I know this isn't exactly what you mentioned, but I have some things
to say not about resilience but instead about reaction to failures.

The traceability and debuggability of errors is something that
unfortunately plagues all of OpenStack, both for developers and
end-users. It is fortunate that OpenStack compononts make good use of
each other, but unfortunate that

1. there's no central, filterable logging facility (without investing
significant ops effort to deploy a third-party one yourself);
2. not enough consistent "tagging" of requests throughout the system
that allows operators looking at logs to understand how a user's
original request led to some ultimate error;
3. no ubiquitous mechanisms for propagating errors between service
APIs in a way that ultimately lead back to the consumer of the
service;
4. many services don't even report detailed information with errors
that happen internally.

I believe we'll have to do what we can, especially in #3 and #4, to
make sure that the users of autoscaling and Heat have good visibility
into the system when errors occur.

> (4) can be easily customized with user-supplied logic that controls how the 
> scaling happens, and under what conditions.

I think this is a good argument for using Heat for the scaling
resources instead of doing it separately. One of the biggest new
features that the new AS design provides is the ability to scale *any*
resource, not just AWS::EC2::Instance. This means you can write your
own custom resource with custom logic and scale it trivially. Doing it
in terms of resources instead of launch configurations provides a lot
of flexibility, and a Resource implementation is a nice way to wrap up
that custom logic. If we implemented this in the AS service without
using Heat, we'd either be constrained to nova instances again, or
have to come up with our own API for customization.

As far as customizing the conditions under which scaling happens,
that's provided at the lowest common denominator by providing a
webhook trigger for scaling policies (on top of which will be
implemented convenient Ceilometer integration support).  Users will be
able to provide their own logic and hit the webhook whenever they want
to execute the policy.
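
A minimal sketch of "bring your own logic" (the URL is made up; knowing it
is the only credential needed):

    import requests

    WEBHOOK_URL = ("https://as.example.com/v1.0/execute/"
                   "1b2e78a0c9f34e6d8a51d7e2f0c4b9aa")

    def maybe_scale_up(current_load, threshold):
        if current_load > threshold:
            # A 202 just means "received"; it deliberately does not say
            # whether the policy actually fired (e.g. it may be on cooldown).
            resp = requests.post(WEBHOOK_URL)
            resp.raise_for_status()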


> It would be better if we could explain Autoscale like this:

> Heat -> Autoscale -> Nova, etc.
> -or-
> User -> Autoscale -> Nova, etc.

> This approach allows use cases where (for whatever reason) the end user does 
> not want to use Heat at all, but still wants something simple to be 
> auto-scaled for them. Nobody would be scratching their heads wondering why 
> things are going in circles.

The "Heat" behind "Autoscale" isn't something that the *consumer* of
the service knows about, only the administrator. Granted, the API
design that we're running with *does* currently require the user to
provide "snippets" of heat resource templates -- just specifying the
individual resources that should be scaled -- but I think it would be
trivial to support an alternative type of "launch configuration" that
does the translation to heat templates in the background, if we really
want to hide all the Heatiness from a user who just wants the
simplicity of knowing only about Nova and autoscaling.
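
Something like this (a hypothetical helper, not anything that exists
today) is all the translation would take:

    def launch_config_to_resource(launch_config):
        """Turn a Nova-only launch config into a one-resource Heat snippet.

        The field names on both sides are illustrative.
        """
        return {
            "scaled_server": {
                "Type": "OS::Nova::Server",
                "Properties": {
                    "image": launch_config["image"],
                    "flavor": launch_config["flavor"],
                    "key_name": launch_config.get("key_name"),
                },
            }
        }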

To conclude, I'd like to just say I basically agree with what Clint,
Keith, and Steven have said in other messages in this thread. It
doesn't appear that the design of Heat autoscaling (informed by Zane,
Clint, Angus and others) fails to meet the criteria you've brought up.

-- 
IRC: radix
Christopher Armstrong
Rackspace



Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-08-19 Thread Christopher Armstrong
On Fri, Aug 16, 2013 at 1:35 PM, Clint Byrum  wrote:

> Excerpts from Zane Bitter's message of 2013-08-16 09:36:23 -0700:
> > On 16/08/13 00:50, Christopher Armstrong wrote:
> > > *Introduction and Requirements*
> > >
> > > So there's kind of a perfect storm happening around autoscaling in Heat
> > > right now. It's making it really hard to figure out how I should
> compose
> > > this email. There are a lot of different requirements, a lot of
> > > different cool ideas, and a lot of projects that want to take advantage
> > > of autoscaling in one way or another: Trove, OpenShift, TripleO, just
> to
> > > name a few...
> > >
> > > I'll try to list the requirements from various people/projects that may
> > > be relevant to autoscaling or scaling in general.
> > >
> > > 1. Some users want a service like Amazon's Auto Scaling or Rackspace's
> > > Otter -- a simple API that doesn't really involve orchestration.
> > > 2. If such a API exists, it makes sense for Heat to take advantage of
> > > its functionality instead of reimplementing it.
> >
> > +1, obviously. But the other half of the story is that the API is likely
> > be implemented using Heat on the back end, amongst other reasons because
> > that implementation already exists. (As you know, since you wrote it ;)
> >
> > So, just as we will have an RDS resource in Heat that calls Trove, and
> > Trove will use Heat for orchestration:
> >
> >user => [Heat =>] Trove => Heat => Nova
> >
> > there will be a similar workflow for Autoscaling:
> >
> >user => [Heat =>] Autoscaling -> Heat => Nova
> >
>
> After a lot of consideration and an interesting IRC discussion, I think
> the point above makes it clear for me. Autoscaling will have a simpler
> implementation by making use of Heat's orchestration capabilities,
> but the fact that Heat will also use autoscaling is orthogonal to that.
>
> That does beg the question of why this belongs in Heat. Originally
> we had taken the stance that there must be only one control system,
> lest they have a policy-based battle royale. If we only ever let
> autoscaled resources be controlled via Heat (via nested stack produced
> by autoscaling), then there can be only one.. control service (Heat).
>
> By enforcing that autoscaling always talks to "the world" via Heat though,
> I think that reaffirms for me that autoscaling, while not really the same
> project (seems like it could happily live in its own code tree), will
> be best served by staying inside the "OpenStack Orchestration" program.
>
> The question of private RPC or driving it via the API is not all that
> interesting to me. I do prefer the SOA method and having things talk via
> their respective public APIs as it keeps things loosely coupled and thus
> easier to fit into one's brain and debug/change.
>
>
I agree with using only public APIs. I have managed to fit this model of
autoscaling managing a completely independent Heat stack into my brain, and
I am willing to take it and run with it.

Thanks to Zane and Clint for hashing this out with me in a 2-hour IRC
design discussion, it was incredibly helpful :-)

-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-08-16 Thread Christopher Armstrong
On Thu, Aug 15, 2013 at 6:39 PM, Randall Burt wrote:

>
> On Aug 15, 2013, at 6:20 PM, Angus Salkeld  wrote:
>
> > On 15/08/13 17:50 -0500, Christopher Armstrong wrote:
>
> >> 2. There should be a new custom-built API for doing exactly what the
> >> autoscaling service needs on an InstanceGroup, named something
> unashamedly
> >> specific -- like "instance-group-adjust".
> >>
> >> Pros: It'll do exactly what it needs to do for this use case; very
> little
> >> state management in autoscale API; it lets Heat do all the orchestration
> >> and only give very specific delegation to the external autoscale API.
> >>
> >> Cons: The API grows an additional method for a specific use case.
> >
> > I like this one above:
> > adjust(new_size, victim_list=['i1','i7'])
> >
> > So if you are reducing the new_size we look in the victim_list to
> > choose those first. This should cover Clint's use case as well.
> >
> > -Angus
>
> We could just support victim_list=[1, 7], since these groups are
> collections of identical
> resources. Simple indexing should be sufficient, I would think.
>
> Perhaps separating the stimulus from the actions to take would let us
> design/build toward different policy implementations. Initially, we could
> have a HeatScalingPolicy that works with the signals that a scaling group
> can handle. When/if AS becomes an API outside of Heat, we can implement a
> fairly simple NovaScalingPolicy that includes the args to pass to nova boot.
>
>

I don't agree with using indices. I'd rather use the actual resource IDs.
For one, indices can change out from under you. Also, figuring out the
index of the instance you want to kill is probably an additional step most
of the time you actually care about destroying specific instances.
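
Sketch of the shape I would prefer (the signature is illustrative, not an
agreed API), with victims named by stable resource IDs:

    def adjust(group_id, new_size, victims=()):
        """Resize the group, removing the named physical resources first."""
        for resource_id in victims:
            print("would remove %s from %s first" % (resource_id, group_id))
        print("then converge %s to size %d" % (group_id, new_size))

    adjust("group-A", 3,
           victims=["a9f1c2d4-0b52-4c1e-9a77-2f64c3b0e111"])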



> >> 3. the autoscaling API should update the "Size" Property of the
> >> InstanceGroup resource in the stack that it is placed in. This would
> >> require the ability to PATCH a specific piece of a template (an
> operation
> >> isomorphic to update-stack).
>
> I think a PATCH semantic for updates would be generally useful in terms of
> "quality of life" for API users. Not having to pass the complete state and
> param values for trivial updates would be quite nice regardless of its
> implications to AS.
>

Agreed.



-- 
IRC: radix
Christopher Armstrong
Rackspace


[openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-08-15 Thread Christopher Armstrong
*Introduction and Requirements*

So there's kind of a perfect storm happening around autoscaling in Heat
right now. It's making it really hard to figure out how I should compose
this email. There are a lot of different requirements, a lot of different
cool ideas, and a lot of projects that want to take advantage of
autoscaling in one way or another: Trove, OpenShift, TripleO, just to name
a few...

I'll try to list the requirements from various people/projects that may be
relevant to autoscaling or scaling in general.

1. Some users want a service like Amazon's Auto Scaling or Rackspace's
Otter -- a simple API that doesn't really involve orchestration.
2. If such an API exists, it makes sense for Heat to take advantage of its
functionality instead of reimplementing it.
3. If Heat integrates with that separate API, however, that API will need
two ways to do its work:
   1. native instance-launching functionality, for the "simple" use
   2. a way to talk back to Heat to perform orchestration-aware scaling
      operations.
4. There may be things different from AWS::EC2::Instance that we would
want to scale (I have personally been playing around with the concept of a
ResourceGroup, which would maintain a nested stack of resources based on
an arbitrary template snippet -- there's a rough sketch just after this
list).
5. Some people would like to be able to perform manual operations on an
instance group -- such as Clint Byrum's recent example of "remove instance
4 from resource group A".
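
Here's the kind of thing I mean for point 4 -- a rough sketch only, with an
invented resource type name and invented properties, written as the Python
dict equivalent of the template JSON:

# Hypothetical: neither this type name nor these property names exist today.
resource_group = {
    "Type": "ResourceGroup",
    "Properties": {
        "Size": 3,
        # an arbitrary template snippet stamped out once per group member:
        "MemberDefinition": {
            "server": {"Type": "AWS::EC2::Instance",
                       "Properties": {"ImageId": "some-image"}},
            "volume": {"Type": "AWS::EC2::Volume",
                       "Properties": {"Size": "10"}},
        },
    },
}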

Please chime in with your additional requirements if you have any! Trove
and TripleO people, I'm looking at you :-)


*TL;DR*

Point 3.2. above is the main point of this email: exactly how should the
autoscaling API talk back to Heat to tell it to add more instances? I
included the other points so that we keep them in mind while considering a
solution.

*Possible Solutions*

I have heard at least three possibilities so far:

1. the autoscaling API should maintain a full template of all the nodes in
the autoscaled nested stack, manipulate it locally when it wants to add or
remove instances, and post an update-stack to the nested-stack associated
with the InstanceGroup.

Pros: It doesn't require any changes to Heat.

Cons: It puts a lot of the state-management burden on the autoscale API, and
it arguably spreads the responsibility of "orchestration" out to the
autoscale API. It's also arguable that automated agents outside of Heat
shouldn't be managing an "internal" template, since templates are typically
developed by devops people and kept in version control.
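
To illustrate the state management this implies, here is a very rough sketch
of what the autoscale service would have to do on every adjustment. This is
illustrative pseudo-client code only -- the endpoint shape is approximate and
the helper is invented:

# Option 1 sketch: rebuild the whole nested template, then update-stack.
import copy
import json
import requests

def adjust_group(heat_url, token, tenant, stack, template, new_size, member_def):
    new_template = copy.deepcopy(template)
    resources = new_template['Resources']
    # The autoscale API owns the naming and bookkeeping of every member
    # resource -- that's the state-management burden mentioned above.
    members = sorted(name for name in resources if name.startswith('member-'))
    for name in members[new_size:]:                    # shrinking
        del resources[name]
    for i in range(len(members), new_size):            # growing
        resources['member-%d' % i] = copy.deepcopy(member_def)
    requests.put('%s/%s/stacks/%s' % (heat_url, tenant, stack),
                 headers={'X-Auth-Token': token,
                          'Content-Type': 'application/json'},
                 data=json.dumps({'template': new_template}))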

2. There should be a new custom-built API for doing exactly what the
autoscaling service needs on an InstanceGroup, named something unashamedly
specific -- like "instance-group-adjust".

Pros: It'll do exactly what it needs to do for this use case; very little
state management in autoscale API; it lets Heat do all the orchestration
and only give very specific delegation to the external autoscale API.

Cons: The API grows an additional method for a specific use case.
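
By contrast, option 2 reduces the autoscale service's side of the interaction
to something like the following -- path, verb, and body are all invented here,
since "instance-group-adjust" doesn't exist anywhere yet:

# Option 2 sketch: a single purpose-built call into Heat.
import json
import requests

def instance_group_adjust(heat_url, token, tenant, stack, group, delta):
    requests.post(
        '%s/%s/stacks/%s/resources/%s/adjust' % (heat_url, tenant, stack, group),
        headers={'X-Auth-Token': token, 'Content-Type': 'application/json'},
        data=json.dumps({'adjustment': delta}))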

3. the autoscaling API should update the "Size" Property of the
InstanceGroup resource in the stack that it is placed in. This would
require the ability to PATCH a specific piece of a template (an operation
isomorphic to update-stack).

Pros: The API modification is generic, simply a more optimized version of
update-stack; very little state management required in autoscale API.

Cons: This would essentially require manipulating the user-provided
template (unless we have a concept of "private properties", which perhaps
wouldn't appear in the template as provided by the user but could be
manipulated with such an update-stack operation?).
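
From the caller's side, option 3 might look like this -- again a purely
hypothetical PATCH variant of update-stack, nothing here exists today:

# Option 3 sketch: patch just one property of one resource in the template.
import json
import requests

def set_group_size(heat_url, token, tenant, stack, group, size):
    requests.patch(
        '%s/%s/stacks/%s' % (heat_url, tenant, stack),
        headers={'X-Auth-Token': token, 'Content-Type': 'application/json'},
        data=json.dumps({'resources': {group: {'Properties': {'Size': size}}}}))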


*Addenda*

Keep in mind that there are use cases which require other types of
manipulation of the InstanceGroup -- not just the autoscaling API. For
example, see Clint's #5 above.


Also, about implementation: Andrew Plunk and I have begun work on Heat
resources for Rackspace's Otter, which I think will be a really good proof
of concept for how this stuff should work in the Heat-native autoscale API.
I am trying to gradually work the design into the native Heat autoscaling
design, and we will need to solve the autoscale-controlling-InstanceGroup
issue soon.

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dropping or weakening the 'only import modules' style guideline - H302

2013-08-06 Thread Christopher Armstrong
On Tue, Aug 6, 2013 at 6:32 AM, Sean Dague  wrote:

>
> The reason we go hard and fast on certain rules is to reduce review time
> by people. If something is up for debate we get bikeshedding in reviews
> where one reviewer tells someone to do it one way, 2 days later they update
> their review, another reviewer comes in and tells them to do it the
> otherway. (This is not theoretical, it happens quite often, if you do a lot
> of reviews you see it all the time.) It also ends up being something
> reviewers can stop caring about, because the machine will pick it up.
> Giving them the ability to focus on higher order issues, and still keeping
> the code from natural entropy.
>
> MUST == computer can do it, less work for core review time (which is
> realistically one of our most constrained resources in OpenStack)
> MAY == humans have to make a judgement call, which means more work for our
> already constrained review teams
>
> I've found H302 to really be useful on reviewing large chunks of code I've
> not been in much before. And get seriously annoyed being in projects that
> don't have it enforced yet (tempest is guilty of that). Being able to
> quickly know what namespace things are out of saves time.
>


I think it's really unfortunate that people will block patches based on
stylistic concerns. The answer, IMO, is to codify in policy that stylistic
issues *cannot* block a patch from landing.

I recommend having humility in our reviews. Instead of

"This bike shed needs to be painted red. -1"

One should say

"I prefer red for the color of bike sheds. You can do that if you want, but
go ahead and merge anyway if you don't want to. +0"

and don't mark a review as -1 if it *only* has bikeshedding in it. I would
love to see a culture of reviewing that emphasizes functional correctness,
politeness, and mutual education.

And given the rationale from Robert Collins, I agree that the module-import
check should be one of the flakes that allow exceptions.

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for including fake implementations in python-client packages

2013-07-03 Thread Christopher Armstrong
On Tue, Jul 2, 2013 at 11:38 PM, Robert Collins wrote:
> Radix points out I missed the nuance that you're targeting the users
> of python-novaclient, for instance, rather than python-novaclient's
> own tests.
>
>
> On 3 July 2013 16:29, Robert Collins  wrote:
>
>>> What I'd like is for each client library, in addition to the actual
>>> implementation, is that they ship a fake, in-memory, version of the API. The
>>> fake implementations should take the same arguments, have the same return
>>> values, raise the same exceptions, and otherwise be identical, besides the
>>> fact
>>> that they are entirely in memory and never make network requests.
>>
>> So, +1 on shipping a fake reference copy of the API.
>>
>> -1 on shipping it in the client.
>>
>> The server that defines the API should have two implementations - the
>> production one, and a testing fake. The server tests should exercise
>> *both* code paths [e.g. using testscenarios] to ensure there is no
>> skew between them.
>>
>> Then the client tests can be fast and efficient but not subject to
>> implementation skew between fake and prod implementations.
>>
>> Back on Launchpad I designed a similar thing, but with language
>> neutrality as a goal :
>> https://dev.launchpad.net/ArchitectureGuide/ServicesRequirements#Test_fake
>>
>> And in fact, I think that that design would work well here, because we
>> have multiple language bindings - Python, Ruby, PHP, Java, Go etc, and
>> all of them will benefit from a low(ms or less)-latency test fake.
>
> So taking the aspect I missed into account I'm much happier with the
> idea of shipping a fake in the client, but... AFAICT many of our
> client behaviours are only well defined in the presence of a server
> anyhow.
>
> So it seems to me that a fast server fake can be used in tests of
> python-novaclient, *and* in tests of code using python-novaclient
> (including for instance, heat itself), and we get to write it just
> once per server, rather than once per server per language binding.
>
> -Rob


I want to make sure I understand you. Let's say I have a program named
cool-cloud-tool, and it uses python-novaclient, python-keystoneclient,
and three other clients for OpenStack services. You're suggesting that
its test suite should start up instances of all those OpenStack
services with in-memory or otherwise localized backends, and
communicate with them using standard python-*client functionality?

I can imagine that being a useful thing, if it's very easy to do, and
won't increase my test execution time too much.

-- 
IRC: radix
Christopher Armstrong
Rackspace

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for including fake implementations in python-client packages

2013-07-02 Thread Christopher Armstrong
On Tue, Jul 2, 2013 at 11:10 AM, Kurt Griffiths
 wrote:
> The idea has merit; my main concern is that we would be duplicating
> significant chunks of code/logic between the fakes and the real services.
>
> How can we do this in a DRY way?


I've done it a few different ways for libraries I've worked on.

Usually, the fakes don't actually duplicate much code from the real
implementation. But in the cases they do, I've had situations like
this:


class RealImplementation(object):

    def do_network_stuff(self, stuff):
        ...

    def low_level_operation(self):
        return self.do_network_stuff("GET /integer")

    def high_level_operation(self):
        return self.low_level_operation() + 5


I'd just create a subclass like this:

class FakeImplementation(RealImplementation):

    def do_network_stuff(self, stuff):
        raise NotImplementedError("This should never be called!")

    def low_level_operation(self):
        return self.integer  # or however you implement your fake


This has two interesting properties:

1. I don't have to reimplement the high_level_operation
2. If I forget to implement a fake version of some method that invokes
do_network_stuff, then it will blow up with a NotImplementedError so
my test doesn't accidentally do real network stuff.
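
As a quick illustration of how a consumer's test might use the fake, given
the two classes above (the test itself is invented, just to show the shape):

import unittest

class HighLevelOperationTests(unittest.TestCase):
    def test_high_level_operation(self):
        fake = FakeImplementation()
        fake.integer = 10   # seed the fake's in-memory state
        self.assertEqual(15, fake.high_level_operation())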


This is just an example from some recent work I did on a simple RPC
client with an HTTP API (unrelated to OpenStack client libraries), but
that just so happens to be the case that Alex is discussing, so I
think it can work well.

--
IRC: radix
Christopher Armstrong
Rackspace

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Does it make sense to have a resource-create API?

2013-06-19 Thread Christopher Armstrong
[lots of points about resource manipulation APIs, templates, autoscaling
design, and so forth...]

I'm glad so many people got involved in this thread :-) I really appreciate
the feedback. I think we can continue with autoscale design work without
relying on resource manipulation APIs. There are certainly many other
subjects about autoscale design that we need to thrash out, but I think we
can start separate threads for those other issues.

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Does it make sense to have a resource-create API?

2013-06-18 Thread Christopher Armstrong
On Tue, Jun 18, 2013 at 11:01 PM, Adrian Otto wrote:

> On Jun 18, 2013, at 6:09 PM, Angus Salkeld  wrote:
> > To me one of the most powerful and appealing things of Heat is the
> > ability to reproducibly re-create a stack from a template. This
> > new public API is going to make this difficult.
>
> Adding an API only makes it difficult if you decide to bypass templates
> and use the API. You can still be disciplined and keep your templates
> updated to achieve the reproducible goal. Yes, and API of this sort is a
> sharp instrument, but it can be useful if applied properly.
>

It seems we could trivialize the task of keeping your template up to date
by providing an API for fetching a template that reflects the current
stack. Does that sound sensible given the current direction of the design
of Heat?
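
Concretely, I'm imagining something as small as this from the client's point
of view (the path is illustrative; the interesting design question is whether
it returns the stored template or one synthesized from the stack's current
state):

import requests

def fetch_current_template(heat_url, token, tenant, stack):
    # Fetch a template that reflects the stack as it exists right now.
    return requests.get('%s/%s/stacks/%s/template' % (heat_url, tenant, stack),
                        headers={'X-Auth-Token': token}).json()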

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Does it make sense to have a resource-create API?

2013-06-18 Thread Christopher Armstrong
Hi Adrian, thanks for the response. I'll just respond to one thing
right now (since it's way after hours for me ;)

On Tue, Jun 18, 2013 at 6:32 PM, Adrian Otto  wrote:
> On Jun 18, 2013, at 3:44 PM, Christopher Armstrong wrote:
>
>> tl;dr POST /$tenant/stacks/$stack/resources/ ?
>
> Yes.

[snip]

>> This is basically the gist of the question. I believe the answer
>> should be the same as the answer about any other type of resource we
>> might want to manipulate through the API -- it seems best that either
>> all resource types are manipulated through a generic resource
>> manipulation API, or they should all have their own specific ReST
>> collection.
>
> Give them specific collections, so they can be easily specialized.


These two points are contradictory, aren't they? The main point of my
email was trying to decide between the two -- either create the
autoscaling resources by POSTing to a generic "resources" collection,
or by POSTing to specific URLs that represent the *type* of resource
I'm creating.

(it seems like the idea of creating these resources in the Heat stack
*at all* is under debate as well, but I just wanted to address this
one point in this email).


--
IRC: radix
Christopher Armstrong
Rackspace

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Does it make sense to have a resource-create API?

2013-06-18 Thread Christopher Armstrong
tl;dr POST /$tenant/stacks/$stack/resources/ ?


== background ==

While thinking about the Autoscaling API, Thomas Hervé and I had the
following consideration:

- autoscaling is implemented as a set of Heat Resources
- there are already general APIs for looking at resources generically:
  - resource-show (GET /$tenant/stacks/$stack/resources/$id)
  - resource-metadata (GET /$tenant/stacks/$stack/resources/$id/metadata)
  - resource-list (GET /$tenant/stacks/$stack/resources/)
- we want to be able to create and configure autoscaling resources
through the API
- maybe we should implement POST for resources?

This is basically the gist of the question. I believe the answer
should be the same as the answer about any other type of resource we
might want to manipulate through the API -- it seems best that either
all resource types are manipulated through a generic resource
manipulation API, or they should all have their own specific ReST
collection.

Actually, I could also imagine a situation where only generic
operations on common resource metadata are allowed via
/$tenant/stacks/$stack/resources/, and resource-specific manipulation
is done via resource-specific collections -- I don't know how ReSTy
that is, though.

I'll get to specifics. There are two ways I can imagine the autoscale
API looking. I'll avoid the word "resource" when referring to ReST
resources and just talk about "collections" and "paths", since
"resource" in this context also means Heat resources.

== resource-specific paths ==

One is basically just like Otter's: http://docs.autoscale.apiary.io/

This provides paths like /$tenant/groups/$id (for an autoscaling
group), /$tenant/groups/$id/policies (for a policy), etc. These
variously support GET for reading as well as POST and PUT for
manipulation.

We can use "/v1.0/{tenantId}/groups/{groupId}/policies" as an example
operation. We POST JSON describing a new scaling policy in order to
create it.
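
For instance, the posted body might look something like this -- the field
names are only illustrative, not Otter's exact schema:

policy = {
    "name": "scale up on high load",
    "change": 2,        # add two instances each time the policy fires
    "cooldown": 300,    # seconds before the policy may fire again
}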

== generic paths ==

The alternative is to say that autoscaling groups, policies, etc. are
all Just Heat Resources, and Heat resources already have a ReST
collection at /$tenant/stacks/$stack/resources/.

In this option, the alternative to POSTing to
/$tenant/groups/$id/policies would be to post directly to
/$tenant/stacks/$stack/resources/, with a body exactly like in the
previous example, but with two more JSON attributes:

- the type of the resource, in this case something like
"AWS::AutoScaling::ScalingPolicy"
- the group ID that the new policy should be associated with, since
it's not specified in the URL.
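
So the same example policy from the previous section might be posted as
(again, illustrative only):

generic_policy = {
    "resource_type": "AWS::AutoScaling::ScalingPolicy",
    "group_id": "<ID of the scaling group resource>",
    "name": "scale up on high load",
    "change": 2,
    "cooldown": 300,
}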

One concern I have is about how well we can specify a strict schema of
inputs and outputs to the resources/ collection -- I'm particularly
interested in JSON hyperschema. I'm not sure how it handles
heterogeneous collections like this.

--
IRC: radix
Christopher Armstrong
Rackspace

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev