Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-25 Thread Zane Bitter

Top-posting *and* replying to myself today :D

I realised that I could have implemented this in less time than I've 
spent arguing about it already, so I did:


https://review.openstack.org/#/c/58357/
https://review.openstack.org/#/c/58358/

cheers,
Zane.


On 19/11/13 23:27, Zane Bitter wrote:

On 19/11/13 19:14, Christopher Armstrong wrote:

On Mon, Nov 18, 2013 at 5:57 AM, Zane Bitter zbit...@redhat.com
mailto:zbit...@redhat.com wrote:

On 16/11/13 11:15, Angus Salkeld wrote:

On 15/11/13 08:46 -0600, Christopher Armstrong wrote:

On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter
zbit...@redhat.com mailto:zbit...@redhat.com wrote:

On 15/11/13 02:48, Christopher Armstrong wrote:

On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld
asalk...@redhat.com mailto:asalk...@redhat.com
mailto:asalk...@redhat.com
mailto:asalk...@redhat.com wrote:

 On 14/11/13 10:19 -0600, Christopher Armstrong
wrote:

http://docs.heatautoscale.__ap__iary.io/
http://apiary.io/

 http://docs.heatautoscale.__apiary.io/
http://docs.heatautoscale.apiary.io/

 I've thrown together a rough sketch of the
proposed API for
 autoscaling.
 It's written in API-Blueprint format (which
is a simple subset
 of Markdown)
 and provides schemas for inputs and outputs
using JSON-Schema.
 The source
 document is currently at
https://github.com/radix/heat/raw/as-api-spike/
https://github.com/radix/heat/__raw/as-api-spike/
autoscaling.__apibp



https://github.com/radix/__heat/raw/as-api-spike/__autoscaling.apibp

https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
 


 Things we still need to figure out:

 - how to scope projects/domains. put them
in the URL? get them
 from the
 token?
 - how webhooks are done (though this
shouldn't affect the API
 too much;
 they're basically just opaque)

 Please read and comment :)


 Hi Chistopher

 In the group create object you have 'resources'.
 Can you explain what you expect in there? I
thought we talked at
 summit about have a unit of scaling as a nested
stack.

 The thinking here was:
 - this makes the new config stuff easier to
scale (config get
applied
 Â  per scaling stack)

 - you can potentially place notification
resources in the scaling
 Â  stack (think marconi message resource -
on-create it sends a
 Â  message)

 - no need for a launchconfig
 - you can place a LoadbalancerMember resource
in the scaling stack
 Â  that triggers the loadbalancer to add/remove
it from the lb.


 I guess what I am saying is I'd expect an api
to a nested stack.


Well, what I'm thinking now is that instead of
resources (a
mapping of
resources), just have resource, which can be the
template definition
for a single resource. This would then allow the
user to specify a
Stack
resource if they want to provide multiple resources.
How does that
sound?


My thought was this (digging into the implementation
here a bit):

- Basically, the autoscaling code works as it does now:
creates a
template
containing OS::Nova::Server resources (changed from
AWS::EC2::Instance),
with the properties obtained from the LaunchConfig, and
creates a
stack in
Heat.
- LaunchConfig can now contain any properties you like
(I'm not 100%
sure
   

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Zane Bitter

On 20/11/13 23:49, Christopher Armstrong wrote:

On Wed, Nov 20, 2013 at 2:07 PM, Zane Bitter zbit...@redhat.com
mailto:zbit...@redhat.com wrote:

On 20/11/13 16:07, Christopher Armstrong wrote:

On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter zbit...@redhat.com
mailto:zbit...@redhat.com
mailto:zbit...@redhat.com mailto:zbit...@redhat.com wrote:

 On 19/11/13 19:14, Christopher Armstrong wrote:

thought we had a workable solution with the LoadBalancerMember
idea,
which you would use in a way somewhat similar to
CinderVolumeAttachment
in the above example, to hook servers up to load balancers.


I haven't seen this proposal at all. Do you have a link? How does it
handle the problem of wanting to notify an arbitrary service (i.e.
not necessarily a load balancer)?


It's been described in the autoscaling wiki page for a while, and I
thought the LBMember idea was discussed at the summit, but I wasn't
there to verify that :)

https://wiki.openstack.org/wiki/Heat/AutoScaling#LBMember.3F

Basically, the LoadBalancerMember resource (which is very similar to the
CinderVolumeAttachment) would be responsible for removing and adding IPs
from/to the load balancer (which is actually a direct mapping to the way
the various LB APIs work). Since this resource lives with the server
resource inside the scaling unit, we don't really need to get anything
_out_ of that stack, only pass _in_ the load balancer ID.


I see a couple of problems with this approach:

1) It makes the default case hard. There's no way to just specify a 
server and hook it up to a load balancer like you can at the moment. 
Instead, you _have_ to create a template (or template snippet - not 
really any better) to add this extra resource in, even for what should 
be the most basic, default case (scale servers behind a load balancer).


2) It relies on a plugin being present for any type of thing you might 
want to notify.


At summit and - to the best of my recollection - before, we talked about 
scaling a generic group of resources and passing notifications to a 
generic controller, with the types of both defined by the user. I was 
expecting you to propose something based on webhooks, which is why I was 
surprised not to see anything about it in the API. (I'm not prejudging 
that that is the way to go... I'm actually wondering if Marconi has a 
role to play here.)


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Fox, Kevin M
There is a high priority approved blueprint for a Neutron PoolMember:
https://blueprints.launchpad.net/heat/+spec/loadballancer-pool-members

Thanks,
Kevin

From: Christopher Armstrong [chris.armstr...@rackspace.com]
Sent: Thursday, November 21, 2013 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

On Thu, Nov 21, 2013 at 5:18 AM, Zane Bitter 
zbit...@redhat.commailto:zbit...@redhat.com wrote:
On 20/11/13 23:49, Christopher Armstrong wrote:
On Wed, Nov 20, 2013 at 2:07 PM, Zane Bitter 
zbit...@redhat.commailto:zbit...@redhat.com
mailto:zbit...@redhat.commailto:zbit...@redhat.com wrote:

On 20/11/13 16:07, Christopher Armstrong wrote:

On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter 
zbit...@redhat.commailto:zbit...@redhat.com
mailto:zbit...@redhat.commailto:zbit...@redhat.com
mailto:zbit...@redhat.commailto:zbit...@redhat.com 
mailto:zbit...@redhat.commailto:zbit...@redhat.com wrote:

 On 19/11/13 19:14, Christopher Armstrong wrote:

thought we had a workable solution with the LoadBalancerMember
idea,
which you would use in a way somewhat similar to
CinderVolumeAttachment
in the above example, to hook servers up to load balancers.


I haven't seen this proposal at all. Do you have a link? How does it
handle the problem of wanting to notify an arbitrary service (i.e.
not necessarily a load balancer)?


It's been described in the autoscaling wiki page for a while, and I
thought the LBMember idea was discussed at the summit, but I wasn't
there to verify that :)

https://wiki.openstack.org/wiki/Heat/AutoScaling#LBMember.3F

Basically, the LoadBalancerMember resource (which is very similar to the
CinderVolumeAttachment) would be responsible for removing and adding IPs
from/to the load balancer (which is actually a direct mapping to the way
the various LB APIs work). Since this resource lives with the server
resource inside the scaling unit, we don't really need to get anything
_out_ of that stack, only pass _in_ the load balancer ID.

I see a couple of problems with this approach:

1) It makes the default case hard. There's no way to just specify a server and 
hook it up to a load balancer like you can at the moment. Instead, you _have_ 
to create a template (or template snippet - not really any better) to add this 
extra resource in, even for what should be the most basic, default case (scale 
servers behind a load balancer).

We can provide a standard resource/template for this, LoadBalancedServer, to 
make the common case trivial and only require the user to pass parameters, not 
a whole template.


2) It relies on a plugin being present for any type of thing you might want to 
notify.

I don't understand this point. What do you mean by a plugin? I was assuming 
OS::Neutron::PoolMember (not LoadBalancerMember -- I went and looked up the 
actual name) would become a standard Heat resource, not a third-party thing 
(though third parties could provide their own through the usual heat extension 
mechanisms).

(fwiw the rackspace load balancer API works identically, so it seems a pretty 
standard design).


At summit and - to the best of my recollection - before, we talked about 
scaling a generic group of resources and passing notifications to a generic 
controller, with the types of both defined by the user. I was expecting you to 
propose something based on webhooks, which is why I was surprised not to see 
anything about it in the API. (I'm not prejudging that that is the way to go... 
I'm actually wondering if Marconi has a role to play here.)


I think the main benefit of PoolMember is:

1) it matches with the Neutron LBaaS API perfectly, just like all the rest of 
our resources, which represent individual REST objects.

2) it's already understandable. I don't understand the idea behind 
notifications or how they would work to solve our problems. You can keep saying 
that the notifications idea will solve our problems, but I can't figure out how 
it would solve our problem unless someone actually explains it :)


--
IRC: radix
Christopher Armstrong
Rackspace

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Christopher Armstrong
On Thu, Nov 21, 2013 at 5:18 AM, Zane Bitter zbit...@redhat.com wrote:

 On 20/11/13 23:49, Christopher Armstrong wrote:

 On Wed, Nov 20, 2013 at 2:07 PM, Zane Bitter zbit...@redhat.com
 mailto:zbit...@redhat.com wrote:

 On 20/11/13 16:07, Christopher Armstrong wrote:

 On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter zbit...@redhat.com
 mailto:zbit...@redhat.com
 mailto:zbit...@redhat.com mailto:zbit...@redhat.com wrote:

  On 19/11/13 19:14, Christopher Armstrong wrote:

 thought we had a workable solution with the LoadBalancerMember
 idea,
 which you would use in a way somewhat similar to
 CinderVolumeAttachment
 in the above example, to hook servers up to load balancers.


 I haven't seen this proposal at all. Do you have a link? How does it
 handle the problem of wanting to notify an arbitrary service (i.e.
 not necessarily a load balancer)?


 It's been described in the autoscaling wiki page for a while, and I
 thought the LBMember idea was discussed at the summit, but I wasn't
 there to verify that :)

 https://wiki.openstack.org/wiki/Heat/AutoScaling#LBMember.3F

 Basically, the LoadBalancerMember resource (which is very similar to the
 CinderVolumeAttachment) would be responsible for removing and adding IPs
 from/to the load balancer (which is actually a direct mapping to the way
 the various LB APIs work). Since this resource lives with the server
 resource inside the scaling unit, we don't really need to get anything
 _out_ of that stack, only pass _in_ the load balancer ID.


 I see a couple of problems with this approach:

 1) It makes the default case hard. There's no way to just specify a server
 and hook it up to a load balancer like you can at the moment. Instead, you
 _have_ to create a template (or template snippet - not really any better)
 to add this extra resource in, even for what should be the most basic,
 default case (scale servers behind a load balancer).


We can provide a standard resource/template for this, LoadBalancedServer,
to make the common case trivial and only require the user to pass
parameters, not a whole template.


 2) It relies on a plugin being present for any type of thing you might
 want to notify.


I don't understand this point. What do you mean by a plugin? I was assuming
OS::Neutron::PoolMember (not LoadBalancerMember -- I went and looked up the
actual name) would become a standard Heat resource, not a third-party thing
(though third parties could provide their own through the usual heat
extension mechanisms).

(fwiw the rackspace load balancer API works identically, so it seems a
pretty standard design).



 At summit and - to the best of my recollection - before, we talked about
 scaling a generic group of resources and passing notifications to a generic
 controller, with the types of both defined by the user. I was expecting you
 to propose something based on webhooks, which is why I was surprised not to
 see anything about it in the API. (I'm not prejudging that that is the way
 to go... I'm actually wondering if Marconi has a role to play here.)


I think the main benefit of PoolMember is:

1) it matches with the Neutron LBaaS API perfectly, just like all the rest
of our resources, which represent individual REST objects.

2) it's already understandable. I don't understand the idea behind
notifications or how they would work to solve our problems. You can keep
saying that the notifications idea will solve our problems, but I can't
figure out how it would solve our problem unless someone actually explains
it :)


-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Thomas Hervé
On Thu, Nov 21, 2013 at 12:18 PM, Zane Bitter zbit...@redhat.com wrote:
 On 20/11/13 23:49, Christopher Armstrong wrote:

 https://wiki.openstack.org/wiki/Heat/AutoScaling#LBMember.3F

 Basically, the LoadBalancerMember resource (which is very similar to the
 CinderVolumeAttachment) would be responsible for removing and adding IPs
 from/to the load balancer (which is actually a direct mapping to the way
 the various LB APIs work). Since this resource lives with the server
 resource inside the scaling unit, we don't really need to get anything
 _out_ of that stack, only pass _in_ the load balancer ID.


 I see a couple of problems with this approach:

 1) It makes the default case hard. There's no way to just specify a server
 and hook it up to a load balancer like you can at the moment. Instead, you
 _have_ to create a template (or template snippet - not really any better) to
 add this extra resource in, even for what should be the most basic, default
 case (scale servers behind a load balancer).

First, the design we had implied that we had a template all the time.
Now that changed, it does make things a bit harder than the
LoadBalancerNames list, but it's still fairly simple to me, and brings
a lot of flexibility.

Personally, my idea was to build a generic API, and then build helpers
on top of it to make common cases easier. It seems it's not a shared
view, but I don't see how we can do both at once.

 2) It relies on a plugin being present for any type of thing you might want
 to notify.

 At summit and - to the best of my recollection - before, we talked about
 scaling a generic group of resources and passing notifications to a generic
 controller, with the types of both defined by the user. I was expecting you
 to propose something based on webhooks, which is why I was surprised not to
 see anything about it in the API. (I'm not prejudging that that is the way
 to go... I'm actually wondering if Marconi has a role to play here.)

We definitely talked about notifications between resources. But,
putting it in the way of the autoscaling API would postpone things
quite a bit, whereas we don't really need it for the first phase. If
we use the member concept, we can provide a first integration step,
where the only missing thing would be rolling updates.

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Zane Bitter

On 21/11/13 18:44, Christopher Armstrong wrote:


2) It relies on a plugin being present for any type of thing you
might want to notify.


I don't understand this point. What do you mean by a plugin? I was
assuming OS::Neutron::PoolMember (not LoadBalancerMember -- I went and
looked up the actual name) would become a standard Heat resource, not a
third-party thing (though third parties could provide their own through
the usual heat extension mechanisms).


I mean it requires a resource type plugin written in Python. So cloud 
operators could provide their own implementations, but ordinary users 
could not.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Christopher Armstrong
On Thu, Nov 21, 2013 at 12:31 PM, Zane Bitter zbit...@redhat.com wrote:

 On 21/11/13 18:44, Christopher Armstrong wrote:


 2) It relies on a plugin being present for any type of thing you
 might want to notify.


 I don't understand this point. What do you mean by a plugin? I was
 assuming OS::Neutron::PoolMember (not LoadBalancerMember -- I went and
 looked up the actual name) would become a standard Heat resource, not a
 third-party thing (though third parties could provide their own through
 the usual heat extension mechanisms).


 I mean it requires a resource type plugin written in Python. So cloud
 operators could provide their own implementations, but ordinary users could
 not.


Okay, but that sounds like a general problem to solve (custom third-party
plugins supplied by the user instead of cloud operators, which is an idea I
really love btw), and I don't see why it should be a point against the idea
of simply using a Neutron::PoolMember in a scaling unit.

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-20 Thread Christopher Armstrong
On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter zbit...@redhat.com wrote:

 On 19/11/13 19:14, Christopher Armstrong wrote:



[snip]



 It'd be interesting to see some examples, I think. I'll provide some
 examples of my proposals, with the following caveats:


 Excellent idea, thanks :)


  - I'm assuming a separation of launch configuration from scaling group,
 as you proposed -- I don't really have a problem with this.
 - I'm also writing these examples with the plural resources parameter,
 which there has been some bikeshedding around - I believe the structure
 can be the same whether we go with singular, plural, or even
 whole-template-as-a-string.

 # trivial example: scaling a single server

 POST /launch_configs

 {
  name: my-launch-config,
  resources: {
  my-server: {
  type: OS::Nova::Server,
  properties: {
  image: my-image,
  flavor: my-flavor, # etc...
  }
  }
  }
 }


 This case would be simpler with my proposal, assuming we allow a default:


  POST /launch_configs

  {
   name: my-launch-config,
   parameters: {

   image: my-image,
   flavor: my-flavor, # etc...
   }
  }

 If we don't allow a default it might be something more like:



  POST /launch_configs

  {
   name: my-launch-config,
   parameters: {

   image: my-image,
   flavor: my-flavor, # etc...
   },
   provider_template_uri: http://heat.example.com/
 tenant_id/resources_types/OS::Nova::Server/template

  }


  POST /groups

 {
  name: group-name,
  launch_config: my-launch-config,
  min_size: 0,
  max_size: 0,
 }


 This would be the same.



 (and then, the user would continue on to create a policy that scales the
 group, etc)

 # complex example: scaling a server with an attached volume

 POST /launch_configs

 {
  name: my-launch-config,
  resources: {
  my-volume: {
  type: OS::Cinder::Volume,
  properties: {
  # volume properties...
  }
  },
  my-server: {
  type: OS::Nova::Server,
  properties: {
  image: my-image,
  flavor: my-flavor, # etc...
  }
  },
  my-volume-attachment: {
  type: OS::Cinder::VolumeAttachment,
  properties: {
  volume_id: {get_resource: my-volume},
  instance_uuid: {get_resource: my-server},
  mountpoint: /mnt/volume
  }
  }
  }
 }


 This appears slightly more complex on the surface; I'll explain why in a
 second.


  POST /launch_configs

  {
   name: my-launch-config,
   parameters: {

   image: my-image,
   flavor: my-flavor, # etc...
   }
   provider_template: {
   hot_format_version: some random date,
   parameters {
   image_name: {
   type: string
   },
   flavor: {
   type: string
   } # c. ...

   },
   resources {
   my-volume: {
   type: OS::Cinder::Volume,
   properties: {
   # volume properties...
   }
   },
   my-server: {
   type: OS::Nova::Server,
   properties: {
   image: {get_param: image_name},
   flavor: {get_param: flavor}, # etc...

  }
   },
   my-volume-attachment: {
   type: OS::Cinder::VolumeAttachment,
   properties: {
   volume_id: {get_resource: my-volume},
   instance_uuid: {get_resource: my-server},
   mountpoint: /mnt/volume
   }
   }
   },
   outputs {
public_ip_address: {
Value: {get_attr: [my-server,
 public_ip_address]} # c. ...
   }
   }
  }

 (BTW the template could just as easily be included in the group rather
 than the launch config. If we put it here we can validate the parameters
 though.)

 There are a number of advantages to including the whole template, rather
 than a resource snippet:
  - Templates are versioned!
  - Templates accept parameters
  - Templates can provide outputs - we'll need these when we go to do
 notifications (e.g. to load balancers).

 The obvious downside is there's a lot of fiddly stuff to include in the
 template (hooking up the parameters and outputs), but this is almost
 entirely mitigated by the fact that the user can get a template, ready
 built with the server hooked up, from the API by hitting
 /resource_types/OS::Nova::Server/template and just edit in the Volume and
 VolumeAttachment. (For a different example, they could of course begin with
 a different 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-20 Thread Zane Bitter

On 20/11/13 16:07, Christopher Armstrong wrote:

On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter zbit...@redhat.com
mailto:zbit...@redhat.com wrote:

On 19/11/13 19:14, Christopher Armstrong wrote:



[snip]




There are a number of advantages to including the whole template,
rather than a resource snippet:
  - Templates are versioned!
  - Templates accept parameters
  - Templates can provide outputs - we'll need these when we go to
do notifications (e.g. to load balancers).

The obvious downside is there's a lot of fiddly stuff to include in
the template (hooking up the parameters and outputs), but this is
almost entirely mitigated by the fact that the user can get a
template, ready built with the server hooked up, from the API by
hitting /resource_types/OS::Nova::__Server/template and just edit in
the Volume and VolumeAttachment. (For a different example, they
could of course begin with a different resource type - the launch
config accepts any keys for parameters.) To the extent that this
encourages people to write templates where the outputs are actually
supplied, it will help reduce the number of people complaining their
load balancers aren't forwarding any traffic because they didn't
surface the IP addresses.



My immediate reaction is to counter-propose just specifying an entire
template instead of parameters and template separately, but I think the


As an API, I think that would be fine, though inconsistent between the 
default (no template provided) and non-default cases. When it comes to 
implementing Heat resources to represent those, however, it would make 
the templates much less composable. If you wanted to reference anything 
from the surrounding template (including parameters), you would have to 
define the template inline and resolve references there. Whereas if you 
can pass parameters, then you only need to include the template from a 
separate file, or to reference a URL.



crux will be this point you mentioned:

  - Templates can provide outputs - we'll need these when we go to do
notifications (e.g. to load balancers).

Can you explain this in a bit more depth? It seems like whatever it is
may be the real deciding factor that means that your proposal can do
something that a resources or a template parameter can't do.  I


What I'm proposing _is_ a template parameter... I don't see any 
difference. A resources parameter couldn't do this though, because the 
resources section obviously doesn't contain outputs.


In any event, when we notify a Load Balancer, or _any_ type of thing 
that needs a notification, we need to pass it some data. At the moment, 
for load balancers, we pass the IDs of the servers (I originally thought 
we passed IP addresses directly, hence possibly misleading comments 
earlier). But our scaling unit is a template which may contain multiple 
servers, or no servers. And the thing that gets notified may not even be 
a load balancer. So there is no way to infer what the right data to send 
is, we will need the user to tell us. The outputs section of the 
template seems like a good mechanism to do it.



thought we had a workable solution with the LoadBalancerMember idea,
which you would use in a way somewhat similar to CinderVolumeAttachment
in the above example, to hook servers up to load balancers.


I haven't seen this proposal at all. Do you have a link? How does it 
handle the problem of wanting to notify an arbitrary service (i.e. not 
necessarily a load balancer)?


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-20 Thread Christopher Armstrong
On Wed, Nov 20, 2013 at 2:07 PM, Zane Bitter zbit...@redhat.com wrote:

 On 20/11/13 16:07, Christopher Armstrong wrote:

 On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter zbit...@redhat.com
 mailto:zbit...@redhat.com wrote:

 On 19/11/13 19:14, Christopher Armstrong wrote:



 [snip]




 There are a number of advantages to including the whole template,
 rather than a resource snippet:
   - Templates are versioned!
   - Templates accept parameters
   - Templates can provide outputs - we'll need these when we go to
 do notifications (e.g. to load balancers).

 The obvious downside is there's a lot of fiddly stuff to include in
 the template (hooking up the parameters and outputs), but this is
 almost entirely mitigated by the fact that the user can get a
 template, ready built with the server hooked up, from the API by
 hitting /resource_types/OS::Nova::__Server/template and just edit in

 the Volume and VolumeAttachment. (For a different example, they
 could of course begin with a different resource type - the launch
 config accepts any keys for parameters.) To the extent that this
 encourages people to write templates where the outputs are actually
 supplied, it will help reduce the number of people complaining their
 load balancers aren't forwarding any traffic because they didn't
 surface the IP addresses.



 My immediate reaction is to counter-propose just specifying an entire
 template instead of parameters and template separately, but I think the


 As an API, I think that would be fine, though inconsistent between the
 default (no template provided) and non-default cases. When it comes to
 implementing Heat resources to represent those, however, it would make the
 templates much less composable. If you wanted to reference anything from
 the surrounding template (including parameters), you would have to define
 the template inline and resolve references there. Whereas if you can pass
 parameters, then you only need to include the template from a separate
 file, or to reference a URL.


Yeah, that's a good point, but I could also imagine if you're *not*
actually trying to dynamically parameterize the flavor and image in the
above example, you wouldn't need to use parameters at all, so the example
could get a bit shorter.

(to diverge from the topic momentarily) I've been getting a little bit
concerned about how we'll deal with templates-within-templates... It seems
a *bit* unfortunate that users will be forced to use separate files for
their scaled and outer templates, instead of having the option to specify
them inline, but I can't think of a very satisfying way to solve that
problem. Maybe an escape function that prevents heat from evaluating any
of the function calls inside?



  crux will be this point you mentioned:

   - Templates can provide outputs - we'll need these when we go to do
 notifications (e.g. to load balancers).

 Can you explain this in a bit more depth? It seems like whatever it is
 may be the real deciding factor that means that your proposal can do
 something that a resources or a template parameter can't do.  I


 What I'm proposing _is_ a template parameter... I don't see any
 difference. A resources parameter couldn't do this though, because the
 resources section obviously doesn't contain outputs.

 In any event, when we notify a Load Balancer, or _any_ type of thing that
 needs a notification, we need to pass it some data. At the moment, for load
 balancers, we pass the IDs of the servers (I originally thought we passed
 IP addresses directly, hence possibly misleading comments earlier). But our
 scaling unit is a template which may contain multiple servers, or no
 servers. And the thing that gets notified may not even be a load balancer.
 So there is no way to infer what the right data to send is, we will need
 the user to tell us. The outputs section of the template seems like a good
 mechanism to do it.


Hmm, okay. I still don't think I understand entirely how you expect outputs
to be used, especially in context of the AS API. Can you give an example of
how they would actually be used? I guess I don't yet understand all the
implications of notification -- is that a new idea for icehouse?

For what it's worth, I'm coming around to the idea of specifying the whole
template in the API (or as a URI), but I'd still like to make sure I have a
really good idea of the benefits it grants to justify the extra verbosity.


  thought we had a workable solution with the LoadBalancerMember idea,
 which you would use in a way somewhat similar to CinderVolumeAttachment
 in the above example, to hook servers up to load balancers.


 I haven't seen this proposal at all. Do you have a link? How does it
 handle the problem of wanting to notify an arbitrary service (i.e. not
 necessarily a load balancer)?


It's been described in the autoscaling wiki page for a while, and I thought
the LBMember idea was discussed at 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-19 Thread Christopher Armstrong
On Mon, Nov 18, 2013 at 5:57 AM, Zane Bitter zbit...@redhat.com wrote:

 On 16/11/13 11:15, Angus Salkeld wrote:

 On 15/11/13 08:46 -0600, Christopher Armstrong wrote:

 On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter zbit...@redhat.com wrote:

  On 15/11/13 02:48, Christopher Armstrong wrote:

  On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld asalk...@redhat.com
 mailto:asalk...@redhat.com wrote:

 On 14/11/13 10:19 -0600, Christopher Armstrong wrote:

 http://docs.heatautoscale.__apiary.io/

 http://docs.heatautoscale.apiary.io/

 I've thrown together a rough sketch of the proposed API for
 autoscaling.
 It's written in API-Blueprint format (which is a simple subset
 of Markdown)
 and provides schemas for inputs and outputs using JSON-Schema.
 The source
 document is currently at
 https://github.com/radix/heat/__raw/as-api-spike/
 autoscaling.__apibp


 https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
 


 Things we still need to figure out:

 - how to scope projects/domains. put them in the URL? get them
 from the
 token?
 - how webhooks are done (though this shouldn't affect the API
 too much;
 they're basically just opaque)

 Please read and comment :)


 Hi Chistopher

 In the group create object you have 'resources'.
 Can you explain what you expect in there? I thought we talked at
 summit about have a unit of scaling as a nested stack.

 The thinking here was:
 - this makes the new config stuff easier to scale (config get
 applied
 Â  per scaling stack)

 - you can potentially place notification resources in the scaling
 Â  stack (think marconi message resource - on-create it sends a
 Â  message)

 - no need for a launchconfig
 - you can place a LoadbalancerMember resource in the scaling stack
 Â  that triggers the loadbalancer to add/remove it from the lb.


 I guess what I am saying is I'd expect an api to a nested stack.


 Well, what I'm thinking now is that instead of resources (a
 mapping of
 resources), just have resource, which can be the template definition
 for a single resource. This would then allow the user to specify a
 Stack
 resource if they want to provide multiple resources. How does that
 sound?


 My thought was this (digging into the implementation here a bit):

 - Basically, the autoscaling code works as it does now: creates a
 template
 containing OS::Nova::Server resources (changed from AWS::EC2::Instance),
 with the properties obtained from the LaunchConfig, and creates a
 stack in
 Heat.
 - LaunchConfig can now contain any properties you like (I'm not 100%
 sure
 about this one*).
 - The user optionally supplies a template. If the template is
 supplied, it
 is passed to Heat and set in the environment as the provider for the
 OS::Nova::Server resource.


  I don't like the idea of binding to OS::Nova::Server specifically for
 autoscaling. I'd rather have the ability to scale *any* resource,
 including
 nested stacks or custom resources. It seems like jumping through hoops to


 big +1 here, autoscaling should not even know what it is scaling, just
 some resource. solum might want to scale all sorts of non-server
 resources (and other users).


 I'm surprised by the negative reaction to what I suggested, which is a
 completely standard use of provider templates. Allowing a user-defined
 stack of resources to stand in for an unrelated resource type is the entire
 point of providers. Everyone says that it's a great feature, but if you try
 to use it for something they call it a hack. Strange.


To clarify this position (which I already did in IRC), replacing one
concrete resource with another that means something in a completely
different domain is a hack -- say, replacing server with group of
related resources. However, replacing OS::Nova::Server with something
which still does something very much like creating a server is reasonable
-- e.g., using a different API like one for creating containers or using a
different cloud provider's API.



 So, allow me to make a slight modification to my proposal:

 - The autoscaling service manages a template containing
 OS::Heat::ScaledResource resources. This is an imaginary resource type that
 is not backed by a plugin in Heat.
 - If no template is supplied by the user, the environment declares another
 resource plugin as the provider for OS::Heat::ScaledResource (by default it
 would be OS::Nova::Server, but this should probably be configurable by the
 deployer... so if you had a region full of Docker containers and no Nova
 servers, you could set it to OS::Docker::Container or something).
 - If a provider template is supplied by the user, it would be specified as
 the provider in the environment file.

 This, I hope, demonstrates that autoscaling needs no knowledge whatsoever
 about what it is scaling to use this 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-19 Thread Steven Dake


On 11/17/2013 01:57 PM, Steve Baker wrote:

On 11/15/2013 05:19 AM, Christopher Armstrong wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for
autoscaling. It's written in API-Blueprint format (which is a simple
subset of Markdown) and provides schemas for inputs and outputs using
JSON-Schema. The source document is currently
at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Apologies if I'm about to re-litigate an old argument, but...

At summit we discussed creating a new endpoint (and new pythonclient)
for autoscaling. Instead I think the autoscaling API could just be added
to the existing heat-api endpoint.

Arguments for just making auto scaling part of heat api include:
* Significantly less development, packaging and deployment configuration
of not creating a heat-autoscaling-api and python-autoscalingclient
* Autoscaling is orchestration (for some definition of orchestration) so
belongs in the orchestration service endpoint
* The autoscaling API includes heat template snippets, so a heat service
is a required dependency for deployers anyway
* End-users are still free to use the autoscaling portion of the heat
API without necessarily being aware of (or directly using) heat
templates and stacks
* It seems acceptable for single endpoints to manage many resources (eg,
the increasingly disparate list of resources available via the neutron API)

Arguments for making a new auto scaling api include:
* Autoscaling is not orchestration (for some narrower definition of
orchestration)
* Autoscaling implementation will be handled by something other than
heat engine (I have assumed the opposite)
(no doubt this list will be added to in this thread)
A separate process can be autoscaled independently of heat-api which is 
a big plus architecturally.


They really do different things, and separating their concerns at the 
process level is a good goal.


I prefer a separate process for these reasons.

Regards
-steve


What do you think?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-19 Thread Zane Bitter

On 19/11/13 19:14, Christopher Armstrong wrote:

On Mon, Nov 18, 2013 at 5:57 AM, Zane Bitter zbit...@redhat.com
mailto:zbit...@redhat.com wrote:

On 16/11/13 11:15, Angus Salkeld wrote:

On 15/11/13 08:46 -0600, Christopher Armstrong wrote:

On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter
zbit...@redhat.com mailto:zbit...@redhat.com wrote:

On 15/11/13 02:48, Christopher Armstrong wrote:

On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld
asalk...@redhat.com mailto:asalk...@redhat.com
mailto:asalk...@redhat.com
mailto:asalk...@redhat.com wrote:

 On 14/11/13 10:19 -0600, Christopher Armstrong
wrote:

http://docs.heatautoscale.__ap__iary.io/
http://apiary.io/

 http://docs.heatautoscale.__apiary.io/
http://docs.heatautoscale.apiary.io/

 I've thrown together a rough sketch of the
proposed API for
 autoscaling.
 It's written in API-Blueprint format (which
is a simple subset
 of Markdown)
 and provides schemas for inputs and outputs
using JSON-Schema.
 The source
 document is currently at
https://github.com/radix/heat/raw/as-api-spike/
https://github.com/radix/heat/__raw/as-api-spike/
autoscaling.__apibp



https://github.com/radix/__heat/raw/as-api-spike/__autoscaling.apibp

https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
 


 Things we still need to figure out:

 - how to scope projects/domains. put them
in the URL? get them
 from the
 token?
 - how webhooks are done (though this
shouldn't affect the API
 too much;
 they're basically just opaque)

 Please read and comment :)


 Hi Chistopher

 In the group create object you have 'resources'.
 Can you explain what you expect in there? I
thought we talked at
 summit about have a unit of scaling as a nested
stack.

 The thinking here was:
 - this makes the new config stuff easier to
scale (config get
applied
 Â  per scaling stack)

 - you can potentially place notification
resources in the scaling
 Â  stack (think marconi message resource -
on-create it sends a
 Â  message)

 - no need for a launchconfig
 - you can place a LoadbalancerMember resource
in the scaling stack
 Â  that triggers the loadbalancer to add/remove
it from the lb.


 I guess what I am saying is I'd expect an api
to a nested stack.


Well, what I'm thinking now is that instead of
resources (a
mapping of
resources), just have resource, which can be the
template definition
for a single resource. This would then allow the
user to specify a
Stack
resource if they want to provide multiple resources.
How does that
sound?


My thought was this (digging into the implementation
here a bit):

- Basically, the autoscaling code works as it does now:
creates a
template
containing OS::Nova::Server resources (changed from
AWS::EC2::Instance),
with the properties obtained from the LaunchConfig, and
creates a
stack in
Heat.
- LaunchConfig can now contain any properties you like
(I'm not 100%
sure
about this one*).
- The user optionally supplies a template. If the
template is
supplied, it
is passed to Heat and set in the environment as the
provider for the
  

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-18 Thread Steve Baker
On 11/18/2013 01:03 PM, Christopher Armstrong wrote:
 On Sun, Nov 17, 2013 at 2:57 PM, Steve Baker sba...@redhat.com
 mailto:sba...@redhat.com wrote:

 On 11/15/2013 05:19 AM, Christopher Armstrong wrote:
  http://docs.heatautoscale.apiary.io/
 
  I've thrown together a rough sketch of the proposed API for
  autoscaling. It's written in API-Blueprint format (which is a simple
  subset of Markdown) and provides schemas for inputs and outputs
 using
  JSON-Schema. The source document is currently
  at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
 
 Apologies if I'm about to re-litigate an old argument, but...

 At summit we discussed creating a new endpoint (and new pythonclient)
 for autoscaling. Instead I think the autoscaling API could just be
 added
 to the existing heat-api endpoint.

 Arguments for just making auto scaling part of heat api include:
 * Significantly less development, packaging and deployment
 configuration
 of not creating a heat-autoscaling-api and python-autoscalingclient
 * Autoscaling is orchestration (for some definition of
 orchestration) so
 belongs in the orchestration service endpoint
 * The autoscaling API includes heat template snippets, so a heat
 service
 is a required dependency for deployers anyway
 * End-users are still free to use the autoscaling portion of the heat
 API without necessarily being aware of (or directly using) heat
 templates and stacks
 * It seems acceptable for single endpoints to manage many
 resources (eg,
 the increasingly disparate list of resources available via the
 neutron API)

 Arguments for making a new auto scaling api include:
 * Autoscaling is not orchestration (for some narrower definition of
 orchestration)
 * Autoscaling implementation will be handled by something other than
 heat engine (I have assumed the opposite)
 (no doubt this list will be added to in this thread)

 What do you think?


 I would be fine with this. Putting the API at the same endpoint as
 Heat's API can be done whether we decide to document the API as a
 separate thing or not. Would you prefer to see it as literally just
 more features added to the Heat API, or an autoscaling API that just
 happens to live at the same endpoint?
I have no preference here. It is currently mostly inside /groups/
anyway, this seems like a reasonable base path.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-18 Thread Zane Bitter

On 17/11/13 21:57, Steve Baker wrote:

On 11/15/2013 05:19 AM, Christopher Armstrong wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for
autoscaling. It's written in API-Blueprint format (which is a simple
subset of Markdown) and provides schemas for inputs and outputs using
JSON-Schema. The source document is currently
at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Apologies if I'm about to re-litigate an old argument, but...

At summit we discussed creating a new endpoint (and new pythonclient)
for autoscaling. Instead I think the autoscaling API could just be added
to the existing heat-api endpoint.


-1


Arguments for just making auto scaling part of heat api include:
* Significantly less development, packaging and deployment configuration
of not creating a heat-autoscaling-api and python-autoscalingclient


Having a separate endpoint does not necessarily mean creating 
heat-autoscaling-api. We can have two endpoints in the keystone catalog 
pointing to the same API process. I always imagined that this would be 
the first step.


It doesn't necessarily require a python-scalingclient either, although I 
would lean toward having one.



* Autoscaling is orchestration (for some definition of orchestration) so
belongs in the orchestration service endpoint
* The autoscaling API includes heat template snippets, so a heat service
is a required dependency for deployers anyway
* End-users are still free to use the autoscaling portion of the heat
API without necessarily being aware of (or directly using) heat
templates and stacks
* It seems acceptable for single endpoints to manage many resources (eg,
the increasingly disparate list of resources available via the neutron API)

Arguments for making a new auto scaling api include:
* Autoscaling is not orchestration (for some narrower definition of
orchestration)
* Autoscaling implementation will be handled by something other than
heat engine (I have assumed the opposite)
(no doubt this list will be added to in this thread)

What do you think?


I support a separate endpoint because it gives us more options in the 
future. We may well reach a point where we decide that autoscaling 
belongs in a separate project (not program), but that option is 
foreclosed to us if we combine it in the same endpoint. Personally I 
think it would be great if we could eventually reduce the coupling 
between autoscaling and Heat to the point where that would be possible.


IMO we should also be giving providers the flexibility to deploy only 
autoscaling publicly, and only deploy Heat for internal access (i.e. by 
services like autoscaling, Trove, Savanna, c.)


In short, we live in an uncertain world and more options for the future 
beats fewer options in the future. The cost of keeping these options 
open does not appear high to me.


cheers,
Zane.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-18 Thread Zane Bitter

On 16/11/13 11:11, Angus Salkeld wrote:

On 15/11/13 16:32 +0100, Zane Bitter wrote:

On 14/11/13 19:53, Christopher Armstrong wrote:

I'm a little unclear as to what point you're making here. Right now, the
launch configuration is specified in the scaling group by the
resources property of the request json body. It's not a full template,
but just a snippet of a set of resources you want scaled.


Right, and this has a couple of effects, particularly for Heat:
1) You can't share a single launch config between scaling groups -
this hurts composability of templates.
2) The AWS::EC2::Launch config wouldn't correspond to a real API, so
we would have to continue to implement it using the current hack.


IMHO we should not let the design be altered by aws resources.
- let lauchconfing be ugly.


So, if our design was clearly better I probably would agree with you. 
But having launchconfig as a thing separate from a scaling group means:


* You can share one launch config between multiple scaling groups - it's 
more composable.
* You can delete a scaling group and then recreate it again without 
respecifying all of the launchconfig parameters (which are the finicky 
part).
* The CLI interface can be much simpler, since you wouldn't have to 
supply all the configuration for the scaling group and the server 
properties at the same time.


So I would would want this whether or not it enabled us to eliminate a 
bunch of magic/hackery/technical debt from our AWS-compatible resource 
plugins. It does though, which is even more reason to do it.



- make the primary interface of a scaling unit be a nested stack (with
   our new config resources etc..)


So, I don't think we disagree in concept here, only about the 
implementation...


Surely, though, we don't want to require the user to provide a Heat 
template for every operation? One of the goals of a separate autoscaling 
API is that you shouldn't need to write Heat templates to use 
autoscaling in the simplest case (though, of course, it's completely 
appropriate to require writing templates expose more powerful features).


That could still be achieved by having a default template that just 
contains an OS::Nova::Server, but now we're back to just disagreeing 
about implementation details.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-18 Thread Zane Bitter

On 16/11/13 11:15, Angus Salkeld wrote:

On 15/11/13 08:46 -0600, Christopher Armstrong wrote:

On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter zbit...@redhat.com wrote:


On 15/11/13 02:48, Christopher Armstrong wrote:


On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld asalk...@redhat.com
mailto:asalk...@redhat.com wrote:

On 14/11/13 10:19 -0600, Christopher Armstrong wrote:

http://docs.heatautoscale.__apiary.io/

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for
autoscaling.
It's written in API-Blueprint format (which is a simple subset
of Markdown)
and provides schemas for inputs and outputs using JSON-Schema.
The source
document is currently at
https://github.com/radix/heat/__raw/as-api-spike/
autoscaling.__apibp


https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp



Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them
from the
token?
- how webhooks are done (though this shouldn't affect the API
too much;
they're basically just opaque)

Please read and comment :)


Hi Chistopher

In the group create object you have 'resources'.
Can you explain what you expect in there? I thought we talked at
summit about have a unit of scaling as a nested stack.

The thinking here was:
- this makes the new config stuff easier to scale (config get
applied
  per scaling stack)

- you can potentially place notification resources in the scaling
  stack (think marconi message resource - on-create it sends a
  message)

- no need for a launchconfig
- you can place a LoadbalancerMember resource in the scaling stack
  that triggers the loadbalancer to add/remove it from the lb.


I guess what I am saying is I'd expect an api to a nested stack.


Well, what I'm thinking now is that instead of resources (a
mapping of
resources), just have resource, which can be the template definition
for a single resource. This would then allow the user to specify a
Stack
resource if they want to provide multiple resources. How does that
sound?



My thought was this (digging into the implementation here a bit):

- Basically, the autoscaling code works as it does now: creates a
template
containing OS::Nova::Server resources (changed from AWS::EC2::Instance),
with the properties obtained from the LaunchConfig, and creates a
stack in
Heat.
- LaunchConfig can now contain any properties you like (I'm not 100%
sure
about this one*).
- The user optionally supplies a template. If the template is
supplied, it
is passed to Heat and set in the environment as the provider for the
OS::Nova::Server resource.



I don't like the idea of binding to OS::Nova::Server specifically for
autoscaling. I'd rather have the ability to scale *any* resource,
including
nested stacks or custom resources. It seems like jumping through hoops to


big +1 here, autoscaling should not even know what it is scaling, just
some resource. solum might want to scale all sorts of non-server
resources (and other users).


I'm surprised by the negative reaction to what I suggested, which is a 
completely standard use of provider templates. Allowing a user-defined 
stack of resources to stand in for an unrelated resource type is the 
entire point of providers. Everyone says that it's a great feature, but 
if you try to use it for something they call it a hack. Strange.


So, allow me to make a slight modification to my proposal:

- The autoscaling service manages a template containing 
OS::Heat::ScaledResource resources. This is an imaginary resource type 
that is not backed by a plugin in Heat.
- If no template is supplied by the user, the environment declares 
another resource plugin as the provider for OS::Heat::ScaledResource (by 
default it would be OS::Nova::Server, but this should probably be 
configurable by the deployer... so if you had a region full of Docker 
containers and no Nova servers, you could set it to 
OS::Docker::Container or something).
- If a provider template is supplied by the user, it would be specified 
as the provider in the environment file.


This, I hope, demonstrates that autoscaling needs no knowledge 
whatsoever about what it is scaling to use this approach.


The only way that it would require some knowledge is if we restricted 
the properties that can be passed to the launch config to match some 
particular interface, but I believe we already have a consensus that we 
don't want to do that.



This assumes that we need a default resource type, though it would be 
substantially unchanged if we didn't have a default resource type (we'd 
just make supplying the template mandatory). In my reply to you other 
post I put forward an argument why I don't think that we should have no 
default. If your objection is that the default is of a different 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-18 Thread Zane Bitter

On 15/11/13 21:06, Mike Spreitzer wrote:

Zane Bitter zbit...@redhat.com wrote on 11/14/2013 12:56:22 PM:

  ...
  My 2c: the way I designed the Heat API was such that extant stacks can
  be addressed uniquely by name. Humans are pretty good with names, not so
  much with 128 bit numbers. The consequences of this for the design were:
- names must be unique per-tenant
- the tenant-id appears in the endpoint URL
 
  However, the rest of OpenStack seems to have gone in a direction where
  the name is really just a comment field, everything is addressed only
  by UUID. A consequence of this is that it renders the tenant-id in the
  URL pointless, so many projects are removing it.
 
  Unfortunately, one result is that if you create a resource and e.g. miss
  the Created response for any reason and thus do not have the UUID, there
  is now no safe, general automated way to delete it again. (There are
  obviously heuristics you could try.) To solve this problem, there is a
  proposal floating about for clients to provide another unique ID when
  making the request, which would render a retry of the request
  idempotent. That's insufficient, though, because if you decide to roll
  back instead of retry you still need a way to delete using only this ID.
 
  So basically, that design sucks for both humans (who have to remember
  UUIDs instead of names) and machines (Heat). However, it appears that I
  am in a minority of one on this point, so take it with a grain of salt.

I have been thinking about this too.  I tried to convince my group that
we should give up on assigning UUIDs in our system, and rather make it
the client's problem to assign the unique ID of what corresponds to a
Heat stack.  Just use one unique ID, supplied by the client.  Simple,
clean, and it hurts most peoples' heads.  Biggest concern was: how are
the clients going to be sure they do not mess up?  That does not seem
tough to me.  However, there is a less demanding approach.  Introduce an
operation in the API that allocates the stack's unique ID.  It does
nothing else for a stack, just returns the unique ID.  If the reply
makes it back into the client's persistent store, all is well.  If not,
the only thing that has been wasted is an ID; an unused ID can be reaped
after a satisfyingly long period of time --- and if even that was too
soon then the problem is easily detected and recovered from.


There was some discussion of this in different context here:

http://lists.openstack.org/pipermail/openstack-dev/2013-November/019316.html

What you're suggesting is not a terrible idea in general, but 
implementing it properly would be an even bigger departure from the way 
OpenStack does things than some other ideas that are already in the 
too-hard basket for departing too far from the way OpenStack does 
things. So I don't think it will work for this project.



  ... webhooks ...

So if we want to do this right, it has to go something like the
following, right?  The client has to create a trust for the thing that
is going to invoke the webhook; using that, the webhook invocation can
be properly authorized.


Yes, exactly. This was touched on only briefly in the thread[1], but 
IIRC there was some follow-up to this effect on IRC that you probably 
missed.


cheers,
Zane.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-November/019313.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-18 Thread Angus Salkeld

On 18/11/13 12:57 +0100, Zane Bitter wrote:

On 16/11/13 11:15, Angus Salkeld wrote:

On 15/11/13 08:46 -0600, Christopher Armstrong wrote:

On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter zbit...@redhat.com wrote:


On 15/11/13 02:48, Christopher Armstrong wrote:


On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld asalk...@redhat.com
mailto:asalk...@redhat.com wrote:

   On 14/11/13 10:19 -0600, Christopher Armstrong wrote:

   http://docs.heatautoscale.__apiary.io/

   http://docs.heatautoscale.apiary.io/

   I've thrown together a rough sketch of the proposed API for
   autoscaling.
   It's written in API-Blueprint format (which is a simple subset
   of Markdown)
   and provides schemas for inputs and outputs using JSON-Schema.
   The source
   document is currently at
   https://github.com/radix/heat/__raw/as-api-spike/
autoscaling.__apibp


https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp





   Things we still need to figure out:

   - how to scope projects/domains. put them in the URL? get them
   from the
   token?
   - how webhooks are done (though this shouldn't affect the API
   too much;
   they're basically just opaque)

   Please read and comment :)


   Hi Chistopher

   In the group create object you have 'resources'.
   Can you explain what you expect in there? I thought we talked at
   summit about have a unit of scaling as a nested stack.

   The thinking here was:
   - this makes the new config stuff easier to scale (config get
applied
   Â  per scaling stack)

   - you can potentially place notification resources in the scaling
   Â  stack (think marconi message resource - on-create it sends a
   Â  message)

   - no need for a launchconfig
   - you can place a LoadbalancerMember resource in the scaling stack
   Â  that triggers the loadbalancer to add/remove it from the lb.


   I guess what I am saying is I'd expect an api to a nested stack.


Well, what I'm thinking now is that instead of resources (a
mapping of
resources), just have resource, which can be the template definition
for a single resource. This would then allow the user to specify a
Stack
resource if they want to provide multiple resources. How does that
sound?



My thought was this (digging into the implementation here a bit):

- Basically, the autoscaling code works as it does now: creates a
template
containing OS::Nova::Server resources (changed from AWS::EC2::Instance),
with the properties obtained from the LaunchConfig, and creates a
stack in
Heat.
- LaunchConfig can now contain any properties you like (I'm not 100%
sure
about this one*).
- The user optionally supplies a template. If the template is
supplied, it
is passed to Heat and set in the environment as the provider for the
OS::Nova::Server resource.



I don't like the idea of binding to OS::Nova::Server specifically for
autoscaling. I'd rather have the ability to scale *any* resource,
including
nested stacks or custom resources. It seems like jumping through hoops to


big +1 here, autoscaling should not even know what it is scaling, just
some resource. solum might want to scale all sorts of non-server
resources (and other users).


I'm surprised by the negative reaction to what I suggested, which is 
a completely standard use of provider templates. Allowing a 
user-defined stack of resources to stand in for an unrelated resource 
type is the entire point of providers. Everyone says that it's a 
great feature, but if you try to use it for something they call it a 
hack. Strange.


I am not against templateResources.



So, allow me to make a slight modification to my proposal:

- The autoscaling service manages a template containing 
OS::Heat::ScaledResource resources. This is an imaginary resource 
type that is not backed by a plugin in Heat.


Just an interface/property definition really?

- If no template is supplied by the user, the environment declares 
another resource plugin as the provider for OS::Heat::ScaledResource 
(by default it would be OS::Nova::Server, but this should probably be 
configurable by the deployer... so if you had a region full of Docker 
containers and no Nova servers, you could set it to 
OS::Docker::Container or something).
- If a provider template is supplied by the user, it would be 
specified as the provider in the environment file.


This, I hope, demonstrates that autoscaling needs no knowledge 
whatsoever about what it is scaling to use this approach.


The only way that it would require some knowledge is if we restricted 
the properties that can be passed to the launch config to match some 
particular interface, but I believe we already have a consensus that 
we don't want to do that.



This assumes that we need a default resource type, though it would be 
substantially unchanged if we didn't have a default resource type 
(we'd just make supplying the template mandatory). In my reply to you 
other post I put forward an argument why I 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-17 Thread Steve Baker
On 11/15/2013 05:19 AM, Christopher Armstrong wrote:
 http://docs.heatautoscale.apiary.io/

 I've thrown together a rough sketch of the proposed API for
 autoscaling. It's written in API-Blueprint format (which is a simple
 subset of Markdown) and provides schemas for inputs and outputs using
 JSON-Schema. The source document is currently
 at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


 Things we still need to figure out:

 - how to scope projects/domains. put them in the URL? get them from
 the token?
 - how webhooks are done (though this shouldn't affect the API too
 much; they're basically just opaque)

 Please read and comment :)

Looking at the scaling policy I see

|change: {
  type: integer,
  description: a number that has an effect based on change_type.},
change_type: {
  type: string,
  enum: [change_in_capacity,
   percentage_change_in_capacity,
   exact_capacity],
  description: describes the way that 'change' will apply to the active 
capacity of the scaling group},||

|

There could be an issue with percentage_change_in_capacity whenever that
evaluates to needing to scale by between zero and one resources. I
thought that maybe the percentage_change_in_capacity option should be
dropped, but it might be enough to always round up any non-zero capacity
change.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-17 Thread Steve Baker
On 11/15/2013 05:19 AM, Christopher Armstrong wrote:
 http://docs.heatautoscale.apiary.io/

 I've thrown together a rough sketch of the proposed API for
 autoscaling. It's written in API-Blueprint format (which is a simple
 subset of Markdown) and provides schemas for inputs and outputs using
 JSON-Schema. The source document is currently
 at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp

Apologies if I'm about to re-litigate an old argument, but...

At summit we discussed creating a new endpoint (and new pythonclient)
for autoscaling. Instead I think the autoscaling API could just be added
to the existing heat-api endpoint.

Arguments for just making auto scaling part of heat api include:
* Significantly less development, packaging and deployment configuration
of not creating a heat-autoscaling-api and python-autoscalingclient
* Autoscaling is orchestration (for some definition of orchestration) so
belongs in the orchestration service endpoint
* The autoscaling API includes heat template snippets, so a heat service
is a required dependency for deployers anyway
* End-users are still free to use the autoscaling portion of the heat
API without necessarily being aware of (or directly using) heat
templates and stacks
* It seems acceptable for single endpoints to manage many resources (eg,
the increasingly disparate list of resources available via the neutron API)

Arguments for making a new auto scaling api include:
* Autoscaling is not orchestration (for some narrower definition of
orchestration)
* Autoscaling implementation will be handled by something other than
heat engine (I have assumed the opposite)
(no doubt this list will be added to in this thread)

What do you think?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-17 Thread Christopher Armstrong
On Sun, Nov 17, 2013 at 2:57 PM, Steve Baker sba...@redhat.com wrote:

 On 11/15/2013 05:19 AM, Christopher Armstrong wrote:
  http://docs.heatautoscale.apiary.io/
 
  I've thrown together a rough sketch of the proposed API for
  autoscaling. It's written in API-Blueprint format (which is a simple
  subset of Markdown) and provides schemas for inputs and outputs using
  JSON-Schema. The source document is currently
  at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp
 
 Apologies if I'm about to re-litigate an old argument, but...

 At summit we discussed creating a new endpoint (and new pythonclient)
 for autoscaling. Instead I think the autoscaling API could just be added
 to the existing heat-api endpoint.

 Arguments for just making auto scaling part of heat api include:
 * Significantly less development, packaging and deployment configuration
 of not creating a heat-autoscaling-api and python-autoscalingclient
 * Autoscaling is orchestration (for some definition of orchestration) so
 belongs in the orchestration service endpoint
 * The autoscaling API includes heat template snippets, so a heat service
 is a required dependency for deployers anyway
 * End-users are still free to use the autoscaling portion of the heat
 API without necessarily being aware of (or directly using) heat
 templates and stacks
 * It seems acceptable for single endpoints to manage many resources (eg,
 the increasingly disparate list of resources available via the neutron API)

 Arguments for making a new auto scaling api include:
 * Autoscaling is not orchestration (for some narrower definition of
 orchestration)
 * Autoscaling implementation will be handled by something other than
 heat engine (I have assumed the opposite)
 (no doubt this list will be added to in this thread)

 What do you think?


I would be fine with this. Putting the API at the same endpoint as Heat's
API can be done whether we decide to document the API as a separate thing
or not. Would you prefer to see it as literally just more features added to
the Heat API, or an autoscaling API that just happens to live at the same
endpoint?

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-16 Thread Angus Salkeld

On 15/11/13 16:32 +0100, Zane Bitter wrote:

On 14/11/13 19:53, Christopher Armstrong wrote:

Thanks for the comments, Zane.


On Thu, Nov 14, 2013 at 9:56 AM, Zane Bitter zbit...@redhat.com
mailto:zbit...@redhat.com wrote:
   A few comments...

   #1 thing is that the launch configuration needs to be somehow
   represented. In general we want the launch configuration to be a
   provider template, but we'll want to create a shortcut for the
   obvious case of just scaling servers. Maybe we pass a provider
   template (or URL) as well as parameters, and the former is optional.


I'm a little unclear as to what point you're making here. Right now, the
launch configuration is specified in the scaling group by the
resources property of the request json body. It's not a full template,
but just a snippet of a set of resources you want scaled.


Right, and this has a couple of effects, particularly for Heat:
1) You can't share a single launch config between scaling groups - 
this hurts composability of templates.
2) The AWS::EC2::Launch config wouldn't correspond to a real API, so 
we would have to continue to implement it using the current hack.


IMHO we should not let the design be altered by aws resources.
- let lauchconfing be ugly.
- make the primary interface of a scaling unit be a nested stack (with
  our new config resources etc..)



Fixing (2) is one of my top two reasons for even having an autoscaling API.


As an aside, maybe we should replace this with a singlular resource
and allow people to use a Stack resource if they want to represent
multiple resources.

I guess we can have a simpler API for using an old-style,
server-specific launch configuration, but I am skeptical of the
benefit, since specifying a single Instance resource is pretty simple.


See my other message for implementation suggestion.


   I'm not sure I understand the webhooks part... webhook-exec is the
   thing that e.g. Ceilometer will use to signal an alarm, right? Why
   is it not called something like
   /groups/{group_id}/policies/{__policy_id}/alarm ? (Maybe because it
   requires different auth middleware? Or does it?)


Mostly because it's unnecessary. The idea was to generate a secret,
opaque, revokable ID that maps to the specific policy.
Â


Seems like it would be nice to look at the webhook URL and be able to 
figure out what it's for. I disagree that a secret URL is sufficient 
here, but even if it were it could be something like:


/groups/{group_id}/policies/{policy_name}/alarm/{secret_code}



   And the other ones are setting up the notification actions? Can we
   call them notifications instead of webhooks? (After all, in the
   future we will probably want to add Marconi support, and maybe even
   Mistral support.) And why are these attached to the policy? Isn't
   the notification connected to changes in the group, rather than
   anything specific to the policy? Am I misunderstanding how this
   works? What is the difference between 'uri' and 'capability_uri'?



Policies represent ways to change a group (add +5% to this group).
Webhooks execute policies.

A capability URI is a URI which represents a capability to do
something all by itself. capability_uri would be the webhook-exec thing.
The regular URI would be the thing under
/groups/{group_id}/policies/{policy_id}/webhooks. That URI needs to
exist so you can perform the DELETE operation on it. (but you can't
DELETE the capability_uri, only POST to it to execute the policy).
Â


Oh, I was misunderstanding... this doesn't set up the notifications, 
it allows you to create and revoke multiple webhook URLs for the 
alarms.


I have reservations about this whole area.


I'll think more about webhooks vs notifications.


Seems like a way to configure the notifications is missing altogether.

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-16 Thread Angus Salkeld

On 15/11/13 08:46 -0600, Christopher Armstrong wrote:

On Fri, Nov 15, 2013 at 3:57 AM, Zane Bitter zbit...@redhat.com wrote:


On 15/11/13 02:48, Christopher Armstrong wrote:


On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld asalk...@redhat.com
mailto:asalk...@redhat.com wrote:

On 14/11/13 10:19 -0600, Christopher Armstrong wrote:

http://docs.heatautoscale.__apiary.io/

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for
autoscaling.
It's written in API-Blueprint format (which is a simple subset
of Markdown)
and provides schemas for inputs and outputs using JSON-Schema.
The source
document is currently at
https://github.com/radix/heat/__raw/as-api-spike/
autoscaling.__apibp

https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp



Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them
from the
token?
- how webhooks are done (though this shouldn't affect the API
too much;
they're basically just opaque)

Please read and comment :)


Hi Chistopher

In the group create object you have 'resources'.
Can you explain what you expect in there? I thought we talked at
summit about have a unit of scaling as a nested stack.

The thinking here was:
- this makes the new config stuff easier to scale (config get applied
  per scaling stack)

- you can potentially place notification resources in the scaling
  stack (think marconi message resource - on-create it sends a
  message)

- no need for a launchconfig
- you can place a LoadbalancerMember resource in the scaling stack
  that triggers the loadbalancer to add/remove it from the lb.


I guess what I am saying is I'd expect an api to a nested stack.


Well, what I'm thinking now is that instead of resources (a mapping of
resources), just have resource, which can be the template definition
for a single resource. This would then allow the user to specify a Stack
resource if they want to provide multiple resources. How does that sound?



My thought was this (digging into the implementation here a bit):

- Basically, the autoscaling code works as it does now: creates a template
containing OS::Nova::Server resources (changed from AWS::EC2::Instance),
with the properties obtained from the LaunchConfig, and creates a stack in
Heat.
- LaunchConfig can now contain any properties you like (I'm not 100% sure
about this one*).
- The user optionally supplies a template. If the template is supplied, it
is passed to Heat and set in the environment as the provider for the
OS::Nova::Server resource.



I don't like the idea of binding to OS::Nova::Server specifically for
autoscaling. I'd rather have the ability to scale *any* resource, including
nested stacks or custom resources. It seems like jumping through hoops to


big +1 here, autoscaling should not even know what it is scaling, just
some resource. solum might want to scale all sorts of non-server
resources (and other users).


support custom resources by overriding OS::Nova::Server instead of just
allowing users to specify the resource that they really want directly.

How about we offer two types of configuration, one which supports
arbitrary resources and one which supports OS::Nova::Server-specific launch
configurations? We could just add a type=server / type=resource
parameter which specifies which type of scaling unit to use.



How about just one nested-stack.
Keep it simple.





This should require no substantive changes to the code since it uses
existing abstractions, it makes the common case the default, and it avoids
the overhead of nested stacks in the default case.


-1



cheers,
Zane.

* One thing the existing LaunchConfig does is steer you in the direction
of not doing things that won't work - e.g. you can't specify a volume to
attach to the server, because you can't attach a single boot volume to
multiple servers. The way to do that correctly will be to include the
volume in the provider template. So maybe we should define a set of allowed
properties for the LaunchConfig, and make people hard-code anything else
they want to do in the provider template, just to make it harder to do
wrong things. I'm worried that would make composition in general harder
though.



If we offer a type=server then the launch configuration can be restricted
to things that can automatically be scaled. I think if users want more
interesting scaling units they should use resources and specify both a
server and a volume as heat resources.

--
Christopher Armstrong
http://radix.twistedmatrix.com/
http://planet-if.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-15 Thread Zane Bitter

On 15/11/13 02:48, Christopher Armstrong wrote:

On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld asalk...@redhat.com
mailto:asalk...@redhat.com wrote:

On 14/11/13 10:19 -0600, Christopher Armstrong wrote:

http://docs.heatautoscale.__apiary.io/
http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for
autoscaling.
It's written in API-Blueprint format (which is a simple subset
of Markdown)
and provides schemas for inputs and outputs using JSON-Schema.
The source
document is currently at
https://github.com/radix/heat/__raw/as-api-spike/autoscaling.__apibp
https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them
from the
token?
- how webhooks are done (though this shouldn't affect the API
too much;
they're basically just opaque)

Please read and comment :)


Hi Chistopher

In the group create object you have 'resources'.
Can you explain what you expect in there? I thought we talked at
summit about have a unit of scaling as a nested stack.

The thinking here was:
- this makes the new config stuff easier to scale (config get applied
  per scaling stack)
- you can potentially place notification resources in the scaling
  stack (think marconi message resource - on-create it sends a
  message)
- no need for a launchconfig
- you can place a LoadbalancerMember resource in the scaling stack
  that triggers the loadbalancer to add/remove it from the lb.

I guess what I am saying is I'd expect an api to a nested stack.


Well, what I'm thinking now is that instead of resources (a mapping of
resources), just have resource, which can be the template definition
for a single resource. This would then allow the user to specify a Stack
resource if they want to provide multiple resources. How does that sound?


My thought was this (digging into the implementation here a bit):

- Basically, the autoscaling code works as it does now: creates a 
template containing OS::Nova::Server resources (changed from 
AWS::EC2::Instance), with the properties obtained from the LaunchConfig, 
and creates a stack in Heat.
- LaunchConfig can now contain any properties you like (I'm not 100% 
sure about this one*).
- The user optionally supplies a template. If the template is supplied, 
it is passed to Heat and set in the environment as the provider for the 
OS::Nova::Server resource.


This should require no substantive changes to the code since it uses 
existing abstractions, it makes the common case the default, and it 
avoids the overhead of nested stacks in the default case.


cheers,
Zane.

* One thing the existing LaunchConfig does is steer you in the direction 
of not doing things that won't work - e.g. you can't specify a volume to 
attach to the server, because you can't attach a single boot volume to 
multiple servers. The way to do that correctly will be to include the 
volume in the provider template. So maybe we should define a set of 
allowed properties for the LaunchConfig, and make people hard-code 
anything else they want to do in the provider template, just to make it 
harder to do wrong things. I'm worried that would make composition in 
general harder though.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-15 Thread Zane Bitter

On 14/11/13 19:58, Christopher Armstrong wrote:

On Thu, Nov 14, 2013 at 10:44 AM, Zane Bitter zbit...@redhat.com
mailto:zbit...@redhat.com wrote:

On 14/11/13 18:51, Randall Burt wrote:

Perhaps, but I also miss important information as a legitimate
caller as
to whether or not my scaling action actually happened or I've been a
little too aggressive with my curl commands. The fact that I get
anything other than 404 (which the spec returns if its not a
legit hook)
means I've found *something* and can simply call it endlessly in
a loop
causing havoc. Perhaps the web hooks *should* be authenticated? This
seems like a pretty large hole to me, especially if I can max
someone's
resources by guessing the right url.


Web hooks MUST be authenticated.



Do you mean they should have an X-Auth-Token passed? Or an X-Trust-ID?


Maybe an X-Auth-Token, though in many cases I imagine it would be 
derived from a Trust. In any event, it should be something provided by 
Keystone because that is where authentication implementations belong in 
OpenStack.



The idea was that webhooks are secret (and should generally only be
passed around through automated systems, not with human interaction).
This is usually how webhooks work, and it's actually how they work now
in Heat -- even though there's a lot of posturing about signed requests
and so forth, in the end they are literally just secret URLs that give
you the capability to perform some operation (if you have the URL, you
don't need anything else to execute them). I think we should simplify
this to to just be a random revokable blob.


This is the weakest possible form of security - the whole secret gets 
passed on the wire for every request and logged in innumerable places. 
There's no protection at all against replay attacks (other than, 
hopefully, SSL).


A signature, a timestamp and a nonce all seem like prudent precautions 
to add.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-15 Thread Steven Hardy
On Fri, Nov 15, 2013 at 11:16:19AM +0100, Zane Bitter wrote:
 On 14/11/13 19:58, Christopher Armstrong wrote:
 On 14/11/13 18:51, Randall Burt wrote:
 
 Perhaps, but I also miss important information as a legitimate
 caller as
 to whether or not my scaling action actually happened or I've been a
 little too aggressive with my curl commands. The fact that I get
 anything other than 404 (which the spec returns if its not a
 legit hook)
 means I've found *something* and can simply call it endlessly in
 a loop
 causing havoc. Perhaps the web hooks *should* be authenticated? This
 seems like a pretty large hole to me, especially if I can max
 someone's
 resources by guessing the right url.
 
 
 Web hooks MUST be authenticated.
 
 
 
 Do you mean they should have an X-Auth-Token passed? Or an X-Trust-ID?
 
 Maybe an X-Auth-Token, though in many cases I imagine it would be
 derived from a Trust. In any event, it should be something provided
 by Keystone because that is where authentication implementations
 belong in OpenStack.
 
 The idea was that webhooks are secret (and should generally only be
 passed around through automated systems, not with human interaction).
 This is usually how webhooks work, and it's actually how they work now
 in Heat -- even though there's a lot of posturing about signed requests
 and so forth, in the end they are literally just secret URLs that give
 you the capability to perform some operation (if you have the URL, you
 don't need anything else to execute them). I think we should simplify
 this to to just be a random revokable blob.
 
 This is the weakest possible form of security - the whole secret
 gets passed on the wire for every request and logged in innumerable
 places. There's no protection at all against replay attacks (other
 than, hopefully, SSL).
 
 A signature, a timestamp and a nonce all seem like prudent
 precautions to add.

So maybe we just use tokens and drop the whole pre-signed URL thing -
ceilometer can obtain a token, and call the AS API via the normal method
(i.e a call to a client lib, providing a token)

The main case where tokens are inconvenient is in-instance, where we'll
have to refresh them before they expire (24 hours by default), but
in-instance agents won't talk to the AS API directly, so why don't we just
simplify the discussion and say the AS API has to use normal token auth?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-15 Thread Zane Bitter

On 15/11/13 11:58, Steven Hardy wrote:

On Fri, Nov 15, 2013 at 11:16:19AM +0100, Zane Bitter wrote:

On 14/11/13 19:58, Christopher Armstrong wrote:

On 14/11/13 18:51, Randall Burt wrote:

Perhaps, but I also miss important information as a legitimate
caller as
to whether or not my scaling action actually happened or I've been a
little too aggressive with my curl commands. The fact that I get
anything other than 404 (which the spec returns if its not a
legit hook)
means I've found *something* and can simply call it endlessly in
a loop
causing havoc. Perhaps the web hooks *should* be authenticated? This
seems like a pretty large hole to me, especially if I can max
someone's
resources by guessing the right url.


Web hooks MUST be authenticated.



Do you mean they should have an X-Auth-Token passed? Or an X-Trust-ID?


Maybe an X-Auth-Token, though in many cases I imagine it would be
derived from a Trust. In any event, it should be something provided
by Keystone because that is where authentication implementations
belong in OpenStack.


The idea was that webhooks are secret (and should generally only be
passed around through automated systems, not with human interaction).
This is usually how webhooks work, and it's actually how they work now
in Heat -- even though there's a lot of posturing about signed requests
and so forth, in the end they are literally just secret URLs that give
you the capability to perform some operation (if you have the URL, you
don't need anything else to execute them). I think we should simplify
this to to just be a random revokable blob.


This is the weakest possible form of security - the whole secret
gets passed on the wire for every request and logged in innumerable
places. There's no protection at all against replay attacks (other
than, hopefully, SSL).

A signature, a timestamp and a nonce all seem like prudent
precautions to add.


So maybe we just use tokens and drop the whole pre-signed URL thing -
ceilometer can obtain a token, and call the AS API via the normal method
(i.e a call to a client lib, providing a token)

The main case where tokens are inconvenient is in-instance, where we'll
have to refresh them before they expire (24 hours by default), but
in-instance agents won't talk to the AS API directly, so why don't we just
simplify the discussion and say the AS API has to use normal token auth?


+1. Not having PKI sucks, but using the standard Keystone mechanisms 
like this leaves the autoscaling API no more exposed than any other in 
OpenStack.


I guess Ceilometer would have to acquire a trust from the user in order 
to generate tokens for this callback?


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-15 Thread Christopher Armstrong
On Fri, Nov 15, 2013 at 4:16 AM, Zane Bitter zbit...@redhat.com wrote:

 On 14/11/13 19:58, Christopher Armstrong wrote:

 On Thu, Nov 14, 2013 at 10:44 AM, Zane Bitter zbit...@redhat.com
 mailto:zbit...@redhat.com wrote:

 On 14/11/13 18:51, Randall Burt wrote:

 Perhaps, but I also miss important information as a legitimate
 caller as
 to whether or not my scaling action actually happened or I've
 been a
 little too aggressive with my curl commands. The fact that I get
 anything other than 404 (which the spec returns if its not a
 legit hook)
 means I've found *something* and can simply call it endlessly in
 a loop
 causing havoc. Perhaps the web hooks *should* be authenticated?
 This
 seems like a pretty large hole to me, especially if I can max
 someone's
 resources by guessing the right url.


 Web hooks MUST be authenticated.



 Do you mean they should have an X-Auth-Token passed? Or an X-Trust-ID?


 Maybe an X-Auth-Token, though in many cases I imagine it would be derived
 from a Trust. In any event, it should be something provided by Keystone
 because that is where authentication implementations belong in OpenStack.


  The idea was that webhooks are secret (and should generally only be
 passed around through automated systems, not with human interaction).
 This is usually how webhooks work, and it's actually how they work now
 in Heat -- even though there's a lot of posturing about signed requests
 and so forth, in the end they are literally just secret URLs that give
 you the capability to perform some operation (if you have the URL, you
 don't need anything else to execute them). I think we should simplify
 this to to just be a random revokable blob.


 This is the weakest possible form of security - the whole secret gets
 passed on the wire for every request and logged in innumerable places.
 There's no protection at all against replay attacks (other than, hopefully,
 SSL).

 A signature, a timestamp and a nonce all seem like prudent precautions to
 add.


I can get behind the idea of adding timestamp and nonce + signature for the
webhooks, as long as they're handled better than they are now :) i.e., the
webhook handler should assert that the timestamp is recent and
non-repeated. This probably means storing stuff in MySQL (or a centralized
in-memory DB). My understanding is that even though we have signed URLs for
webhooks in the current Heat autoscaling system, they're effectively just
static blobs.

My original proposal for simple webhooks was based entirely around the idea
that the current stuff is too complex, and offers no additional security
over a random string jammed into a URL. (signing a static random string
doesn't make it more guessable than the original random string...)

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-15 Thread Zane Bitter

On 14/11/13 19:53, Christopher Armstrong wrote:

Thanks for the comments, Zane.


On Thu, Nov 14, 2013 at 9:56 AM, Zane Bitter zbit...@redhat.com
mailto:zbit...@redhat.com wrote:
A few comments...

#1 thing is that the launch configuration needs to be somehow
represented. In general we want the launch configuration to be a
provider template, but we'll want to create a shortcut for the
obvious case of just scaling servers. Maybe we pass a provider
template (or URL) as well as parameters, and the former is optional.


I'm a little unclear as to what point you're making here. Right now, the
launch configuration is specified in the scaling group by the
resources property of the request json body. It's not a full template,
but just a snippet of a set of resources you want scaled.


Right, and this has a couple of effects, particularly for Heat:
1) You can't share a single launch config between scaling groups - this 
hurts composability of templates.
2) The AWS::EC2::Launch config wouldn't correspond to a real API, so we 
would have to continue to implement it using the current hack.


Fixing (2) is one of my top two reasons for even having an autoscaling API.


As an aside, maybe we should replace this with a singlular resource
and allow people to use a Stack resource if they want to represent
multiple resources.

I guess we can have a simpler API for using an old-style,
server-specific launch configuration, but I am skeptical of the
benefit, since specifying a single Instance resource is pretty simple.


See my other message for implementation suggestion.


I'm not sure I understand the webhooks part... webhook-exec is the
thing that e.g. Ceilometer will use to signal an alarm, right? Why
is it not called something like
/groups/{group_id}/policies/{__policy_id}/alarm ? (Maybe because it
requires different auth middleware? Or does it?)


Mostly because it's unnecessary. The idea was to generate a secret,
opaque, revokable ID that maps to the specific policy.
Â


Seems like it would be nice to look at the webhook URL and be able to 
figure out what it's for. I disagree that a secret URL is sufficient 
here, but even if it were it could be something like:


/groups/{group_id}/policies/{policy_name}/alarm/{secret_code}



And the other ones are setting up the notification actions? Can we
call them notifications instead of webhooks? (After all, in the
future we will probably want to add Marconi support, and maybe even
Mistral support.) And why are these attached to the policy? Isn't
the notification connected to changes in the group, rather than
anything specific to the policy? Am I misunderstanding how this
works? What is the difference between 'uri' and 'capability_uri'?



Policies represent ways to change a group (add +5% to this group).
Webhooks execute policies.

A capability URI is a URI which represents a capability to do
something all by itself. capability_uri would be the webhook-exec thing.
The regular URI would be the thing under
/groups/{group_id}/policies/{policy_id}/webhooks. That URI needs to
exist so you can perform the DELETE operation on it. (but you can't
DELETE the capability_uri, only POST to it to execute the policy).
Â


Oh, I was misunderstanding... this doesn't set up the notifications, it 
allows you to create and revoke multiple webhook URLs for the alarms.


I have reservations about this whole area.


I'll think more about webhooks vs notifications.


Seems like a way to configure the notifications is missing altogether.

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-15 Thread Mike Spreitzer
Zane Bitter zbit...@redhat.com wrote on 11/14/2013 12:56:22 PM:

 ...
 My 2c: the way I designed the Heat API was such that extant stacks can 
 be addressed uniquely by name. Humans are pretty good with names, not so 

 much with 128 bit numbers. The consequences of this for the design were:
   - names must be unique per-tenant
   - the tenant-id appears in the endpoint URL
 
 However, the rest of OpenStack seems to have gone in a direction where 
 the name is really just a comment field, everything is addressed only 
 by UUID. A consequence of this is that it renders the tenant-id in the 
 URL pointless, so many projects are removing it.
 
 Unfortunately, one result is that if you create a resource and e.g. miss 

 the Created response for any reason and thus do not have the UUID, there 

 is now no safe, general automated way to delete it again. (There are 
 obviously heuristics you could try.) To solve this problem, there is a 
 proposal floating about for clients to provide another unique ID when 
 making the request, which would render a retry of the request 
 idempotent. That's insufficient, though, because if you decide to roll 
 back instead of retry you still need a way to delete using only this ID.
 
 So basically, that design sucks for both humans (who have to remember 
 UUIDs instead of names) and machines (Heat). However, it appears that I 
 am in a minority of one on this point, so take it with a grain of salt.

I have been thinking about this too.  I tried to convince my group that we 
should give up on assigning UUIDs in our system, and rather make it the 
client's problem to assign the unique ID of what corresponds to a Heat 
stack.  Just use one unique ID, supplied by the client.  Simple, clean, 
and it hurts most peoples' heads.  Biggest concern was: how are the 
clients going to be sure they do not mess up?  That does not seem tough to 
me.  However, there is a less demanding approach.  Introduce an operation 
in the API that allocates the stack's unique ID.  It does nothing else for 
a stack, just returns the unique ID.  If the reply makes it back into the 
client's persistent store, all is well.  If not, the only thing that has 
been wasted is an ID; an unused ID can be reaped after a satisfyingly long 
period of time --- and if even that was too soon then the problem is 
easily detected and recovered from.


 ... webhooks ...

So if we want to do this right, it has to go something like the following, 
right?  The client has to create a trust for the thing that is going to 
invoke the webhook; using that, the webhook invocation can be properly 
authorized.

Regards,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling.
It's written in API-Blueprint format (which is a simple subset of Markdown)
and provides schemas for inputs and outputs using JSON-Schema. The source
document is currently at
https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the
token?
- how webhooks are done (though this shouldn't affect the API too much;
they're basically just opaque)

Please read and comment :)


-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt

On Nov 14, 2013, at 10:19 AM, Christopher Armstrong 
chris.armstr...@rackspace.commailto:chris.armstr...@rackspace.com
 wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling. It's 
written in API-Blueprint format (which is a simple subset of Markdown) and 
provides schemas for inputs and outputs using JSON-Schema. The source document 
is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the token?

This may be moot considering the latest from the keystone devs regarding token 
scoping to domains/projects. Basically, a token is scoped to a single 
domain/project from what I understood, so domain/project is implicit. I'm still 
of the mind that the tenant doesn't belong so early in the URI, since we can 
already surmise the actual tenant from the authentication context, but that's 
something for Openstack at large to agree on.

- how webhooks are done (though this shouldn't affect the API too much; they're 
basically just opaque)

Please read and comment :)


--
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt
Good stuff! Some questions/comments:

If web hooks are associated with policies and policies are independent 
entities, how does a web hook specify the scaling group to act on? Does calling 
the web hook activate the policy on every associated scaling group?

Regarding web hook execution and cool down, I think the response should be 
something like 307 if the hook is on cool down with an appropriate retry-after 
header.

On Nov 14, 2013, at 10:57 AM, Randall Burt 
randall.b...@rackspace.commailto:randall.b...@rackspace.com
 wrote:


On Nov 14, 2013, at 10:19 AM, Christopher Armstrong 
chris.armstr...@rackspace.commailto:chris.armstr...@rackspace.com
 wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling. It's 
written in API-Blueprint format (which is a simple subset of Markdown) and 
provides schemas for inputs and outputs using JSON-Schema. The source document 
is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the token?

This may be moot considering the latest from the keystone devs regarding token 
scoping to domains/projects. Basically, a token is scoped to a single 
domain/project from what I understood, so domain/project is implicit. I'm still 
of the mind that the tenant doesn't belong so early in the URI, since we can 
already surmise the actual tenant from the authentication context, but that's 
something for Openstack at large to agree on.

- how webhooks are done (though this shouldn't affect the API too much; they're 
basically just opaque)

Please read and comment :)


--
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
randall.b...@rackspace.comwrote:

  Good stuff! Some questions/comments:

  If web hooks are associated with policies and policies are independent
 entities, how does a web hook specify the scaling group to act on? Does
 calling the web hook activate the policy on every associated scaling group?


Not sure what you mean by policies are independent entities. You may have
missed that the policy resource lives hierarchically under the group
resource. Policies are strictly associated with one scaling group, so when
a policy is executed (via a webhook), it's acting on the scaling group that
the policy is associated with.



  Regarding web hook execution and cool down, I think the response should
 be something like 307 if the hook is on cool down with an appropriate
 retry-after header.


Indicating whether a webhook was found or whether it actually executed
anything may be an information leak, since webhook URLs require no
additional authentication other than knowledge of the URL itself.
Responding with only 202 means that people won't be able to guess at random
URLs and know when they've found one.



  On Nov 14, 2013, at 10:57 AM, Randall Burt randall.b...@rackspace.com
  wrote:


  On Nov 14, 2013, at 10:19 AM, Christopher Armstrong 
 chris.armstr...@rackspace.com
  wrote:

  http://docs.heatautoscale.apiary.io/

  I've thrown together a rough sketch of the proposed API for autoscaling.
 It's written in API-Blueprint format (which is a simple subset of Markdown)
 and provides schemas for inputs and outputs using JSON-Schema. The source
 document is currently at
 https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


  Things we still need to figure out:

  - how to scope projects/domains. put them in the URL? get them from the
 token?


  This may be moot considering the latest from the keystone devs regarding
 token scoping to domains/projects. Basically, a token is scoped to a single
 domain/project from what I understood, so domain/project is implicit. I'm
 still of the mind that the tenant doesn't belong so early in the URI, since
 we can already surmise the actual tenant from the authentication context,
 but that's something for Openstack at large to agree on.

  - how webhooks are done (though this shouldn't affect the API too much;
 they're basically just opaque)

  Please read and comment :)


  --
  IRC: radix
 Christopher Armstrong
 Rackspace
   ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt

On Nov 14, 2013, at 11:30 AM, Christopher Armstrong 
chris.armstr...@rackspace.commailto:chris.armstr...@rackspace.com
 wrote:

On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt 
randall.b...@rackspace.commailto:randall.b...@rackspace.com wrote:
Good stuff! Some questions/comments:

If web hooks are associated with policies and policies are independent 
entities, how does a web hook specify the scaling group to act on? Does calling 
the web hook activate the policy on every associated scaling group?


Not sure what you mean by policies are independent entities. You may have 
missed that the policy resource lives hierarchically under the group resource. 
Policies are strictly associated with one scaling group, so when a policy is 
executed (via a webhook), it's acting on the scaling group that the policy is 
associated with.

Whoops. Yeah, I missed that.



Regarding web hook execution and cool down, I think the response should be 
something like 307 if the hook is on cool down with an appropriate retry-after 
header.

Indicating whether a webhook was found or whether it actually executed anything 
may be an information leak, since webhook URLs require no additional 
authentication other than knowledge of the URL itself. Responding with only 202 
means that people won't be able to guess at random URLs and know when they've 
found one.

Perhaps, but I also miss important information as a legitimate caller as to 
whether or not my scaling action actually happened or I've been a little too 
aggressive with my curl commands. The fact that I get anything other than 404 
(which the spec returns if its not a legit hook) means I've found *something* 
and can simply call it endlessly in a loop causing havoc. Perhaps the web hooks 
*should* be authenticated? This seems like a pretty large hole to me, 
especially if I can max someone's resources by guessing the right url.


On Nov 14, 2013, at 10:57 AM, Randall Burt 
randall.b...@rackspace.commailto:randall.b...@rackspace.com
 wrote:


On Nov 14, 2013, at 10:19 AM, Christopher Armstrong 
chris.armstr...@rackspace.commailto:chris.armstr...@rackspace.com
 wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling. It's 
written in API-Blueprint format (which is a simple subset of Markdown) and 
provides schemas for inputs and outputs using JSON-Schema. The source document 
is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the token?

This may be moot considering the latest from the keystone devs regarding token 
scoping to domains/projects. Basically, a token is scoped to a single 
domain/project from what I understood, so domain/project is implicit. I'm still 
of the mind that the tenant doesn't belong so early in the URI, since we can 
already surmise the actual tenant from the authentication context, but that's 
something for Openstack at large to agree on.

- how webhooks are done (though this shouldn't affect the API too much; they're 
basically just opaque)

Please read and comment :)


--
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Zane Bitter

On 14/11/13 17:19, Christopher Armstrong wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling.
It's written in API-Blueprint format (which is a simple subset of
Markdown) and provides schemas for inputs and outputs using JSON-Schema.
The source document is currently atÂ
https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the
token?
- how webhooks are done (though this shouldn't affect the API too much;
they're basically just opaque)


My 2c: the way I designed the Heat API was such that extant stacks can 
be addressed uniquely by name. Humans are pretty good with names, not so 
much with 128 bit numbers. The consequences of this for the design were:

 - names must be unique per-tenant
 - the tenant-id appears in the endpoint URL

However, the rest of OpenStack seems to have gone in a direction where 
the name is really just a comment field, everything is addressed only 
by UUID. A consequence of this is that it renders the tenant-id in the 
URL pointless, so many projects are removing it.


Unfortunately, one result is that if you create a resource and e.g. miss 
the Created response for any reason and thus do not have the UUID, there 
is now no safe, general automated way to delete it again. (There are 
obviously heuristics you could try.) To solve this problem, there is a 
proposal floating about for clients to provide another unique ID when 
making the request, which would render a retry of the request 
idempotent. That's insufficient, though, because if you decide to roll 
back instead of retry you still need a way to delete using only this ID.


So basically, that design sucks for both humans (who have to remember 
UUIDs instead of names) and machines (Heat). However, it appears that I 
am in a minority of one on this point, so take it with a grain of salt.



Please read and comment :)


A few comments...

#1 thing is that the launch configuration needs to be somehow 
represented. In general we want the launch configuration to be a 
provider template, but we'll want to create a shortcut for the obvious 
case of just scaling servers. Maybe we pass a provider template (or URL) 
as well as parameters, and the former is optional.


Successful creates should return 201 Created, not 200 OK.

Responses from creates should include the UUID as well as the URI. 
(Getting into minor details here.)


Policies are scoped within groups, so do they need a unique id or would 
a name do?


I'm not sure I understand the webhooks part... webhook-exec is the thing 
that e.g. Ceilometer will use to signal an alarm, right? Why is it not 
called something like /groups/{group_id}/policies/{policy_id}/alarm ? 
(Maybe because it requires different auth middleware? Or does it?)


And the other ones are setting up the notification actions? Can we call 
them notifications instead of webhooks? (After all, in the future we 
will probably want to add Marconi support, and maybe even Mistral 
support.) And why are these attached to the policy? Isn't the 
notification connected to changes in the group, rather than anything 
specific to the policy? Am I misunderstanding how this works? What is 
the difference between 'uri' and 'capability_uri'?


You need to define PUT/PATCH methods for most of these also, obviously 
(I assume you just want to get this part nailed down first).


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Zane Bitter

On 14/11/13 18:51, Randall Burt wrote:


On Nov 14, 2013, at 11:30 AM, Christopher Armstrong
chris.armstr...@rackspace.com mailto:chris.armstr...@rackspace.com
  wrote:


On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
randall.b...@rackspace.com mailto:randall.b...@rackspace.com wrote:
Regarding web hook execution and cool down, I think the response
should be something like 307 if the hook is on cool down with an
appropriate retry-after header.


I strongly disagree with this even ignoring the security issue mentioned 
below. Being in the cooldown period is NOT an error, and the caller 
should absolutely NOT try again later - the request has been received 
and correctly acted upon (by doing nothing).



Indicating whether a webhook was found or whether it actually executed
anything may be an information leak, since webhook URLs require no
additional authentication other than knowledge of the URL itself.
Responding with only 202 means that people won't be able to guess at
random URLs and know when they've found one.


Perhaps, but I also miss important information as a legitimate caller as
to whether or not my scaling action actually happened or I've been a
little too aggressive with my curl commands. The fact that I get
anything other than 404 (which the spec returns if its not a legit hook)
means I've found *something* and can simply call it endlessly in a loop
causing havoc. Perhaps the web hooks *should* be authenticated? This
seems like a pretty large hole to me, especially if I can max someone's
resources by guessing the right url.


Web hooks MUST be authenticated.

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Georgy Okrokvertskhov
Hi,

It would be great if API specs contain a list of attributes\parameters one
can pass during group creation. I believe Zane already asked about
LaunchConfig, but I think new autoscaling API creation was specifically
designed to move from limited AWS ElasticLB to something with more broad
features. There is a BP I submitted while ago
https://blueprints.launchpad.net/heat/+spec/autoscaling-instancse-typization.
We discussed it in IRC chat with heat team and we got to the conclusion
that this will be supported in new autoscaling API. Probably it is already
supported, but it is quite hard to figure this out from the existing API
specs without examples.

Thanks
Georgy


On Thu, Nov 14, 2013 at 9:56 AM, Zane Bitter zbit...@redhat.com wrote:

 On 14/11/13 17:19, Christopher Armstrong wrote:

 http://docs.heatautoscale.apiary.io/

 I've thrown together a rough sketch of the proposed API for autoscaling.
 It's written in API-Blueprint format (which is a simple subset of
 Markdown) and provides schemas for inputs and outputs using JSON-Schema.
 The source document is currently atÂ

 https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


 Things we still need to figure out:

 - how to scope projects/domains. put them in the URL? get them from the
 token?
 - how webhooks are done (though this shouldn't affect the API too much;
 they're basically just opaque)


 My 2c: the way I designed the Heat API was such that extant stacks can be
 addressed uniquely by name. Humans are pretty good with names, not so much
 with 128 bit numbers. The consequences of this for the design were:
  - names must be unique per-tenant
  - the tenant-id appears in the endpoint URL

 However, the rest of OpenStack seems to have gone in a direction where the
 name is really just a comment field, everything is addressed only by
 UUID. A consequence of this is that it renders the tenant-id in the URL
 pointless, so many projects are removing it.

 Unfortunately, one result is that if you create a resource and e.g. miss
 the Created response for any reason and thus do not have the UUID, there is
 now no safe, general automated way to delete it again. (There are obviously
 heuristics you could try.) To solve this problem, there is a proposal
 floating about for clients to provide another unique ID when making the
 request, which would render a retry of the request idempotent. That's
 insufficient, though, because if you decide to roll back instead of retry
 you still need a way to delete using only this ID.

 So basically, that design sucks for both humans (who have to remember
 UUIDs instead of names) and machines (Heat). However, it appears that I am
 in a minority of one on this point, so take it with a grain of salt.


  Please read and comment :)


 A few comments...

 #1 thing is that the launch configuration needs to be somehow represented.
 In general we want the launch configuration to be a provider template, but
 we'll want to create a shortcut for the obvious case of just scaling
 servers. Maybe we pass a provider template (or URL) as well as parameters,
 and the former is optional.

 Successful creates should return 201 Created, not 200 OK.

 Responses from creates should include the UUID as well as the URI.
 (Getting into minor details here.)

 Policies are scoped within groups, so do they need a unique id or would a
 name do?

 I'm not sure I understand the webhooks part... webhook-exec is the thing
 that e.g. Ceilometer will use to signal an alarm, right? Why is it not
 called something like /groups/{group_id}/policies/{policy_id}/alarm ?
 (Maybe because it requires different auth middleware? Or does it?)

 And the other ones are setting up the notification actions? Can we call
 them notifications instead of webhooks? (After all, in the future we will
 probably want to add Marconi support, and maybe even Mistral support.) And
 why are these attached to the policy? Isn't the notification connected to
 changes in the group, rather than anything specific to the policy? Am I
 misunderstanding how this works? What is the difference between 'uri' and
 'capability_uri'?

 You need to define PUT/PATCH methods for most of these also, obviously (I
 assume you just want to get this part nailed down first).

 cheers,
 Zane.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
Thanks for the comments, Zane.


On Thu, Nov 14, 2013 at 9:56 AM, Zane Bitter zbit...@redhat.com wrote:

 On 14/11/13 17:19, Christopher Armstrong wrote:

 http://docs.heatautoscale.apiary.io/

 I've thrown together a rough sketch of the proposed API for autoscaling.
 It's written in API-Blueprint format (which is a simple subset of
 Markdown) and provides schemas for inputs and outputs using JSON-Schema.
 The source document is currently atÂ

 https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


 Things we still need to figure out:

 - how to scope projects/domains. put them in the URL? get them from the
 token?
 - how webhooks are done (though this shouldn't affect the API too much;
 they're basically just opaque)


 My 2c: the way I designed the Heat API was such that extant stacks can be
 addressed uniquely by name. Humans are pretty good with names, not so much
 with 128 bit numbers. The consequences of this for the design were:
  - names must be unique per-tenant
  - the tenant-id appears in the endpoint URL

 However, the rest of OpenStack seems to have gone in a direction where the
 name is really just a comment field, everything is addressed only by
 UUID. A consequence of this is that it renders the tenant-id in the URL
 pointless, so many projects are removing it.

 Unfortunately, one result is that if you create a resource and e.g. miss
 the Created response for any reason and thus do not have the UUID, there is
 now no safe, general automated way to delete it again. (There are obviously
 heuristics you could try.) To solve this problem, there is a proposal
 floating about for clients to provide another unique ID when making the
 request, which would render a retry of the request idempotent. That's
 insufficient, though, because if you decide to roll back instead of retry
 you still need a way to delete using only this ID.

 So basically, that design sucks for both humans (who have to remember
 UUIDs instead of names) and machines (Heat). However, it appears that I am
 in a minority of one on this point, so take it with a grain of salt.


  Please read and comment :)


 A few comments...

 #1 thing is that the launch configuration needs to be somehow represented.
 In general we want the launch configuration to be a provider template, but
 we'll want to create a shortcut for the obvious case of just scaling
 servers. Maybe we pass a provider template (or URL) as well as parameters,
 and the former is optional.


I'm a little unclear as to what point you're making here. Right now, the
launch configuration is specified in the scaling group by the resources
property of the request json body. It's not a full template, but just a
snippet of a set of resources you want scaled.

As an aside, maybe we should replace this with a singlular resource and
allow people to use a Stack resource if they want to represent multiple
resources.

I guess we can have a simpler API for using an old-style, server-specific
launch configuration, but I am skeptical of the benefit, since specifying
a single Instance resource is pretty simple.



 Successful creates should return 201 Created, not 200 OK.


Okay, I'll update that. I think I also forgot to specify some success
responses for things that need them.



 Responses from creates should include the UUID as well as the URI.
 (Getting into minor details here.)


Okay.


 Policies are scoped within groups, so do they need a unique id or would a
 name do?


I guess we could get rid of the ID and only have a name, what do other
people think?



 I'm not sure I understand the webhooks part... webhook-exec is the thing
 that e.g. Ceilometer will use to signal an alarm, right? Why is it not
 called something like /groups/{group_id}/policies/{policy_id}/alarm ?
 (Maybe because it requires different auth middleware? Or does it?)


Mostly because it's unnecessary. The idea was to generate a secret, opaque,
revokable ID that maps to the specific policy.



 And the other ones are setting up the notification actions? Can we call
 them notifications instead of webhooks? (After all, in the future we will
 probably want to add Marconi support, and maybe even Mistral support.) And
 why are these attached to the policy? Isn't the notification connected to
 changes in the group, rather than anything specific to the policy? Am I
 misunderstanding how this works? What is the difference between 'uri' and
 'capability_uri'?



Policies represent ways to change a group (add +5% to this group).
Webhooks execute policies.

A capability URI is a URI which represents a capability to do something
all by itself. capability_uri would be the webhook-exec thing. The regular
URI would be the thing under
/groups/{group_id}/policies/{policy_id}/webhooks. That URI needs to exist
so you can perform the DELETE operation on it. (but you can't DELETE the
capability_uri, only POST to it to execute the policy).


I'll think more about webhooks vs notifications.


 You need to define PUT/PATCH 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 10:46 AM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi,

 It would be great if API specs contain a list of attributes\parameters one
 can pass during group creation. I believe Zane already asked about
 LaunchConfig, but I think new autoscaling API creation was specifically
 designed to move from limited AWS ElasticLB to something with more broad
 features. There is a BP I submitted while ago
 https://blueprints.launchpad.net/heat/+spec/autoscaling-instancse-typization.
 We discussed it in IRC chat with heat team and we got to the conclusion
 that this will be supported in new autoscaling API. Probably it is already
 supported, but it is quite hard to figure this out from the existing API
 specs without examples.



The API spec does contain a list of attributes/parameters that you can pass
during group creation (and all the other operations) -- see the Schema
sections under each. In case you didn't notice, you can click on each
action to expand details under it.



 Thanks
 Georgy


 On Thu, Nov 14, 2013 at 9:56 AM, Zane Bitter zbit...@redhat.com wrote:

 On 14/11/13 17:19, Christopher Armstrong wrote:

 http://docs.heatautoscale.apiary.io/

 I've thrown together a rough sketch of the proposed API for autoscaling.
 It's written in API-Blueprint format (which is a simple subset of
 Markdown) and provides schemas for inputs and outputs using JSON-Schema.
 The source document is currently atÂ

 https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


 Things we still need to figure out:

 - how to scope projects/domains. put them in the URL? get them from the
 token?
 - how webhooks are done (though this shouldn't affect the API too much;
 they're basically just opaque)


 My 2c: the way I designed the Heat API was such that extant stacks can be
 addressed uniquely by name. Humans are pretty good with names, not so much
 with 128 bit numbers. The consequences of this for the design were:
  - names must be unique per-tenant
  - the tenant-id appears in the endpoint URL

 However, the rest of OpenStack seems to have gone in a direction where
 the name is really just a comment field, everything is addressed only by
 UUID. A consequence of this is that it renders the tenant-id in the URL
 pointless, so many projects are removing it.

 Unfortunately, one result is that if you create a resource and e.g. miss
 the Created response for any reason and thus do not have the UUID, there is
 now no safe, general automated way to delete it again. (There are obviously
 heuristics you could try.) To solve this problem, there is a proposal
 floating about for clients to provide another unique ID when making the
 request, which would render a retry of the request idempotent. That's
 insufficient, though, because if you decide to roll back instead of retry
 you still need a way to delete using only this ID.

 So basically, that design sucks for both humans (who have to remember
 UUIDs instead of names) and machines (Heat). However, it appears that I am
 in a minority of one on this point, so take it with a grain of salt.


  Please read and comment :)


 A few comments...

 #1 thing is that the launch configuration needs to be somehow
 represented. In general we want the launch configuration to be a provider
 template, but we'll want to create a shortcut for the obvious case of just
 scaling servers. Maybe we pass a provider template (or URL) as well as
 parameters, and the former is optional.

 Successful creates should return 201 Created, not 200 OK.

 Responses from creates should include the UUID as well as the URI.
 (Getting into minor details here.)

 Policies are scoped within groups, so do they need a unique id or would a
 name do?

 I'm not sure I understand the webhooks part... webhook-exec is the thing
 that e.g. Ceilometer will use to signal an alarm, right? Why is it not
 called something like /groups/{group_id}/policies/{policy_id}/alarm ?
 (Maybe because it requires different auth middleware? Or does it?)

 And the other ones are setting up the notification actions? Can we call
 them notifications instead of webhooks? (After all, in the future we will
 probably want to add Marconi support, and maybe even Mistral support.) And
 why are these attached to the policy? Isn't the notification connected to
 changes in the group, rather than anything specific to the policy? Am I
 misunderstanding how this works? What is the difference between 'uri' and
 'capability_uri'?

 You need to define PUT/PATCH methods for most of these also, obviously (I
 assume you just want to get this part nailed down first).

 cheers,
 Zane.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Georgy Okrokvertskhov
 Technical Program Manager,
 Cloud and Infrastructure Services,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 10:44 AM, Zane Bitter zbit...@redhat.com wrote:

 On 14/11/13 18:51, Randall Burt wrote:


 On Nov 14, 2013, at 11:30 AM, Christopher Armstrong
 chris.armstr...@rackspace.com mailto:chris.armstr...@rackspace.com
   wrote:

  On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
 randall.b...@rackspace.com mailto:randall.b...@rackspace.com wrote:
 Regarding web hook execution and cool down, I think the response
 should be something like 307 if the hook is on cool down with an
 appropriate retry-after header.


 I strongly disagree with this even ignoring the security issue mentioned
 below. Being in the cooldown period is NOT an error, and the caller should
 absolutely NOT try again later - the request has been received and
 correctly acted upon (by doing nothing).


Yeah, I think it's fine to just let it always be 202. Also, they don't
actually return 404  when they don't exist -- I had that in an earlier
version of the spec but I thought I deleted it before posting it to this
list.




  Indicating whether a webhook was found or whether it actually executed
 anything may be an information leak, since webhook URLs require no
 additional authentication other than knowledge of the URL itself.
 Responding with only 202 means that people won't be able to guess at
 random URLs and know when they've found one.


 Perhaps, but I also miss important information as a legitimate caller as
 to whether or not my scaling action actually happened or I've been a
 little too aggressive with my curl commands. The fact that I get
 anything other than 404 (which the spec returns if its not a legit hook)
 means I've found *something* and can simply call it endlessly in a loop
 causing havoc. Perhaps the web hooks *should* be authenticated? This
 seems like a pretty large hole to me, especially if I can max someone's
 resources by guessing the right url.


 Web hooks MUST be authenticated.



Do you mean they should have an X-Auth-Token passed? Or an X-Trust-ID?

The idea was that webhooks are secret (and should generally only be passed
around through automated systems, not with human interaction). This is
usually how webhooks work, and it's actually how they work now in Heat --
even though there's a lot of posturing about signed requests and so forth,
in the end they are literally just secret URLs that give you the capability
to perform some operation (if you have the URL, you don't need anything
else to execute them). I think we should simplify this to to just be a
random revokable blob.

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt

On Nov 14, 2013, at 12:44 PM, Zane Bitter zbit...@redhat.com
 wrote:

 On 14/11/13 18:51, Randall Burt wrote:
 
 On Nov 14, 2013, at 11:30 AM, Christopher Armstrong
 chris.armstr...@rackspace.com mailto:chris.armstr...@rackspace.com
  wrote:
 
 On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
 randall.b...@rackspace.com mailto:randall.b...@rackspace.com wrote:
Regarding web hook execution and cool down, I think the response
should be something like 307 if the hook is on cool down with an
appropriate retry-after header.
 
 I strongly disagree with this even ignoring the security issue mentioned 
 below. Being in the cooldown period is NOT an error, and the caller should 
 absolutely NOT try again later - the request has been received and correctly 
 acted upon (by doing nothing).

But how do I know nothing was done? I may have very good reasons to re-scale 
outside of ceilometer or other mechanisms and absolutely SHOULD try again 
later.  As it stands, I have no way of knowing that my scaling action didn't 
happen without examining my physical resources. 307 is a legitimate response in 
these cases, but I'm certainly open to other suggestions.

 
 Indicating whether a webhook was found or whether it actually executed
 anything may be an information leak, since webhook URLs require no
 additional authentication other than knowledge of the URL itself.
 Responding with only 202 means that people won't be able to guess at
 random URLs and know when they've found one.
 
 Perhaps, but I also miss important information as a legitimate caller as
 to whether or not my scaling action actually happened or I've been a
 little too aggressive with my curl commands. The fact that I get
 anything other than 404 (which the spec returns if its not a legit hook)
 means I've found *something* and can simply call it endlessly in a loop
 causing havoc. Perhaps the web hooks *should* be authenticated? This
 seems like a pretty large hole to me, especially if I can max someone's
 resources by guessing the right url.
 
 Web hooks MUST be authenticated.
 
 cheers,
 Zane.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 11:00 AM, Randall Burt
randall.b...@rackspace.comwrote:


 On Nov 14, 2013, at 12:44 PM, Zane Bitter zbit...@redhat.com
  wrote:

  On 14/11/13 18:51, Randall Burt wrote:
 
  On Nov 14, 2013, at 11:30 AM, Christopher Armstrong
  chris.armstr...@rackspace.com mailto:chris.armstr...@rackspace.com
   wrote:
 
  On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
  randall.b...@rackspace.com mailto:randall.b...@rackspace.com
 wrote:
 Regarding web hook execution and cool down, I think the response
 should be something like 307 if the hook is on cool down with an
 appropriate retry-after header.
 
  I strongly disagree with this even ignoring the security issue mentioned
 below. Being in the cooldown period is NOT an error, and the caller should
 absolutely NOT try again later - the request has been received and
 correctly acted upon (by doing nothing).

 But how do I know nothing was done? I may have very good reasons to
 re-scale outside of ceilometer or other mechanisms and absolutely SHOULD
 try again later.  As it stands, I have no way of knowing that my scaling
 action didn't happen without examining my physical resources. 307 is a
 legitimate response in these cases, but I'm certainly open to other
 suggestions.


I agree there should be a way to find out what happened, but in a way that
requires a more strongly authenticated request. My preference would be to
use an audit log system (I haven't been keeping up with the current
thoughts on the design for Heat's event/log API) that can be inspected via
API.


-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 12:52 PM, Randall Burt
randall.b...@rackspace.comwrote:


  On Nov 14, 2013, at 1:05 PM, Christopher Armstrong 
 chris.armstr...@rackspace.com wrote:

  On Thu, Nov 14, 2013 at 11:00 AM, Randall Burt 
 randall.b...@rackspace.com wrote:


 On Nov 14, 2013, at 12:44 PM, Zane Bitter zbit...@redhat.com
  wrote:

  On 14/11/13 18:51, Randall Burt wrote:
 
  On Nov 14, 2013, at 11:30 AM, Christopher Armstrong
  chris.armstr...@rackspace.com mailto:chris.armstr...@rackspace.com
   wrote:
 
  On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
  randall.b...@rackspace.com mailto:randall.b...@rackspace.com
 wrote:
 Regarding web hook execution and cool down, I think the response
 should be something like 307 if the hook is on cool down with an
 appropriate retry-after header.
 
  I strongly disagree with this even ignoring the security issue
 mentioned below. Being in the cooldown period is NOT an error, and the
 caller should absolutely NOT try again later - the request has been
 received and correctly acted upon (by doing nothing).

  But how do I know nothing was done? I may have very good reasons to
 re-scale outside of ceilometer or other mechanisms and absolutely SHOULD
 try again later.  As it stands, I have no way of knowing that my scaling
 action didn't happen without examining my physical resources. 307 is a
 legitimate response in these cases, but I'm certainly open to other
 suggestions.


  I agree there should be a way to find out what happened, but in a way
 that requires a more strongly authenticated request. My preference would be
 to use an audit log system (I haven't been keeping up with the current
 thoughts on the design for Heat's event/log API) that can be inspected via
 API.


  Fair enough. I'm just thinking of folks who want to set this up but use
 external tools/monitoring solutions for the actual eventing. Having those
 tools grep through event logs seems a tad cumbersome, but I do understand
 the desire to make these un-authenticated secrets makes that terribly
 difficult.


Calling it unauthenticated might be a bit misleading; it's authenticated
by the knowledge of the URL (which implies a trust and policy to execute).


-- 
Christopher Armstrong
http://radix.twistedmatrix.com/
http://planet-if.com/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Angus Salkeld

On 14/11/13 10:19 -0600, Christopher Armstrong wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling.
It's written in API-Blueprint format (which is a simple subset of Markdown)
and provides schemas for inputs and outputs using JSON-Schema. The source
document is currently at
https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the
token?
- how webhooks are done (though this shouldn't affect the API too much;
they're basically just opaque)

Please read and comment :)



Hi Chistopher

In the group create object you have 'resources'.
Can you explain what you expect in there? I thought we talked at
summit about have a unit of scaling as a nested stack.

The thinking here was:
- this makes the new config stuff easier to scale (config get applied
  per scaling stack)
- you can potentially place notification resources in the scaling
  stack (think marconi message resource - on-create it sends a
  message)
- no need for a launchconfig
- you can place a LoadbalancerMember resource in the scaling stack
  that triggers the loadbalancer to add/remove it from the lb.

I guess what I am saying is I'd expect an api to a nested stack.

-Angus



--
IRC: radix
Christopher Armstrong
Rackspace



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 5:40 PM, Angus Salkeld asalk...@redhat.com wrote:

 On 14/11/13 10:19 -0600, Christopher Armstrong wrote:

 http://docs.heatautoscale.apiary.io/

 I've thrown together a rough sketch of the proposed API for autoscaling.
 It's written in API-Blueprint format (which is a simple subset of
 Markdown)
 and provides schemas for inputs and outputs using JSON-Schema. The source
 document is currently at
 https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


 Things we still need to figure out:

 - how to scope projects/domains. put them in the URL? get them from the
 token?
 - how webhooks are done (though this shouldn't affect the API too much;
 they're basically just opaque)

 Please read and comment :)


 Hi Chistopher

 In the group create object you have 'resources'.
 Can you explain what you expect in there? I thought we talked at
 summit about have a unit of scaling as a nested stack.

 The thinking here was:
 - this makes the new config stuff easier to scale (config get applied
   per scaling stack)
 - you can potentially place notification resources in the scaling
   stack (think marconi message resource - on-create it sends a
   message)
 - no need for a launchconfig
 - you can place a LoadbalancerMember resource in the scaling stack
   that triggers the loadbalancer to add/remove it from the lb.

 I guess what I am saying is I'd expect an api to a nested stack.


Well, what I'm thinking now is that instead of resources (a mapping of
resources), just have resource, which can be the template definition for
a single resource. This would then allow the user to specify a Stack
resource if they want to provide multiple resources. How does that sound?

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Steve Baker
On 11/15/2013 02:48 PM, Christopher Armstrong wrote:

 The thinking here was:
 - this makes the new config stuff easier to scale (config get applied
   per scaling stack)
 - you can potentially place notification resources in the scaling
   stack (think marconi message resource - on-create it sends a
   message)
 - no need for a launchconfig
 - you can place a LoadbalancerMember resource in the scaling stack
   that triggers the loadbalancer to add/remove it from the lb.

 I guess what I am saying is I'd expect an api to a nested stack.


 Well, what I'm thinking now is that instead of resources (a mapping
 of resources), just have resource, which can be the template
 definition for a single resource. This would then allow the user to
 specify a Stack resource if they want to provide multiple resources.
 How does that sound?
As long as this stack can be specified by URL *or* as an inline template
I would be happy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev