Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-21 Thread Steve Baker
On 11/21/2013 08:48 PM, Thomas Spatzier wrote:
 Excerpts from Steve Baker's message on 21.11.2013 00:00:47:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 21.11.2013 00:04
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/21/2013 11:41 AM, Clint Byrum wrote:
 Excerpts from Mike Spreitzer's message of 2013-11-20 13:46:25 -0800:
 Clint Byrum cl...@fewbar.com wrote on 11/19/2013 04:28:31 PM:
 snip

 I am worried about the explosion of possibilities that comes from trying
 to deal with all of the diffs possible inside an instance. If there is an
 actual REST interface for a thing, then yes, let's use that. For instance,
 if we are using docker, there is in fact a very straightforward way to
 say remove entity X. If we are using packages we have the same thing.
 However, if we are just trying to write chef configurations, we have to
 write reverse chef configurations.

 What I meant to convey is let's give this piece of the interface a lot of
 thought, not this is wrong to even have. Given a couple of days now,
 I think we do need apply and remove. We should also provide really
 solid example templates for this concept.
 You're right, I'm already starting to see issues with my current approach.
 This smells like a new blueprint. I'll remove it from the scope of the
 current software config work and raise a blueprint to track remove-config.

 So I read thru those recent discussions and in parallel also started to
 update the design wiki. BTW, nanjj renamed the wiki to [1] (but also made a
 redirect from the previous ...-WIP page) and linked it as spec to BP [2].

 I'll leave out the remove-config thing for now. While thinking about the
 overall picture, I came up with some other comments:

 I thought about the name SoftwareApplier some more and while it is clear
 what it does (it applies a software config to a server), the naming is not
 really consistent with all the other resources in Heat. Every other
 resource type is called after the thing that you get when the template gets
 instantiated (a Server, a FloatingIP, a VolumeAttachment etc). In
 case of SoftwareApplier what you actually get from a user perspective is a
 deployed instance of the piece of software described by a SoftwareConfig.
 Therefore, I was calling it SoftwareDeployment originally, because you get a
 software deployment (according to a config). Any comments on that name?
SoftwareDeployment is a better name, apart from those 3 extra letters.
I'll rename my POC.  Sorry nanjj, you'll need to rename them back ;)

 If we think this thru with respect to remove-config (even though this
 needs more thought), a SoftwareApplier (that thing itself) would not really
 go to state DELETE_IN_PROGRESS during an update. It is always there on the
 VM but the software it deploys gets deleted and then reapplied or
 whatever ...

 Now thinking more about update scenarios (which we can leave for an
 iteration after the initial deployment is working), in my mental model it
 would be more consistent to have information for handle_create,
 handle_delete, handle_update kinds of events all defined in the
 SoftwareConfig resource. A SoftwareConfig would represent configuration
 information for one specific piece of software, e.g. a web server. So it
 could provide all the information you need to install it, to uninstall it,
 or to update its config. By updating the SoftwareApplier's (or
 SoftwareDeployment's - my preferred name) state at runtime, the in-instance
 tools would grab the respective script or whatever and run it.

 So SoftwareConfig could look like:

 resources:
   my_webserver_config:
     type: OS::Heat::SoftwareConfig
     properties:
       http_port:
         type: number
       # some more config props
       config_create: http://www.example.com/my_scripts/webserver/install.sh
       config_delete: http://www.example.com/my_scripts/webserver/uninstall.sh
       config_update: http://www.example.com/my_scripts/webserver/applyupdate.sh


 At runtime, when a SoftwareApplier gets created, it looks for the
 'config_create' hook and triggers that automation. When it gets deleted, it
 looks for the 'config_delete' hook and so on. Only config_create is
 mandatory.
 I think that would also give us nice extensibility for future use cases.
 For example, Heat today does not support something like stop-stack or
 start-stack which would be pretty useful though. If we have it one day, we
 would just add a 'config_start' hook to the SoftwareConfig.
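To make the proposal concrete, here is a hypothetical SoftwareDeployment (the name suggested above) consuming such a config. This is only a sketch: the property names (config, server, input_values) and the my_server resource are assumptions for illustration, not a settled API:

```yaml
resources:
  # Hypothetical deployment resource wiring the config above to a server;
  # property names are illustrative only.
  deploy_webserver:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: my_webserver_config }
      server: { get_resource: my_server }   # assumed OS::Nova::Server elsewhere in the template
      input_values:
        http_port: 8080
```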


 [1]
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec
 [2] https://blueprints.launchpad.net/heat/+spec/hot-software-config

With the caveat that what we're discussing here is a future enhancement...

The problem I see with config_create/config_update/config_delete in a
single SoftwareConfig is that we probably can't assume these 3 scripts
consume the same inputs and produce the same outputs.

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-21 Thread Mike Spreitzer
Thomas Spatzier thomas.spatz...@de.ibm.com wrote on 11/21/2013 02:48:14 AM:
 ...
 Now thinking more about update scenarios (which we can leave for an
 iteration after the initial deployment is working),

I recommend thinking about UPDATE from the start.  We should have an 
implementation in which CREATE and UPDATE share as much mechanism as is 
reasonable, which requires thinking about UPDATE while designing CREATE.

 in my mental model it
 would be more consistent to have information for handle_create,
 handle_delete, handle_update kinds of events all defined in the
 SoftwareConfig resource.

+1 for putting these on the definition instead of the use; I also noted 
this earlier.

-1 for having an update method.  The orientation to idempotent 
forward-progress operations means that we need only one, which handles 
both CREATE and UPDATE.
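A sketch of that idempotent-only model, reusing the example config from earlier in the thread; the converge.sh URL is hypothetical, standing in for a script that can safely be re-run:

```yaml
resources:
  my_webserver_config:
    type: OS::Heat::SoftwareConfig
    properties:
      http_port:
        type: number
      # A single converging script serves both CREATE and UPDATE: re-running
      # it on stack UPDATE is safe, so no separate config_update hook is
      # needed. (converge.sh is an illustrative name, not from the thread.)
      config_create: http://www.example.com/my_scripts/webserver/converge.sh
```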

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-21 Thread Thomas Spatzier
Steve Baker sba...@redhat.com wrote on 21.11.2013 21:19:07:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 21.11.2013 21:25
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/21/2013 08:48 PM, Thomas Spatzier wrote:
  Excerpts from Steve Baker's message on 21.11.2013 00:00:47:
  From: Steve Baker sba...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 21.11.2013 00:04
  Subject: Re: [openstack-dev] [Heat] HOT software configuration
  refined after design summit discussions
snip
  I thought about the name SoftwareApplier some more and while it is clear
  what it does (it applies a software config to a server), the naming is not
  really consistent with all the other resources in Heat. Every other
  resource type is called after the thing that you get when the template
  gets instantiated (a Server, a FloatingIP, a VolumeAttachment etc). In
  case of SoftwareApplier what you actually get from a user perspective is
  a deployed instance of the piece of software described by a SoftwareConfig.
  Therefore, I was calling it SoftwareDeployment originally, because you
  get a software deployment (according to a config). Any comments on that name?
 SoftwareDeployment is a better name, apart from those 3 extra letters.
 I'll rename my POC.  Sorry nanjj, you'll need to rename them back ;)

Ok, I'll change the name back in the wiki :-)


  If we think this thru with respect to remove-config (even though this
  needs more thought), a SoftwareApplier (that thing itself) would not
  really go to state DELETE_IN_PROGRESS during an update. It is always
  there on the VM but the software it deploys gets deleted and then
  reapplied or whatever ...
 
  Now thinking more about update scenarios (which we can leave for an
  iteration after the initial deployment is working), in my mental model
  it would be more consistent to have information for handle_create,
  handle_delete, handle_update kinds of events all defined in the
  SoftwareConfig resource. A SoftwareConfig would represent configuration
  information for one specific piece of software, e.g. a web server. So it
  could provide all the information you need to install it, to uninstall
  it, or to update its config. By updating the SoftwareApplier's (or
  SoftwareDeployment's - my preferred name) state at runtime, the
  in-instance tools would grab the respective script or whatever and run it.
 
  So SoftwareConfig could look like:
 
  resources:
    my_webserver_config:
      type: OS::Heat::SoftwareConfig
      properties:
        http_port:
          type: number
        # some more config props
        config_create: http://www.example.com/my_scripts/webserver/install.sh
        config_delete: http://www.example.com/my_scripts/webserver/uninstall.sh
        config_update: http://www.example.com/my_scripts/webserver/applyupdate.sh
 
 
  At runtime, when a SoftwareApplier gets created, it looks for the
  'config_create' hook and triggers that automation. When it gets deleted,
  it looks for the 'config_delete' hook and so on. Only config_create is
  mandatory.
  I think that would also give us nice extensibility for future use cases.
  For example, Heat today does not support something like stop-stack or
  start-stack which would be pretty useful though. If we have it one day,
  we would just add a 'config_start' hook to the SoftwareConfig.
 
 
  [1] https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec
  [2] https://blueprints.launchpad.net/heat/+spec/hot-software-config
 
 With the caveat that what we're discussing here is a future enhancement...

 The problem I see with config_create/config_update/config_delete in a
 single SoftwareConfig is that we probably can't assume these 3 scripts
 consume the same inputs and produce the same outputs.

We could make it a convention that creators of software configs have to use
the same signature for the automation of create, delete etc. Or at least
input param names must be the same, while some pieces might take a subset
only. E.g. delete will probably take fewer inputs. This way we could have a
self-contained config.
As you said above, implementation-wise this is probably a future
enhancement, so once we have the config_create handling in place we could
just do a PoC patch on top and try it out.
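A sketch of that shared-signature convention, reusing the example config from earlier in the thread; the comments mark which inputs each script would actually consume:

```yaml
resources:
  my_webserver_config:
    type: OS::Heat::SoftwareConfig
    properties:
      http_port:
        type: number
      # Convention: both hooks are invoked with the same input parameters;
      # uninstall.sh would simply ignore http_port.
      config_create: http://www.example.com/my_scripts/webserver/install.sh
      config_delete: http://www.example.com/my_scripts/webserver/uninstall.sh
```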


 Another option might be to have a separate config/deployment pair for
 delete workloads, and a property on the deployment resource which states
 which phase the workload is executed in (create or delete).

Yes, this would be an option, but IMO a bit confusing for users. Especially
when I inspect a deployed stack, I would wonder why there are many
SoftwareDeployment resources hanging around for the same piece of software
installed on a server.


 I'd like to think that special treatment for config_update won't be
 needed at all, since CM tools are supposed to be good at converging to
 whatever you

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Clint Byrum
Excerpts from Thomas Spatzier's message of 2013-11-19 23:35:40 -0800:
 Excerpts from Steve Baker's message on 19.11.2013 21:40:54:
  From: Steve Baker sba...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 19.11.2013 21:43
  Subject: Re: [openstack-dev] [Heat] HOT software configuration
  refined after design summit discussions
 
 snip
  I think there needs to be a CM tool specific agent delivered to the server
  which os-collect-config invokes. This agent will transform the config
  data (input values, CM script, CM specific specialness) to a CM tool
  invocation.
 
  How to define and deliver this agent is the challenge. Some options are:
  1) install it as part of the image customization/bootstrapping (golden
  images or cloud-init)
  2) define a (mustache?) template in the SoftwareConfig which
  os-collect-config transforms into the agent script, which
  os-collect-config then executes
  3) a CM tool specific implementation of SoftwareApplier builds and
  delivers a complete agent to os-collect-config which executes it
 
  I may be leaning towards 3) at the moment. Hopefully any agent can be
  generated with a sufficiently sophisticated base SoftwareApplier type,
  plus maybe some richer intrinsic functions.
 
 This is a good summary of options; about the same we had in mind. And we were
 also leaning towards 3. Probably the approach we would take is to get a
 SoftwareApplier running for one CM tool (e.g. Chef), then look at another
 tool (base shell scripts), and then see what the generic parts are that can
 be factored into a base class.
 
   The POC I'm working on is actually backed by a REST API which does dumb
   (but structured) storage of SoftwareConfig and SoftwareApplier entities.
   This has some interesting implications for managing SoftwareConfig
   resources outside the context of the stack which uses them, but lets not
   worry too much about that *yet*.
   Sounds good. We are also defining some blueprints to break down the
   overall software config topic. We plan to share them later this week,
   and then we can consolidate with your plans and see how we can best
   join forces.
  
  
  At this point it would be very helpful to spec out how specific CM tools
  are invoked with given inputs, script, and CM tool specific options.
 
 That's our plan; and we would probably start with scripts and chef.
 
 
  Maybe if you start with shell scripts, cfn-init and chef then we can all
  contribute other CM tools like os-config-applier, puppet, ansible,
  saltstack.
 
  Hopefully by then my POC will at least be able to create resources, if
  not deliver some data to servers.
 
 We've been thinking about getting metadata to the in-instance parts on the
 server and whether the resources you are building can serve the purpose.
 I.e. pass an endpoint for the SoftwareConfig resources to the instance and
 let the instance query the metadata from the resource. Sounds like this is
 what you had in mind, so that would be a good point for integrating the
 work. In the meantime, we can think of some shortcuts.
 

Note that os-collect-config is intended to be a light-weight generic
in-instance agent to do exactly this. Watch for Metadata changes, and
feed them to an underlying tool in a predictable interface. I'd hope
that any of the appliers would mostly just configure os-collect-config
to run a wrapper that speaks os-collect-config's interface.

The interface is defined in the README:

https://pypi.python.org/pypi/os-collect-config

It is inevitable that we will extend os-collect-config to be able to
collect config data from whatever API these config applier resources
make available. I would suggest then that we not all go off and reinvent
os-collect-config for each applier, but rather enhance os-collect-config
as needed and write wrappers for the other config tools which implement
its interface.

os-apply-config already understands this interface for obvious reasons.

Bash scripts can use os-apply-config to extract individual values, as
you might see in some of the os-refresh-config scripts that are run as
part of tripleo. I don't think anything further is really needed there.

For chef, some kind of ohai plugin to read os-collect-config's collected
data would make sense.



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Steve Baker
On 11/20/2013 09:29 PM, Clint Byrum wrote:
 Excerpts from Thomas Spatzier's message of 2013-11-19 23:35:40 -0800:
 Excerpts from Steve Baker's message on 19.11.2013 21:40:54:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 19.11.2013 21:43
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 snip
 I think there needs to be a CM tool specific agent delivered to the server
 which os-collect-config invokes. This agent will transform the config
 data (input values, CM script, CM specific specialness) to a CM tool
 invocation.

 How to define and deliver this agent is the challenge. Some options are:
 1) install it as part of the image customization/bootstrapping (golden
 images or cloud-init)
 2) define a (mustache?) template in the SoftwareConfig which
 os-collect-config transforms into the agent script, which
 os-collect-config then executes
 3) a CM tool specific implementation of SoftwareApplier builds and
 delivers a complete agent to os-collect-config which executes it

 I may be leaning towards 3) at the moment. Hopefully any agent can be
 generated with a sufficiently sophisticated base SoftwareApplier type,
 plus maybe some richer intrinsic functions.
 This is a good summary of options; about the same we had in mind. And we were
 also leaning towards 3. Probably the approach we would take is to get a
 SoftwareApplier running for one CM tool (e.g. Chef), then look at another
 tool (base shell scripts), and then see what the generic parts are that can
 be factored into a base class.

 The POC I'm working on is actually backed by a REST API which does dumb
 (but structured) storage of SoftwareConfig and SoftwareApplier entities.
 This has some interesting implications for managing SoftwareConfig
 resources outside the context of the stack which uses them, but lets not
 worry too much about that *yet*.
 Sounds good. We are also defining some blueprints to break down the overall
 software config topic. We plan to share them later this week, and then we
 can consolidate with your plans and see how we can best join forces.
 Sounds good. We are also defining some blueprints to break down the
 overall
 software config topic. We plan to share them later this week, and then
 we
 can consolidate with your plans and see how we can best join forces.


 At this point it would be very helpful to spec out how specific CM tools
 are invoked with given inputs, script, and CM tool specific options.
 That's our plan; and we would probably start with scripts and chef.

 Maybe if you start with shell scripts, cfn-init and chef then we can all
 contribute other CM tools like os-config-applier, puppet, ansible,
 saltstack.

 Hopefully by then my POC will at least be able to create resources, if
 not deliver some data to servers.
 We've been thinking about getting metadata to the in-instance parts on the
 server and whether the resources you are building can serve the purpose.
 I.e. pass an endpoint for the SoftwareConfig resources to the instance and
 let the instance query the metadata from the resource. Sounds like this is
 what you had in mind, so that would be a good point for integrating the
 work. In the meantime, we can think of some shortcuts.

 Note that os-collect-config is intended to be a light-weight generic
 in-instance agent to do exactly this. Watch for Metadata changes, and
 feed them to an underlying tool in a predictable interface. I'd hope
 that any of the appliers would mostly just configure os-collect-config
 to run a wrapper that speaks os-collect-config's interface.

 The interface is defined in the README:

 https://pypi.python.org/pypi/os-collect-config

 It is inevitable that we will extend os-collect-config to be able to
 collect config data from whatever API these config applier resources
 make available. I would suggest then that we not all go off and reinvent
 os-collect-config for each applier, but rather enhance os-collect-config
 as needed and write wrappers for the other config tools which implement
 its interface.

 os-apply-config already understands this interface for obvious reasons.

 Bash scripts can use os-apply-config to extract individual values, as
 you might see in some of the os-refresh-config scripts that are run as
 part of tripleo. I don't think anything further is really needed there.

 For chef, some kind of ohai plugin to read os-collect-config's collected
 data would make sense.

I'd definitely start with occ as Clint outlines. It would be nice if occ
only had to be configured to poll metadata for the OS::Nova::Server to
fetch the aggregated data for the currently available SoftwareAppliers.
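Purely as an illustration of that polling idea, the aggregated per-server metadata might look something like the following. This structure is an assumption for discussion, not the actual os-collect-config wire format:

```yaml
# Hypothetical aggregated metadata for one server, as the in-instance
# agent might see it after polling; field names are illustrative.
deployments:
  - name: deploy_webserver
    config: http://www.example.com/my_scripts/webserver/install.sh
    inputs:
      http_port: 8080
  - name: deploy_monitoring_agent
    config: http://www.example.com/my_scripts/monitoring/install.sh
    inputs: {}
```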




Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Thomas Spatzier
Steve Baker sba...@redhat.com wrote on 20.11.2013 09:51:34:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 20.11.2013 09:55
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/20/2013 09:29 PM, Clint Byrum wrote:
  Excerpts from Thomas Spatzier's message of 2013-11-19 23:35:40 -0800:
  Excerpts from Steve Baker's message on 19.11.2013 21:40:54:
  From: Steve Baker sba...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 19.11.2013 21:43
  Subject: Re: [openstack-dev] [Heat] HOT software configuration
  refined after design summit discussions
 
  snip
   I think there needs to be a CM tool specific agent delivered to the server
   which os-collect-config invokes. This agent will transform the config
   data (input values, CM script, CM specific specialness) to a CM tool
   invocation.
  
   How to define and deliver this agent is the challenge. Some options are:
   1) install it as part of the image customization/bootstrapping (golden
   images or cloud-init)
   2) define a (mustache?) template in the SoftwareConfig which
   os-collect-config transforms into the agent script, which
   os-collect-config then executes
   3) a CM tool specific implementation of SoftwareApplier builds and
   delivers a complete agent to os-collect-config which executes it
  
   I may be leaning towards 3) at the moment. Hopefully any agent can be
   generated with a sufficiently sophisticated base SoftwareApplier type,
   plus maybe some richer intrinsic functions.
   This is a good summary of options; about the same we had in mind. And we
   were also leaning towards 3. Probably the approach we would take is to
   get a SoftwareApplier running for one CM tool (e.g. Chef), then look at
   another tool (base shell scripts), and then see what the generic parts
   are that can be factored into a base class.
  
   The POC I'm working on is actually backed by a REST API which does dumb
   (but structured) storage of SoftwareConfig and SoftwareApplier entities.
   This has some interesting implications for managing SoftwareConfig
   resources outside the context of the stack which uses them, but lets not
   worry too much about that *yet*.
   Sounds good. We are also defining some blueprints to break down the
   overall software config topic. We plan to share them later this week,
   and then we can consolidate with your plans and see how we can best
   join forces.
  
   At this point it would be very helpful to spec out how specific CM tools
   are invoked with given inputs, script, and CM tool specific options.
   That's our plan; and we would probably start with scripts and chef.
  
   Maybe if you start with shell scripts, cfn-init and chef then we can all
   contribute other CM tools like os-config-applier, puppet, ansible,
   saltstack.
  
   Hopefully by then my POC will at least be able to create resources, if
   not deliver some data to servers.
   We've been thinking about getting metadata to the in-instance parts on
   the server and whether the resources you are building can serve the
   purpose. I.e. pass an endpoint for the SoftwareConfig resources to the
   instance and let the instance query the metadata from the resource.
   Sounds like this is what you had in mind, so that would be a good point
   for integrating the work. In the meantime, we can think of some shortcuts.
  
   Note that os-collect-config is intended to be a light-weight generic
   in-instance agent to do exactly this. Watch for Metadata changes, and
   feed them to an underlying tool in a predictable interface. I'd hope
   that any of the appliers would mostly just configure os-collect-config
   to run a wrapper that speaks os-collect-config's interface.
  
   The interface is defined in the README:
  
   https://pypi.python.org/pypi/os-collect-config
  
   It is inevitable that we will extend os-collect-config to be able to
   collect config data from whatever API these config applier resources
   make available. I would suggest then that we not all go off and reinvent
   os-collect-config for each applier, but rather enhance os-collect-config
   as needed and write wrappers for the other config tools which implement
   its interface.
  
   os-apply-config already understands this interface for obvious reasons.
  
   Bash scripts can use os-apply-config to extract individual values, as
   you might see in some of the os-refresh-config scripts that are run as
   part of tripleo. I don't think anything further is really needed there.
  
   For chef, some kind of ohai plugin to read os-collect-config's collected
   data would make sense.

Thanks for all that information, Clint. Fully agree that we should leverage
what is already there instead of re-inventing the wheel.

 
 I'd definitely start with occ as Clint outlines. It would be nice if occ
 only had to be configured to poll metadata for the OS::Nova::Server to
 fetch the aggregated data for the currently available SoftwareAppliers.

Yep, sounds like a plan

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Mike Spreitzer
Steve Baker sba...@redhat.com wrote on 11/19/2013 03:40:54 PM:
...
 How to define and deliver this agent is the challenge. Some options are:
 1) install it as part of the image customization/bootstrapping (golden
 images or cloud-init)

 2) define a (mustache?) template in the SoftwareConfig which
 os-collect-config transforms into the agent script, which
 os-collect-config then executes

I do not follow what you mean here.  Can you please elaborate a little?

 3) a CM tool specific implementation of SoftwareApplier builds and
 delivers a complete agent to os-collect-config which executes it

Could instead the agent be a package that the applier template asks to be
installed?

Thanks,
Mike


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Mike Spreitzer
Clint Byrum cl...@fewbar.com wrote on 11/19/2013 04:28:31 PM:
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org, 
 Date: 11/19/2013 04:30 PM
 Subject: Re: [openstack-dev] [Heat] HOT software configuration 
 refined after design summit discussions
 
 Excerpts from Steve Baker's message of 2013-11-19 13:06:21 -0800:
  On 11/20/2013 09:50 AM, Clint Byrum wrote:
   Excerpts from Steve Baker's message of 2013-11-18 12:52:04 -0800:
    Regarding apply_config/remove_config, if a SoftwareApplier resource is
    deleted it should trigger any remove_config and wait for the server to
    acknowledge when that is complete. This allows for any
    evacuation/deregistering workloads to be executed.
   
    I'm a little worried about the road that leads us down. Most
    configuration software defines forward progress only. Meaning, if you
    want something not there, you don't remove it from your assertions,
    you assert that it is not there.

I am worried too.  But I do not entirely follow your reasoning.  When I 
UPDATE a stack with a new template, am I supposed to write in that 
template not just what I want the stack to be but also how that differs 
from what it currently is?  That is not REST.  Not that I am a total REST 
zealot, but I am a fan of managing in terms of desired state.  But I agree 
there is a conflict between defining a 'remove' operation and the forward 
progress only mindset of most config tooling.

   ...
  A specific use-case I'm trying to address here is tripleo doing an
  update-replace on a nova compute node. The remove_config contains the
  workload to evacuate VMs and signal heat when the node is ready to be
  shut down. This is more involved than just uninstall the things.
  
  Could you outline in some more detail how you think this could be done?
  
 
 So for that we would not remove the software configuration for the
 nova-compute, we would assert that the machine needs vms evacuated.
 We want evacuation to be something we explicitly do, not a side effect
 of deleting things.

Really?  You want to force the user to explicitly say evacuate the VMs 
in all the various ways a host deletion can happen?  E.g., when an 
autoscaling group of hosts shrinks?

 Perhaps having delete hooks for starting delete
 work-flows is right, but it set off a red flag for me so I want to make
 sure we think it through.
 
 Also IIRC, evacuation is not necessarily an in-instance thing. It looks
 more like the weird thing we've been talking about lately which is
 how do we orchestrate tenant API's:
 
 https://etherpad.openstack.org/p/orchestrate-tenant-apis

This looks promising to me.

Regards,
Mike


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Steve Baker
On 11/21/2013 10:46 AM, Mike Spreitzer wrote:
 Clint Byrum cl...@fewbar.com wrote on 11/19/2013 04:28:31 PM:
  From: Clint Byrum cl...@fewbar.com
  To: openstack-dev openstack-dev@lists.openstack.org,
  Date: 11/19/2013 04:30 PM
  Subject: Re: [openstack-dev] [Heat] HOT software configuration
  refined after design summit discussions
 
  Excerpts from Steve Baker's message of 2013-11-19 13:06:21 -0800:
   On 11/20/2013 09:50 AM, Clint Byrum wrote:
Excerpts from Steve Baker's message of 2013-11-18 12:52:04 -0800:
Regarding apply_config/remove_config, if a SoftwareApplier resource is
deleted it should trigger any remove_config and wait for the server to
acknowledge when that is complete. This allows for any
evacuation/deregistering workloads to be executed.
    
I'm a little worried about the road that leads us down. Most configuration
software defines forward progress only. Meaning, if you want something not
there, you don't remove it from your assertions, you assert that it is
not there.

 I am worried too.  But I do not entirely follow your reasoning.  When
 I UPDATE a stack with a new template, am I supposed to write in that
 template not just what I want the stack to be but also how that
 differs from what it currently is?  That is not REST.  Not that I am a
 total REST zealot, but I am a fan of managing in terms of desired
 state.  But I agree there is a conflict between defining a 'remove'
 operation and the forward progress only mindset of most config tooling.

As I'm currently proposing, here are some stack update scenarios:
* update results in modified software config, apply_config will be
executed again on the affected server
* update results in a server that requires replacement. This results in:
  * execute the remove_config workload on that server. The
SoftwareApplier resource remains in DELETE_IN_PROGRESS until signalled
that remove_config is complete
  * delete the server
  * create the replacement server
  * execute apply_config on that server...
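A sketch of how the evacuation use case could hang off that update-replace flow. Everything beyond config_create and remove_config themselves is assumed: the script URLs are hypothetical and the exact signalling mechanism back to Heat is an open question in this thread:

```yaml
resources:
  compute_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config_create: http://www.example.com/my_scripts/compute/install.sh
      # Hypothetical remove_config workload: evacuate VMs off this host,
      # then signal Heat so the applier can leave DELETE_IN_PROGRESS and
      # the server can be deleted.
      remove_config: http://www.example.com/my_scripts/compute/evacuate.sh
```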
...
   A specific use-case I'm trying to address here is tripleo doing an
   update-replace on a nova compute node. The remove_config contains the
   workload to evacuate VMs and signal heat when the node is ready to be
   shut down. This is more involved than just uninstall the things.
  
   Could you outline in some more detail how you think this could be done?
  
 
  So for that we would not remove the software configuration for the
  nova-compute, we would assert that the machine needs vms evacuated.
  We want evacuation to be something we explicitly do, not a side effect
  of deleting things.

 Really?  You want to force the user to explicitly say evacuate the
 VMs in all the various ways a host deletion can happen?  E.g., when
 an autoscaling group of hosts shrinks?

Nobody is being forced. remove_config is entirely optional and only
exists for the more complex scenarios requiring evacuation/deregistering.

If remove_config is not specified, the SoftwareApplier should probably
go straight to DELETE_COMPLETE without waiting for any signal.

  Perhaps having delete hooks for starting delete
  work-flows is right, but it set off a red flag for me so I want to make
  sure we think it through.
 
  Also IIRC, evacuation is not necessarily an in-instance thing. It looks
  more like the weird thing we've been talking about lately which is
  how do we orchestrate tenant API's:
 
  https://etherpad.openstack.org/p/orchestrate-tenant-apis

 This looks promising to me.
It looks like these might be represented as SoftwareConfigs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2013-11-20 13:46:25 -0800:
 Clint Byrum cl...@fewbar.com wrote on 11/19/2013 04:28:31 PM:
  From: Clint Byrum cl...@fewbar.com
  To: openstack-dev openstack-dev@lists.openstack.org, 
  Date: 11/19/2013 04:30 PM
  Subject: Re: [openstack-dev] [Heat] HOT software configuration 
  refined after design summit discussions
  
  Excerpts from Steve Baker's message of 2013-11-19 13:06:21 -0800:
   On 11/20/2013 09:50 AM, Clint Byrum wrote:
Excerpts from Steve Baker's message of 2013-11-18 12:52:04 -0800:
Regarding apply_config/remove_config, if a SoftwareApplier resource is
deleted it should trigger any remove_config and wait for the server to
acknowledge when that is complete. This allows for any
evacuation/deregistering workloads to be executed.
 
I'm a little worried about the road that leads us down. Most configuration
software defines forward progress only. Meaning, if you want something
not there, you don't remove it from your assertions, you assert that it
is not there.
 
 I am worried too.  But I do not entirely follow your reasoning.  When I 
 UPDATE a stack with a new template, am I supposed to write in that 
 template not just what I want the stack to be but also how that differs 
 from what it currently is?  That is not REST.  Not that I am a total REST 
 zealot, but I am a fan of managing in terms of desired state.  But I agree 
 there is a conflict between defining a 'remove' operation and the forward 
 progress only mindset of most config tooling.
 

I am worried about the explosion of possibilities that comes from trying
to deal with all of the diff's possible inside an instance. If there is an
actual REST interface for a thing, then yes, let's use that. For instance,
if we are using docker, there is in fact a very straight forward way to
say remove entity X. If we are using packages we have the same thing.
However, if we are just trying to write chef configurations, we have to
write reverse chef configurations.

What I meant to convey is let's give this piece of the interface a lot of
thought. Not this is wrong to even have. Given a couple of days now,
I think we do need apply and remove. We should also provide really
solid example templates for this concept.

...
   A specific use-case I'm trying to address here is tripleo doing an
   update-replace on a nova compute node. The remove_config contains the
   workload to evacuate VMs and signal heat when the node is ready to be
   shut down. This is more involved than just uninstall the things.
   
   Could you outline in some more detail how you think this could be 
 done?
   
  
  So for that we would not remove the software configuration for the
  nova-compute, we would assert that the machine needs vms evacuated.
  We want evacuation to be something we explicitly do, not a side effect
  of deleting things.
 
 Really?  You want to force the user to explicitly say evacuate the VMs 
 in all the various ways a host deletion can happen?  E.g., when an 
 autoscaling group of hosts shrinks?
 

Autoscaling doesn't really fly with stateful services IMO. Also for
TripleO's use case, auto-scaling is not really a high priority. Hardware
isn't nearly as easily allocatable as VMs.

Anyway, there is a really complicated work-flow for decommissioning
any stateful service, and it differs wildly between them. I do want to
have a place to define that work-flow and reliably trigger it when it
needs to be triggered. I do not want it to _only_ be available in the
delete this resource case, and I also do not want it to _always_
be run in that case, as I may legitimately be destroying the data too.
I need a way to express that intention, and in my mind, the way to do
that is to first complete an evacuation and then delete the thing.

Better ideas are _most_ welcome.



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Steve Baker
On 11/21/2013 11:41 AM, Clint Byrum wrote:
 Excerpts from Mike Spreitzer's message of 2013-11-20 13:46:25 -0800:
 Clint Byrum cl...@fewbar.com wrote on 11/19/2013 04:28:31 PM:
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org, 
 Date: 11/19/2013 04:30 PM
 Subject: Re: [openstack-dev] [Heat] HOT software configuration 
 refined after design summit discussions

 Excerpts from Steve Baker's message of 2013-11-19 13:06:21 -0800:
 On 11/20/2013 09:50 AM, Clint Byrum wrote:
 Excerpts from Steve Baker's message of 2013-11-18 12:52:04 -0800:
 Regarding apply_config/remove_config, if a SoftwareApplier resource 
 is
 deleted it should trigger any remove_config and wait for the server 
 to
 acknowledge when that is complete. This allows for any
 evacuation/deregistering workloads to be executed.

 I'm a little worried about the road that leads us down. Most 
 configuration
 software defines forward progress only. Meaning, if you want 
 something
 not there, you don't remove it from your assertions, you assert that 
 it
 is not there.
 I am worried too.  But I do not entirely follow your reasoning.  When I 
 UPDATE a stack with a new template, am I supposed to write in that 
 template not just what I want the stack to be but also how that differs 
 from what it currently is?  That is not REST.  Not that I am a total REST 
 zealot, but I am a fan of managing in terms of desired state.  But I agree 
 there is a conflict between defining a 'remove' operation and the forward 
 progress only mindset of most config tooling.

 I am worried about the explosion of possibilities that comes from trying
 to deal with all of the diff's possible inside an instance. If there is an
 actual REST interface for a thing, then yes, let's use that. For instance,
 if we are using docker, there is in fact a very straight forward way to
 say remove entity X. If we are using packages we have the same thing.
 However, if we are just trying to write chef configurations, we have to
 write reverse chef configurations.

 What I meant to convey is let's give this piece of the interface a lot of
 thought. Not this is wrong to even have. Given a couple of days now,
 I think we do need apply and remove. We should also provide really
 solid example templates for this concept.
You're right, I'm already starting to see issues with my current approach.

This smells like a new blueprint. I'll remove it from the scope of the
current software config work and raise a blueprint to track remove-config.
 ...
 A specific use-case I'm trying to address here is tripleo doing an
 update-replace on a nova compute node. The remove_config contains the
 workload to evacuate VMs and signal heat when the node is ready to be
 shut down. This is more involved than just uninstall the things.

 Could you outline in some more detail how you think this could be 
 done?
 So for that we would not remove the software configuration for the
 nova-compute, we would assert that the machine needs vms evacuated.
 We want evacuation to be something we explicitly do, not a side effect
 of deleting things.
 Really?  You want to force the user to explicitly say evacuate the VMs 
 in all the various ways a host deletion can happen?  E.g., when an 
 autoscaling group of hosts shrinks?

 Autoscaling doesn't really fly with stateful services IMO. Also for
 TripleO's use case, auto-scaling is not really a high priority. Hardware
 isn't nearly as easily allocatable as VMs.

 Anyway, there is a really complicated work-flow for decommissioning
 any stateful service, and it differs wildly between them. I do want to
 have a place to define that work-flow and reliably trigger it when it
 needs to be triggered. I do not want it to _only_ be available in the
 delete this resource case, and I also do not want it to _always_
 be run in that case, as I may legitimately be destroying the data too.
 I need a way to express that intention, and in my mind, the way to do
 that is to first complete an evacuation and then delete the thing.

 Better ideas are _most_ welcome.



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Mike Spreitzer
Regarding my previous email:

 Steve Baker sba...@redhat.com wrote on 11/19/2013 03:40:54 PM:
 ...
  How to define and deliver this agent is the challenge. Some options 
are:
  1) install it as part of the image customization/bootstrapping (golden
  images or cloud-init) 
 
  2) define a (mustache?) template in the SoftwareConfig which
  os-collect-config transforms into the agent script, which
  os-collect-config then executes 
 
 I do not follow what you mean here.  Can you please elaborate a little? 
 
  3) a CM tool specific implementation of SoftwareApplier builds and
  delivers a complete agent to os-collect-config which executes it
 
 Could instead the agent be a package that the applier templates asks
 to be installed? 

We may not need the active agent to be CM-tool specific.  I think the
remarks about os-collect-config show this.  Maybe all we need is a
CM-tool-specific package to be installed that supplies the hook which
os-collect-config calls (I use the singular because of our earlier
conversation that affirmed the expectation of only one CM tool per VM).


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2013-11-20 15:16:45 -0800:
 Clint Byrum cl...@fewbar.com wrote on 11/20/2013 05:41:16 PM:
 
  Autoscaling doesn't really fly with stateful services IMO.
 
 I presume you're concerned about the auto part, not the scaling.  Even 
 a stateful group is something you may want to scale; it just takes a more 
 complex set of operations to accomplish that.  If we can make a Heat 
 autoscaling group invoke the right set of operations, why not?
 

It is most definitely possible and necessary. We _must_ do this.

It does not fly with today's limited auto-scaling.

If we can get the link between orchestration and work-flow working well,
the end result should be harmonious automation and then automatic scaling
will indeed be possible for stateful services.



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Thomas Spatzier
Excerpts from Steve Baker's message on 21.11.2013 00:00:47:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 21.11.2013 00:04
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/21/2013 11:41 AM, Clint Byrum wrote:
  Excerpts from Mike Spreitzer's message of 2013-11-20 13:46:25 -0800:
  Clint Byrum cl...@fewbar.com wrote on 11/19/2013 04:28:31 PM:

snip

 
  I am worried about the explosion of possibilities that comes from trying
  to deal with all of the diff's possible inside an instance. If there is an
  actual REST interface for a thing, then yes, let's use that. For instance,
  if we are using docker, there is in fact a very straight forward way to
  say remove entity X. If we are using packages we have the same thing.
  However, if we are just trying to write chef configurations, we have to
  write reverse chef configurations.
 
  What I meant to convey is let's give this piece of the interface a lot of
  thought. Not this is wrong to even have. Given a couple of days now,
  I think we do need apply and remove. We should also provide really
  solid example templates for this concept.
 You're right, I'm already starting to see issues with my current approach.

 This smells like a new blueprint. I'll remove it from the scope of the
 current software config work and raise a blueprint to track remove-config.

So I read thru those recent discussions and in parallel also started to
update the design wiki. BTW, nanjj renamed the wiki to [1] (but also made a
redirect from the previous ...-WIP page) and linked it as spec to BP [2].

I'll leave out the remove-config thing for now. While thinking about the
overall picture, I came up with some other comments:

I thought about the name SoftwareApplier some more, and while it is clear
what it does (it applies a software config to a server), the naming is not
really consistent with the other resources in Heat. Every other resource
type is named after the thing you get when the template gets instantiated
(a Server, a FloatingIP, a VolumeAttachment, etc.). In the case of
SoftwareApplier, what you actually get from a user perspective is a
deployed instance of the piece of software described by a SoftwareConfig.
Therefore, I was calling it SoftwareDeployment originally, because you get
a software deployment (according to a config). Any comments on that name?

If we think this thru with respect to remove-config (even though this
needs more thought), a SoftwareApplier (that thing itself) would not really
go to state DELETE_IN_PROGRESS during an update. It is always there on the
VM but the software it deploys gets deleted and then reapplied or
whatever ...

Now thinking more about update scenarios (which we can leave for an
iteration after the initial deployment is working), in my mental model it
would be more consistent to have information for handle_create,
handle_delete, handle_update kinds of events all defined in the
SoftwareConfig resource. A SoftwareConfig represents configuration
information for one specific piece of software, e.g. a web server. So it
could provide all the information you need to install it, to uninstall it,
or to update its config. By updating the SoftwareApplier's (or
SoftwareDeployment's - my preferred name) state at runtime, the in-instance
tools would grab the respective script or whatever and run it.

So SoftwareConfig could look like:

resources:
  my_webserver_config:
    type: OS::Heat::SoftwareConfig
    properties:
      http_port:
        type: number
      # some more config props

      config_create: http://www.example.com/my_scripts/webserver/install.sh
      config_delete: http://www.example.com/my_scripts/webserver/uninstall.sh
      config_update: http://www.example.com/my_scripts/webserver/applyupdate.sh


At runtime, when a SoftwareApplier gets created, it looks for the
'config_create' hook and triggers that automation. When it gets deleted, it
looks for the 'config_delete' hook and so on. Only config_create is
mandatory.
I think that would also give us nice extensibility for future use cases.
For example, Heat today does not support something like stop-stack or
start-stack which would be pretty useful though. If we have it one day, we
would just add a 'config_start' hook to the SoftwareConfig.
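
To make the model concrete, a deployment of such a config might be written
as below. This is a sketch only: the OS::Heat::SoftwareApplier type name and
the my_server resource are assumptions, while apply_config, server and
input_values are the property names proposed earlier in this thread:

```yaml
# Sketch: OS::Heat::SoftwareApplier and my_server are assumptions,
# not an implemented resource type.
my_webserver_deployment:
  type: OS::Heat::SoftwareApplier
  properties:
    apply_config: {get_resource: my_webserver_config}
    server: {get_resource: my_server}
    input_values:
      http_port: 8080
```

On stack create, the in-instance tool would run install.sh (config_create);
on stack delete, uninstall.sh (config_delete); and on an update to the
config, applyupdate.sh (config_update).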


[1]
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec
[2] https://blueprints.launchpad.net/heat/+spec/hot-software-config

  ...
  A specific use-case I'm trying to address here is tripleo doing an
  update-replace on a nova compute node. The remove_config contains the
  workload to evacuate VMs and signal heat when the node is ready to be
  shut down. This is more involved than just uninstall the things.
 
  Could you outline in some more detail how you think this could be
  done?
  So for that we would not remove the software configuration for the
  nova-compute, we would assert that the machine needs vms

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-19 Thread Steve Baker
On 11/19/2013 08:37 PM, Thomas Spatzier wrote:
 Steve Baker sba...@redhat.com wrote on 18.11.2013 21:52:04:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 18.11.2013 21:54
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/19/2013 02:22 AM, Thomas Spatzier wrote:
 Hi all,

 I have reworked the wiki page [1] I created last week to reflect
  discussions we had on the mail list and in IRC. From ML discussions last
  week it looked like we were all basically on the same page (with some
 details to be worked out), and I hope the new draft eliminates some
 confusion that the original draft had.

 [1]
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 Thanks Thomas, this looks really good. I've actually started on a POC
 which maps to this model.
 Good to hear that, Steve :-)
 Now that we are converging, should we consolidate the various wiki pages
 and just have one? E.g. copy the complete contents of
 hot-software-config-WIP to your original hot-software-config, or deprecate
 all others and make hot-software-config-WIP the master?
Lets just bless hot-software-config-WIP and add to it as we flesh out
the implementation.

 I've used different semantics which you may actually prefer some of,
 please comment below.

 Resource types:
  SoftwareConfig -> SoftwareConfig (yay!)
  SoftwareDeployment -> SoftwareApplier - less typing, less mouth-fatigue
 I'm ok with SoftwareApplier. If we don't hear objections, I can change it
 in the wiki.

 SoftwareConfig properties:
  parameters -> inputs - just because parameters is overloaded already.
 Makes sense.

 Although if the CM tool has their own semantics for inputs then that
 should be used in that SoftwareConfig resource implementation instead.
  outputs -> outputs

 SoftwareApplier properties:
  software_config -> apply_config - because there will sometimes be a
 corresponding remove_config
 Makes sense, and the remove_config thought is a very good point!

  server -> server
  parameters -> input_values - to match the 'inputs' schema property in
 SoftwareConfig
 Agree on input_values.

 Other comments on hot-software-config-WIP:

 Regarding apply_config/remove_config, if a SoftwareApplier resource is
 deleted it should trigger any remove_config and wait for the server to
 acknowledge when that is complete. This allows for any
 evacuation/deregistering workloads to be executed.

 I'm unclear yet what the SoftwareConfig 'role' is for, unless the role
 specifies the contract for a given inputs and outputs schema? How would
 this be documented or enforced? I'm inclined to leave it out for now.
 So about 'role', as I stated in the wiki, my thinking was that there will
 be different SoftwareConfig and SoftwareApplier implementations per CM tool
 (more on that below), since all CM tools will probably have their specific
 metadata and runtime implementation. So in my example I was using Chef, and
 'role' is just a Chef concept, i.e. you take a cookbook and configure a
 specific Chef role on a server.
OK, its Chef specific; I'm fine with that.
 It should be possible to write a SoftwareConfig type for a new CM tool
 as a provider template. This has some nice implications for deployers
 and users.
 I think provider templates are a good thing to have clean componentization
 for re-use. However, I think it still would be good to allow users to
 define their SoftwareConfigs inline in a template for simple use cases. I
 heard that requirement in several posts on the ML last week.
 The question is whether we can live with a single implementation of
 SoftwareConfig and SoftwareApplier then (see also below).
Yes, a provider template would encapsulate some base SoftwareConfig
resource type, but users would be free to use this type inline in their
template too.
 My hope is that there will not need to be a different SoftwareApplier
 type written for each CM tool. But maybe there will be one for each
 delivery mechanism. The first implementation will use metadata polling
 and signals, another might use Marconi. Bootstrapping an image to
 consume a given CM tool and applied configuration data is something that
 we need to do, but we can make it beyond the scope of this particular
 proposal.
 I was thinking about a single implementation, too. However, I cannot really
  imagine how one single implementation could handle both the different
  metadata of different CM tools and the different runtime implementations. I
  think we would want to support at least a handful of the most popular
 tools, but cannot see at the moment how to cover them all in one
 implementation. My thought was that there could be a super-class for common
 behavior, and then plugins with specific behavior for each tool.

 Anyway, all of that needs to be verified, so working on PoC patches is
 definitely the right thing to do. For example, if we work on implementation
 for two CM tools (e.g. Chef and simple scripts), we can

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-19 Thread Steve Baker
On 11/20/2013 09:50 AM, Clint Byrum wrote:
 Excerpts from Steve Baker's message of 2013-11-18 12:52:04 -0800:
 Regarding apply_config/remove_config, if a SoftwareApplier resource is
 deleted it should trigger any remove_config and wait for the server to
 acknowledge when that is complete. This allows for any
 evacuation/deregistering workloads to be executed.

 I'm a little worried about the road that leads us down. Most configuration
 software defines forward progress only. Meaning, if you want something
 not there, you don't remove it from your assertions, you assert that it
 is not there.

 The reason this is different than the way we operate with resources is
 that resources are all under Heat's direct control via well defined
 APIs. In-instance things, however, will be indirectly controlled. So I
 feel like focusing on a diff mechanism for user-deployed tools may be
 unnecessary and might confuse. I'd much rather have a converge
 mechanism for the users to focus on.


A specific use-case I'm trying to address here is tripleo doing an
update-replace on a nova compute node. The remove_config contains the
workload to evacuate VMs and signal heat when the node is ready to be
shut down. This is more involved than just uninstall the things.

Could you outline in some more detail how you think this could be done?



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-19 Thread Clint Byrum
Excerpts from Steve Baker's message of 2013-11-19 13:06:21 -0800:
 On 11/20/2013 09:50 AM, Clint Byrum wrote:
  Excerpts from Steve Baker's message of 2013-11-18 12:52:04 -0800:
  Regarding apply_config/remove_config, if a SoftwareApplier resource is
  deleted it should trigger any remove_config and wait for the server to
  acknowledge when that is complete. This allows for any
  evacuation/deregistering workloads to be executed.
 
  I'm a little worried about the road that leads us down. Most configuration
  software defines forward progress only. Meaning, if you want something
  not there, you don't remove it from your assertions, you assert that it
  is not there.
 
  The reason this is different than the way we operate with resources is
  that resources are all under Heat's direct control via well defined
  APIs. In-instance things, however, will be indirectly controlled. So I
  feel like focusing on a diff mechanism for user-deployed tools may be
  unnecessary and might confuse. I'd much rather have a converge
  mechanism for the users to focus on.
 
 
 A specific use-case I'm trying to address here is tripleo doing an
 update-replace on a nova compute node. The remove_config contains the
 workload to evacuate VMs and signal heat when the node is ready to be
 shut down. This is more involved than just uninstall the things.
 
 Could you outline in some more detail how you think this could be done?
 

So for that we would not remove the software configuration for the
nova-compute, we would assert that the machine needs vms evacuated.
We want evacuation to be something we explicitly do, not a side effect
of deleting things. Perhaps having delete hooks for starting delete
work-flows is right, but it set off a red flag for me so I want to make
sure we think it through.

Also IIRC, evacuation is not necessarily an in-instance thing. It looks
more like the weird thing we've been talking about lately which is
how do we orchestrate tenant API's:

https://etherpad.openstack.org/p/orchestrate-tenant-apis



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-18 Thread Thomas Spatzier
Hi all,

I have reworked the wiki page [1] I created last week to reflect
discussions we had on the mail list and in IRC. From ML discussions last
week it looked like we were all basically on the same page (with some
details to be worked out), and I hope the new draft eliminates some
confusion that the original draft had.

[1] https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP

Regards,
Thomas




Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-18 Thread Thomas Spatzier
Steve Baker sba...@redhat.com wrote on 18.11.2013 21:52:04:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 18.11.2013 21:54
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/19/2013 02:22 AM, Thomas Spatzier wrote:
  Hi all,
 
  I have reworked the wiki page [1] I created last week to reflect
  discussions we had on the mail list and in IRC. From ML discussions last
  week it looked like we were all basically on the same page (with some
  details to be worked out), and I hope the new draft eliminates some
  confusion that the original draft had.
 
  [1]
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 
 Thanks Thomas, this looks really good. I've actually started on a POC
 which maps to this model.

Good to hear that, Steve :-)
Now that we are converging, should we consolidate the various wiki pages
and just have one? E.g. copy the complete contents of
hot-software-config-WIP to your original hot-software-config, or deprecate
all others and make hot-software-config-WIP the master?


 I've used different semantics which you may actually prefer some of,
 please comment below.

 Resource types:
 SoftwareConfig -> SoftwareConfig (yay!)
 SoftwareDeployment -> SoftwareApplier - less typing, less mouth-fatigue

I'm ok with SoftwareApplier. If we don't hear objections, I can change it
in the wiki.


 SoftwareConfig properties:
 parameters -> inputs - just because parameters is overloaded already.

Makes sense.

 Although if the CM tool has their own semantics for inputs then that
 should be used in that SoftwareConfig resource implementation instead.
 outputs -> outputs

 SoftwareApplier properties:
 software_config -> apply_config - because there will sometimes be a
 corresponding remove_config

Makes sense, and the remove_config thought is a very good point!

 server -> server
 parameters -> input_values - to match the 'inputs' schema property in
 SoftwareConfig

Agree on input_values.
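
Put together, the renamed properties would read roughly as follows. This is
a sketch: the OS::Heat::SoftwareApplier type name and the db_server resource
are assumptions, while the property names are the ones agreed above:

```yaml
resources:
  db_config:
    type: OS::Heat::SoftwareConfig
    properties:
      inputs:                  # was: parameters
        db_name:
          type: string
      outputs:
        db_endpoint:
          type: string

  db_applier:
    type: OS::Heat::SoftwareApplier        # was: SoftwareDeployment
    properties:
      apply_config: {get_resource: db_config}   # was: software_config
      server: {get_resource: db_server}         # db_server assumed
      input_values:                             # was: parameters
        db_name: production
```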


 Other comments on hot-software-config-WIP:

 Regarding apply_config/remove_config, if a SoftwareApplier resource is
 deleted it should trigger any remove_config and wait for the server to
 acknowledge when that is complete. This allows for any
 evacuation/deregistering workloads to be executed.

 I'm unclear yet what the SoftwareConfig 'role' is for, unless the role
 specifies the contract for a given inputs and outputs schema? How would
 this be documented or enforced? I'm inclined to leave it out for now.

So about 'role', as I stated in the wiki, my thinking was that there will
be different SoftwareConfig and SoftwareApplier implementations per CM tool
(more on that below), since all CM tools will probably have their specific
metadata and runtime implementation. So in my example I was using Chef, and
'role' is just a Chef concept, i.e. you take a cookbook and configure a
specific Chef role on a server.


 It should be possible to write a SoftwareConfig type for a new CM tool
 as a provider template. This has some nice implications for deployers
 and users.

I think provider templates are a good thing to have clean componentization
for re-use. However, I think it still would be good to allow users to
define their SoftwareConfigs inline in a template for simple use cases. I
heard that requirement in several posts on the ML last week.
The question is whether we can live with a single implementation of
SoftwareConfig and SoftwareApplier then (see also below).


 My hope is that there will not need to be a different SoftwareApplier
 type written for each CM tool. But maybe there will be one for each
 delivery mechanism. The first implementation will use metadata polling
 and signals, another might use Marconi. Bootstrapping an image to
 consume a given CM tool and applied configuration data is something that
 we need to do, but we can make it beyond the scope of this particular
 proposal.

I was thinking about a single implementation, too. However, I cannot really
imagine how one single implementation could handle both the different
metadata of different CM tools and the different runtime implementations. I
think we would want to support at least a handful of the most popular
tools, but cannot see at the moment how to cover them all in one
implementation. My thought was that there could be a super-class for common
behavior, and then plugins with specific behavior for each tool.

Anyway, all of that needs to be verified, so working on PoC patches is
definitely the right thing to do. For example, if we work on implementation
for two CM tools (e.g. Chef and simple scripts), we can probably see if one
common implementation is possible or not.
Someone from our team is going to write a provider for Chef to try things
out. I think that can be aligned nicely with your work.


 The POC I'm working on is actually backed by a REST API which does dumb
 (but structured) storage of SoftwareConfig and SoftwareApplier entities

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-14 Thread Mike Spreitzer
It seems to me we have been discussing a proposal whose write-up 
intertwines two ideas: (1) making software components look like resources, 
and (2) using nested stacks and environments to achieve the pattern of 
definitions and uses.  The ideas are separable, and I think the discussion 
has sort of revealed that.  There are issues with both.  Regarding the 
first, I think Clint has been pursuing the critical issue.

Clint Byrum cl...@fewbar.com wrote on 11/12/2013 07:34:34 PM:
 ...
  I have implementation questions about both of these approaches 
though,
  as it appears they'd have to reach backward in the graph to insert
  their configuration, or have a generic bucket for all configuration
  
  Yeah, it does depend on the implementation. If we use Mistral the
  agent will need to ask Mistral for the tasks that apply to the server.
  
  $ mistral task-consume \
--tags=instance_id=$(my_instance_id);stack_id=$(stack_id)
  
 
 Actually that makes perfect sense. Thanks. If we have a hidden work-flow
 handle for any resources that have need of it passed in much the same
 way we pass in the cfn-metadata-server now, that would allow us to write
 our work-flow afterward. The image can decide when it wants to start
 doing work-flows and can decide to just spin until the work-flow exists
 and is enabled. For the tiny-work-flow case of one task it works the
 same as the complicated work-flow, so I think this sounds like a
 workable plan.. assuming a work-flow service. :)

If I understand correctly, Clint is dismayed at the idea of a heat engine 
plugin reaching backward to find the software configs needed to apply to 
a server, but is OK with a task in a taskflow doing that.  It seems like a 
double standard to me.  And it makes the software config stuff depend on 
an essentially independent big change, which is dismaying to those of us 
eager to make progress on software config.  How about allowing a Heat 
plugin to query for dependents in the graph?
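As a toy illustration (not Heat code) of what "query for dependents in the
graph" means: given edges of the form "A depends on B", a server resource's
plugin would ask for everything that declares a dependency on it.

```python
# Toy dependency graph: each key depends on the resources in its list.
graph = {
    'config1': ['web_server'],   # config1 depends on web_server
    'config2': ['web_server'],
    'volume': ['db_server'],
}

def dependents_of(resource, edges):
    """Return the resources that declare a dependency on `resource`."""
    return sorted(name for name, deps in edges.items() if resource in deps)

print(dependents_of('web_server', graph))  # -> ['config1', 'config2']
```

The open question in the thread is whether this backward lookup belongs in a
heat engine plugin or in a workflow task, not whether it is computable.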

Regarding issue (2), look at where we are going.  If I write a template 
that I want to share with you, and that template exercises the 
definition/use distinction, I have to give you: (i) the templates that 
comprise my definitions, (ii) my template that has my uses, and (iii) a 
*fragment* of environment that binds the templates in (i) to the names 
used for them in (ii).  This is a pretty ragged assembly of stuff.
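Concretely, piece (iii) would be a resource_registry fragment; Heat
environments already use resource_registry to bind provider templates to
type names, though the type and file names below are invented:

```yaml
# (iii) Environment fragment binding the provider templates from (i)
# to the type names used in the consuming template (ii).
resource_registry:
  "My::Software::MySQL": mysql_chef.yaml
  "My::Software::WebApp": webapp_chef.yaml
```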

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Thomas Spatzier
Excerpts from Clint Byrum's message on 12.11.2013 19:32:50:
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org,
 Date: 12.11.2013 19:35
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:
 
  Hi all,
 
  I have just posted the following wiki page to reflect a refined
proposal
snip
 Hi Thomas, thanks for spelling this out clearly.

 I am still -1 on anything that specifies the place a configuration is
 hosted inside the configuration definition itself. Because configurations
 are encapsulated by servers, it makes more sense to me that the servers
 (or server groups) would specify their configurations. If changing to a

IMO the current proposal does _not_ hardcode the concrete hosting inside the
component definition. The component definition lives in an external template
file, and all we do is give it a pointer to the server at deploy time so
that the implementation can perform whatever is needed at that time.
The resource in the actual template file is like the intermediate
association resource you are suggesting below (similar to what
VolumeAttachment does), so this is the place where you say which component
gets deployed where. This represents a concrete use of a software
component. Again, all we do is pass in a pointer to the server where _this
use_ of the software component shall be installed.

 more logical model is just too hard for TOSCA to adapt to, then I suggest
 this be an area that TOSCA differs from Heat. We don't need two models

The current proposal was done completely unrelated to TOSCA; it is really
just an attempt at a pragmatic approach for solving the use cases we talked
about. I don't really care in which direction the relations point. Both
ways can easily be mapped to TOSCA. I just think the current proposal is
intuitive, at least to me. And you could see it as a kind of short notation
that avoids another association class.

 for communicating configurations to servers, and I'd prefer Heat stay
 focused on making HOT template authors' and users' lives better.

 I have seen an alternative approach which separates a configuration
 definition from a configuration deployer. This at least makes it clear
 that the configuration is a part of a server. In pseudo-HOT:

 resources:
   WebConfig:
 type: OS::Heat::ChefCookbook
 properties:
   cookbook_url: https://some.test/foo
   parameters:
 endpoint_host:
   type: string
   WebServer:
 type: OS::Nova::Server
 properties:
   image: webserver
   flavor: 100
   DeployWebConfig:
 type: OS::Heat::ConfigDeployer
 properties:
   configuration: {get_resource: WebConfig}
   on_server: {get_resource: WebServer}
   parameters:
 endpoint_host: {get_attribute: [ WebServer, first_ip]}

The DeployWebConfig association class actually is the 'mysql' resource in
the template on the wiki page; see the Design alternative section I put in.
That would be fine with me as well.


snip




Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Thomas Spatzier
Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 12.11.2013
21:27:13:
 From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 12.11.2013 21:29
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 Hi,

 I agree with Clint that component placement specified inside
 component configuration is not a right thing. I remember that mostly
 everyone agreed that hosted_on should not be in HOT templates.
 When one specifies placement explicitly inside a component definition
 it prevents the following:
 1. Reusability - you can't reuse component without creating its
 definition copy with another placement parameter.

See my reply to Clint's mail. The deployment location in form of the
server reference is _not_ hardcoded in the component definition. All we
do is provide a pointer to the server where the software shall be deployed,
at deploy time. You can use a component definition in many places, and in
each place where you use it you provide a pointer to the target server.

 2. Composability - it will be no clear way to express composable
 configurations. There was a clear way in a template showed during
 design session where server had a list of components to be placed.

I think we have full composability with the deployment resources that
mark uses of software component definitions.

 3. Deployment order - some components should be placed in strict
 order and it will be much easier just make an ordered list of
 components then express artificial dependencies between them just
 for ordering.

With the deployment resources and Heat's normal way of handling
dependencies between any resources, we should be able to have proper
ordering. I agree that strict ordering is probably the easiest way of doing
it, but we have implementations that do deployment in a more flexible
manner without any problems.
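Using deployment resources like those in the pseudo-HOT examples in this
thread, such ordering could be expressed through the usual depends_on
mechanism (the type names are assumed, not agreed):

```yaml
# Hypothetical pseudo-HOT: ordering two deployments on one server via
# Heat's normal dependency handling rather than an ordered list.
resources:
  deploy_db:
    type: OS::Heat::ConfigDeployer    # assumed type name
    properties:
      configuration: {get_resource: db_config}
      on_server: {get_resource: app_server}
  deploy_app:
    type: OS::Heat::ConfigDeployer
    depends_on: deploy_db             # app is configured only after the DB
    properties:
      configuration: {get_resource: app_config}
      on_server: {get_resource: app_server}
```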


 Thanks
 Georgy


 On Tue, Nov 12, 2013 at 10:32 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:
 
  Hi all,
 
  I have just posted the following wiki page to reflect a refined
proposal
  for HOT software configuration based on discussions at the design
summit
  last week. Angus also put a sample up in an etherpad last week, but we
did
  not have enough time to go thru it in the design session. My write-up
is
  based on Angus' sample, actually a refinement, and on discussions we
had in
  breaks, plus it is trying to reflect all the good input from ML
discussions
  and Steve Baker's initial proposal.
 
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 
  Please review and provide feedback.

 Hi Thomas, thanks for spelling this out clearly.

 I am still -1 on anything that specifies the place a configuration is
 hosted inside the configuration definition itself. Because configurations
 are encapsulated by servers, it makes more sense to me that the servers
 (or server groups) would specify their configurations. If changing to a
 more logical model is just too hard for TOSCA to adapt to, then I suggest
 this be an area that TOSCA differs from Heat. We don't need two models
 for communicating configurations to servers, and I'd prefer Heat stay
 focused on making HOT template authors' and users' lives better.

 I have seen an alternative approach which separates a configuration
 definition from a configuration deployer. This at least makes it clear
 that the configuration is a part of a server. In pseudo-HOT:

 resources:
   WebConfig:
     type: OS::Heat::ChefCookbook
     properties:
       cookbook_url: https://some.test/foo
       parameters:
         endpoint_host:
           type: string
   WebServer:
     type: OS::Nova::Server
     properties:
       image: webserver
       flavor: 100
   DeployWebConfig:
     type: OS::Heat::ConfigDeployer
     properties:
       configuration: {get_resource: WebConfig}
       on_server: {get_resource: WebServer}
       parameters:
         endpoint_host: {get_attribute: [ WebServer, first_ip]}

 I have implementation questions about both of these approaches though,
 as it appears they'd have to reach backward in the graph to insert
 their configuration, or have a generic bucket for all configuration
 to be inserted. IMO that would look a lot like the method I proposed,
 which was to just have a list of components attached directly to the
 server like this:

 components:
   WebConfig:
     type: Chef::Cookbook
     properties:
       cookbook_url: https://some.test/foo
       parameters:
          endpoint_host:
           type: string
 resources:
   WebServer:
     type: OS::Nova::Server
     properties:
       image: webserver
       flavor: 100
     components:
       - webconfig:
         component: {get_component: WebConfig}
         parameters:
           endpoint_host: {get_attribute: [ WebServer, first_ip ]}

 Of course

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Clint Byrum
Excerpts from Thomas Spatzier's message of 2013-11-13 00:28:59 -0800:
 Angus Salkeld asalk...@redhat.com wrote on 13.11.2013 00:22:44:
  From: Angus Salkeld asalk...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 13.11.2013 00:25
  Subject: Re: [openstack-dev] [Heat] HOT software configuration
  refined after design summit discussions
 
  On 12/11/13 10:32 -0800, Clint Byrum wrote:
  Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:
  
   Hi all,
  
   I have just posted the following wiki page to reflect a refined
 proposal
   for HOT software configuration based on discussions at the design
 summit
   last week. Angus also put a sample up in an etherpad last week, but we
 did
   not have enough time to go thru it in the design session. My write-up
 is
  based on Angus' sample, actually a refinement, and on discussions we
 had in
   breaks, plus it is trying to reflect all the good input from ML
 discussions
   and Steve Baker's initial proposal.
  
  
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
  
   Please review and provide feedback.
  
  Hi Thomas, thanks for spelling this out clearly.
  
  I am still -1 on anything that specifies the place a configuration is
  hosted inside the configuration definition itself. Because
 configurations
  are encapsulated by servers, it makes more sense to me that the servers
  (or server groups) would specify their configurations. If changing to a
  more logical model is just too hard for TOSCA to adapt to, then I
 suggest
  this be an area that TOSCA differs from Heat. We don't need two models
  for communicating configurations to servers, and I'd prefer Heat stay
  focused on making HOT template authors' and users' lives better.
  
  I have seen an alternative approach which separates a configuration
  definition from a configuration deployer. This at least makes it clear
  that the configuration is a part of a server. In pseudo-HOT:
  
  resources:
WebConfig:
  type: OS::Heat::ChefCookbook
  properties:
cookbook_url: https://some.test/foo
parameters:
  endpoint_host:
type: string
WebServer:
  type: OS::Nova::Server
  properties:
image: webserver
flavor: 100
DeployWebConfig:
  type: OS::Heat::ConfigDeployer
  properties:
configuration: {get_resource: WebConfig}
on_server: {get_resource: WebServer}
parameters:
  endpoint_host: {get_attribute: [ WebServer, first_ip]}
 
 
  This is what Thomas defined, with one optimisation.
  - The webconfig is a yaml template.
 
  As you say the component is static - if so why even put it inline in
  the template (well that was my thinking, it seems like a template not
  really a resource).
 
 Yes, exactly. Our idea was to put it in its own file since it is really
 static and having it in its own file makes it much more reusable.
 With 'WebConfig' defined inline in the template as in the snippet above,
 you will have to update many template files where you use the component,
 whereas you will only have to touch one place when it is in its own file.
 Ok, the example above looks simple, but in reality we will see more complex
 sets of parameters etc.
 Maybe for very simple use cases, we can allow a shortcut of inlining it in
 the template (I mentioned this in the wiki) and avoid the need for a
 separate file.
 

I think I understand now, and we're all basically on the same page. As
usual, I was confused by the subtleties.

I think the in-line capability is critical to have in the near-term
plan, but would +2 an implementation that left it out at the beginning.

Before we ratify this and people run off and write code, I'd like to
present my problems in TripleO and try to see if I can express them
using the spec you've laid out. Will try and do that in the next couple
of days.



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Thomas Spatzier
Zane Bitter zbit...@redhat.com wrote on 13.11.2013 18:11:18:
 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 13.11.2013 18:14
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/11/13 17:57, Thomas Spatzier wrote:
snip
 
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 
  Please review and provide feedback.

 I believe there's an error in the Explicit dependency section, where it
 says that depends_on is a property. In cfn DependsOn actually exists at
 the same level as Type, Properties, etc.

 resources:
client:
  type: My::Software::SomeClient
  properties:
server: { get_resource: my_server }
params:
  # params ...
  depends_on:
- get_resource: server_process1
- get_resource: server_process2

Good point. I think the reason was that I was tied too much to the provider
template concept, where all properties get passed automatically to the
provider template and in there you can basically do anything that is
necessary, including handling dependencies. But I was missing the fact that
this is a generic concept for all resources.
I'll fix it in the wiki.


 And conceptually this seems correct, because it applies to any kind of
 resource, whereas properties are defined per-resource-type.

 Don't be fooled by our implementation:
 https://review.openstack.org/#/c/44733/

 It also doesn't support a list, but I think we can and should fix that
 in HOT.

Doesn't DependsOn already support lists? I quickly checked the code and it
seems it does:
https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L288


 cheers,
 Zane.



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Zane Bitter

On 13/11/13 18:29, Thomas Spatzier wrote:

It also doesn't support a list, but I think we can and should fix that
in HOT.

Doesn't DependsOn already support lists? I quickly checked the code and it
seems it does:
https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L288


Oh, cool. Looks like Angus added that last month. Thanks, I missed that 
one :)


- ZB



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Steve Baker
On 11/14/2013 06:11 AM, Zane Bitter wrote:
 On 11/11/13 17:57, Thomas Spatzier wrote:

 Hi all,

 I have just posted the following wiki page to reflect a refined proposal
 for HOT software configuration based on discussions at the design summit
 last week. Angus also put a sample up in an etherpad last week, but
 we did
 not have enough time to go thru it in the design session. My write-up is
 based on Angus' sample, actually a refinement, and on discussions we
 had in
 breaks, plus it is trying to reflect all the good input from ML
 discussions
 and Steve Baker's initial proposal.

 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP

 Please review and provide feedback.

 I believe there's an error in the Explicit dependency section, where
 it says that depends_on is a property. In cfn DependsOn actually
 exists at the same level as Type, Properties, etc.

 resources:
   client:
 type: My::Software::SomeClient
 properties:
   server: { get_resource: my_server }
   params:
 # params ...
 depends_on:
   - get_resource: server_process1
   - get_resource: server_process2

 And conceptually this seems correct, because it applies to any kind of
 resource, whereas properties are defined per-resource-type.

 Don't be fooled by our implementation:
 https://review.openstack.org/#/c/44733/

 It also doesn't support a list, but I think we can and should fix that
 in HOT.
This has already been fixed
https://review.openstack.org/#/c/51507/1


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Steve Baker
On 11/14/2013 06:02 AM, Zane Bitter wrote:
 On 13/11/13 01:34, Clint Byrum wrote:
 Excerpts from Angus Salkeld's message of 2013-11-12 15:22:44 -0800:
 IMO it should just be a template/formatted file.
 
 I'd prefer that we have the ability to pull in a chunk of in-line
 template
 as well. Perhaps that is the template resource, I have not thought that
 through. It is not o-k, IMO, to push things off entirely to external
 files/urls/providers, etc. That is just cumbersome and unnecessary for
 a common case which is to deploy two things using the same base config
 with parameters having different values.

 Of course, for my use case of having different topologies reusing bits
 of config, it is perfect to have the reusable bits split into different
 files.

 So, if I understand Angus's get_file suggestion correctly, it parses
 out to the equivalent of inlining the file's contents. So if you
 implement the resource as accepting inline data and add in get_file,
 then you get:
   a) Composability, OR
   b) Everything in one file

 but not both. I think that is probably sufficient, but I would be
 interested in your opinion: is it essential that you be able to
 compose software components defined in the same file?

 Note that the implementation of get_file would also involve
 python-heatclient automagically detecting it and making sure the
 relevant file is uploaded in the files section. So this shouldn't
 create a lot of mental overhead for the user.

 (BTW I think I like this plan.)

Yes, and get_file is a HOT function which is only evaluated where other
functions are evaluated, which is probably a good thing.
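A toy sketch of the client-side half of the get_file idea: walk the parsed
template, collect every {get_file: path} argument so the client can upload
those files alongside the template. This is illustrative only, not
python-heatclient code.

```python
# Toy illustration: recursively gather the arguments of all get_file
# occurrences in a parsed (dict/list) template structure.

def collect_get_files(node, found=None):
    """Return the set of paths referenced by get_file anywhere in `node`."""
    if found is None:
        found = set()
    if isinstance(node, dict):
        for key, value in node.items():
            if key == 'get_file':
                found.add(value)
            else:
                collect_get_files(value, found)
    elif isinstance(node, list):
        for item in node:
            collect_get_files(item, found)
    return found

template = {
    'resources': {
        'server': {
            'type': 'OS::Nova::Server',
            'components': [
                {'component': {'get_file': './my_configs/webconfig.yaml'}},
            ],
        }
    }
}
print(sorted(collect_get_files(template)))  # -> ['./my_configs/webconfig.yaml']
```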



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-12 Thread Zane Bitter

On 12/11/13 14:59, Alex Heneveld wrote:

One minor suggestion is to consider using a special character (eg $)
rather than reserved keywords.  As I understand it the keywords are only
interpreted when they exactly match the value of a key in a map, so it
is already unlikely to be problematic.  However I think it would be more
familiar and clear if we instead used the rule that any item (key or
value) which _starts_ with a $ is interpreted specially.  What those
rules are is TBD but you could for instance write functions -- as either
`$get_param('xxx')` or `$get_param: xxx` -- as well as allow accessing a
parameter directly `$xxx`.


This sounds like a nice idea on the surface. AWS accomplished the same 
thing by namespacing functions with the Fn:: prefix (except for 'Ref', 
bizarrely), and it works fine because the chances are if you randomly 
(maybe in a Metadata section) have a dict key that happens to start with 
Fn:: then you can probably just choose a different name. However, if 
for any reason you have a dict key starting with $ and we interpret 
that specially, then you are basically hosed since you almost certainly 
_needed_ it to actually start with $ for a reason. So -1.


cheers,
Zane.



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-12 Thread Clint Byrum
Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:
 
 Hi all,
 
 I have just posted the following wiki page to reflect a refined proposal
 for HOT software configuration based on discussions at the design summit
 last week. Angus also put a sample up in an etherpad last week, but we did
 not have enough time to go thru it in the design session. My write-up is
 based on Angus' sample, actually a refinement, and on discussions we had in
 breaks, plus it is trying to reflect all the good input from ML discussions
 and Steve Baker's initial proposal.
 
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 
 Please review and provide feedback.

Hi Thomas, thanks for spelling this out clearly.

I am still -1 on anything that specifies the place a configuration is
hosted inside the configuration definition itself. Because configurations
are encapsulated by servers, it makes more sense to me that the servers
(or server groups) would specify their configurations. If changing to a
more logical model is just too hard for TOSCA to adapt to, then I suggest
this be an area that TOSCA differs from Heat. We don't need two models
for communicating configurations to servers, and I'd prefer Heat stay
focused on making HOT template authors' and users' lives better.

I have seen an alternative approach which separates a configuration
definition from a configuration deployer. This at least makes it clear
that the configuration is a part of a server. In pseudo-HOT:

resources:
  WebConfig:
type: OS::Heat::ChefCookbook
properties:
  cookbook_url: https://some.test/foo
  parameters:
endpoint_host:
  type: string
  WebServer:
type: OS::Nova::Server
properties:
  image: webserver
  flavor: 100
  DeployWebConfig:
type: OS::Heat::ConfigDeployer
properties:
  configuration: {get_resource: WebConfig}
  on_server: {get_resource: WebServer}
  parameters:
endpoint_host: {get_attribute: [ WebServer, first_ip]}

I have implementation questions about both of these approaches though,
as it appears they'd have to reach backward in the graph to insert
their configuration, or have a generic bucket for all configuration
to be inserted. IMO that would look a lot like the method I proposed,
which was to just have a list of components attached directly to the
server like this:

components:
  WebConfig:
type: Chef::Cookbook
properties:
  cookbook_url: https://some.test/foo
  parameters:
endpoint_host:
  type: string
resources:
  WebServer:
type: OS::Nova::Server
properties:
  image: webserver
  flavor: 100
components:
  - webconfig:
component: {get_component: WebConfig}
parameters:
  endpoint_host: {get_attribute: [ WebServer, first_ip ]}

Of course, the keen eye will see the circular dependency there with the
WebServer trying to know its own IP. We've identified quite a few use
cases for self-referencing attributes, so that is a separate problem we
should solve independent of the template composition problem.

Anyway, I prefer the idea that parse-time things are called components
and run-time things are resources. I don't need a database entry for
WebConfig above. It is in the template and entirely static, just
sitting there as a reusable chunk for servers to pull in as-needed.

Anyway, I don't feel that we resolved any of these issues in the session
about configuration at the summit. If we did, we did not record them
in the etherpad or the blueprint. We barely got through the prepared
list of requirements and only were able to spell out problems, not
any solutions. So forgive me if I missed something and want to keep on
discussing this.



Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-12 Thread Angus Salkeld

On 12/11/13 10:32 -0800, Clint Byrum wrote:

Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:


Hi all,

I have just posted the following wiki page to reflect a refined proposal
for HOT software configuration based on discussions at the design summit
last week. Angus also put a sample up in an etherpad last week, but we did
not have enough time to go thru it in the design session. My write-up is
based on Angus' sample, actually a refinement, and on discussions we had in
breaks, plus it is trying to reflect all the good input from ML discussions
and Steve Baker's initial proposal.

https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP

Please review and provide feedback.


Hi Thomas, thanks for spelling this out clearly.

I am still -1 on anything that specifies the place a configuration is
hosted inside the configuration definition itself. Because configurations
are encapsulated by servers, it makes more sense to me that the servers
(or server groups) would specify their configurations. If changing to a
more logical model is just too hard for TOSCA to adapt to, then I suggest
this be an area that TOSCA differs from Heat. We don't need two models
for communicating configurations to servers, and I'd prefer Heat stay
focused on making HOT template authors' and users' lives better.

I have seen an alternative approach which separates a configuration
definition from a configuration deployer. This at least makes it clear
that the configuration is a part of a server. In pseudo-HOT:

resources:
 WebConfig:
   type: OS::Heat::ChefCookbook
   properties:
 cookbook_url: https://some.test/foo
 parameters:
   endpoint_host:
 type: string
 WebServer:
   type: OS::Nova::Server
   properties:
 image: webserver
 flavor: 100
 DeployWebConfig:
   type: OS::Heat::ConfigDeployer
   properties:
 configuration: {get_resource: WebConfig}
 on_server: {get_resource: WebServer}
 parameters:
   endpoint_host: {get_attribute: [ WebServer, first_ip]}



This is what Thomas defined, with one optimisation.
- The webconfig is a yaml template.

As you say the component is static - if so why even put it inline in
the template (well that was my thinking, it seems like a template not
really a resource).



I have implementation questions about both of these approaches though,
as it appears they'd have to reach backward in the graph to insert
their configuration, or have a generic bucket for all configuration


Yeah, it does depend on the implementation. If we use Mistral the
agent will need to ask Mistral for the tasks that apply to the server.

$ mistral task-consume \
 --tags=instance_id=$(my_instance_id);stack_id=$(stack_id)



to be inserted. IMO that would look a lot like the method I proposed,
which was to just have a list of components attached directly to the
server like this:

components:
 WebConfig:
   type: Chef::Cookbook
   properties:
 cookbook_url: https://some.test/foo
 parameters:
    endpoint_host:
 type: string
resources:
 WebServer:
   type: OS::Nova::Server
   properties:
 image: webserver
 flavor: 100
   components:
 - webconfig:
   component: {get_component: WebConfig}
   parameters:
 endpoint_host: {get_attribute: [ WebServer, first_ip ]}

I'd change this to:

components:
  - webconfig:
component: {get_file: ./my_configs/webconfig.yaml}
parameters:
  endpoint_host: {get_attribute: [ WebServer, first_ip ]}

This *could* be a shorthand notation, like the volumes property on
AWS instances.



Of course, the keen eye will see the circular dependency there with the
WebServer trying to know its own IP. We've identified quite a few use
cases for self-referencing attributes, so that is a separate problem we
should solve independent of the template composition problem.


(aside) I don't like the idea of self-references as they break the idea
that references are resolved top down. Basically we have to put in
a nasty hack to produce broken behaviour (the first resolution is
bogus and only following resolutions are possibly correct).

In this case just use the deployer to break your circular dep?



Anyway, I prefer the idea that parse-time things are called components
and run-time things are resources. I don't need a database entry for
WebConfig above. It is in the template and entirely static, just
sitting there as a reusable chunk for servers to pull in as-needed.


IMO it should just be a template/formatted file.



Anyway, I don't feel that we resolved any of these issues in the session
about configuration at the summit. If we did, we did not record them
in the etherpad or the blueprint. We barely got through the prepared
list of requirements and only were able to spell out problems, not
any solutions. So forgive me if I missed something and want to keep on
discussing this.
