On 10/10/13 11:59 +0400, Stan Lagun wrote:
This raises a number of questions:

1. What about conditional dependencies? Like config4 depends on (config1 AND
config2) OR config3.

We have the AND, but not an OR. To depend on two resources you just
have two references to the two resources.
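As a sketch in the syntax used later in this thread (resource names are made up, and whether depends_on takes a list is an assumption):

```yaml
# config3 will not be applied until both config1 and config2 are done,
# simply because it references both of them (implicit AND).
config3:
  type: OS::SoftwareConfig::Puppet
  hosted_on: server1
  depends_on: [config1, config2]
```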

2. How do I pass values between configs? For example config1 requires a value
from user input and config2 needs an output value obtained from applying config1.

{Fn::GetAtt: [config1, the_name_of_the_attribute]}
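Putting both halves together as a sketch (the params property and the attribute names are made up, following the syntax used elsewhere in this thread):

```yaml
parameters:
  db_password:
    type: string

resources:
  config1:
    type: OS::SoftwareConfig::Puppet
    hosted_on: server1
    params:
      password: {Ref: db_password}          # value from user input

  config2:
    type: OS::SoftwareConfig::Puppet
    hosted_on: server2
    params:
      # output value produced by applying config1
      db_endpoint: {Fn::GetAtt: [config1, endpoint]}
```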

3. How would you do error handling? For example config3 on server3 requires
config1 to be applied on server1 and config2 on server2. Suppose that there
was an error while applying config2 (and config1 succeeded). How do I
specify a reaction to that? Maybe I then need to try to apply config4 to
server2 and continue, or maybe just roll everything back.

We currently have no "on_error" but it is not out of scope. The
current action is either to rollback the stack or leave it in the
failed state (depending on what you choose).
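For context, that choice is made when the stack is created; a sketch of a stack-create request body (disable_rollback is the relevant field in the Heat ReST API, the other values here are made up):

```json
{
  "stack_name": "mystack",
  "template_url": "http://example.com/template.yaml",
  "disable_rollback": false,
  "timeout_mins": 60
}
```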

4. How do these config dependencies play with nested stacks and resources like
LoadBalancer that create such stacks? How do I specify that myConfig
depends on HA proxy being configured if that config was declared in nested
stack that is generated by resource's Python code and is not declared in my
HOT template?

It is normally based on the actual data/variable that you are
dependent on:
loadbalancer: depends on the autoscaling group's instance list
(actually in the loadbalancer config there would be a GetAtt: [scalegroup, InstanceList])

Then if you want to depend on that config you could depend on an
attribute of that resource that changes on reconfigure.

  type: OS::SoftwareConfig::Ssh
  script: {GetAtt: [scalegroup, InstanceList]}
  hosted_on: loadbalancer

  type: OS::SoftwareConfig::Ssh
  script: {GetAtt: [config1, ConfigAppliedCount]}
  hosted_on: somewhere_else

I am sure we could come up with some better syntax for this. But
the logic seems easily possible to me.

As far as nested stacks go: you just need an output to be usable
externally - basically design your API.
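For instance, the nested template could publish the HA proxy state through its outputs section (all names here are illustrative):

```yaml
# Nested stack template: expose something dependants can GetAtt on.
outputs:
  ha_proxy_applied_count:
    description: Changes whenever the HA proxy config is (re)applied
    value: {Fn::GetAtt: [ha_proxy_config, ConfigAppliedCount]}
```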

5. The solution is not generic. For example I want to write HOT template
for my custom load-balancer and a scalable web-servers group. Load balancer
config depends on all configs of web-servers. But web-servers are created
dynamically (autoscaling). That means the dependency graph also needs to be
dynamically modified. But if you explicitly list config names instead of
something like "depends on all configs of web-farm X" you have no way to
describe such a rule. In other words we need a generic dependency, not just
a dependency on a particular config.

Why wouldn't just depending on the scaling group be enough? If it needs
to be updated it will update everything within the group before progressing
to the dependants.

6. What would you do on STACK UPDATE that modifies the dependency graph?

Configs and their dependencies would be handled like any other resources.

What we normally do is go through the resources and see what can be updated:
- without replacement
- needs deleting
- is new
- requires updating

Each resource type can define what will require replacing or not.
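The four-way classification above can be sketched in plain Python; this is only an illustration of the logic, not Heat's implementation, and replace_on_change (the per-type set of properties whose change forces replacement) is a made-up stand-in for what each resource type defines:

```python
# Classify each resource in a stack update into the buckets above.
# old/new map resource name -> {"type": ..., "properties": {...}}.
# replace_on_change maps resource type -> set of property names whose
# change forces replacement (each resource type defines this itself).

def classify_update(old, new, replace_on_change):
    plan = {"unchanged": [], "update": [], "replace": [], "delete": [], "create": []}
    for name in old:
        if name not in new:
            plan["delete"].append(name)        # removed from the template
    for name, res in new.items():
        if name not in old:
            plan["create"].append(name)        # new in the template
            continue
        prev = old[name]
        if prev == res:
            plan["unchanged"].append(name)
        elif prev["type"] != res["type"]:
            plan["replace"].append(name)       # type change always replaces
        else:
            changed = {k for k in set(prev["properties"]) | set(res["properties"])
                       if prev["properties"].get(k) != res["properties"].get(k)}
            if changed & replace_on_change.get(res["type"], set()):
                plan["replace"].append(name)   # a property forced replacement
            else:
                plan["update"].append(name)    # updatable in place
    return plan
```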

I think we can achieve what you want with some small improvements to
the HOT format and with some new resource types - IMHO.


On Thu, Oct 10, 2013 at 4:25 AM, Angus Salkeld <asalk...@redhat.com> wrote:

On 09/10/13 19:31 +0100, Steven Hardy wrote:

On Wed, Oct 09, 2013 at 06:59:22PM +0200, Alex Rudenko wrote:

Hi everyone,

I've read this thread and I'd like to share some thoughts. In my opinion,
workflows (which run on VMs) can be integrated with heat templates as follows:

   1. workflow definitions should be defined separately and processed by
   stand-alone workflow engines (Chef, Puppet, etc.).

I agree, and I think this is the direction we're headed with the
software-config blueprints - essentially we should end up with some new
Heat *resources* which encapsulate software configuration.


I think we need a software-configuration-aas sub-project that knows
how to take puppet/chef/salt/... config and deploy it. Then Heat just
has Resources for these (OS::SoftwareConfig::Puppet).
We should even move our WaitConditions and Metadata over to that
yet-to-be-made service so that Heat is totally clean of software config.

How would this solve ordering issues:

   config1:
     type: OS::SoftwareConfig::Puppet
     hosted_on: server1

   config2:
     type: OS::SoftwareConfig::Puppet
     hosted_on: server1
     depends_on: config3

   config3:
     type: OS::SoftwareConfig::Puppet
     hosted_on: server2
     depends_on: config1

   server1:
     type: OS::Nova::Server

   server2:
     type: OS::Nova::Server

Heat knows all about ordering:
It starts the resources:
server1, server2

There is the normal contract in the client:
we post the config to software-config-service
and we wait for the state == ACTIVE (when the config is applied)
before progressing to a resource that is dependent on it.
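That contract (post the config, then block until the service reports it applied) might look roughly like this; the client object and its get_state method are hypothetical:

```python
import time

def wait_for_config(client, config_id, timeout=300, poll=5):
    """Block until the software-config service reports the config applied.

    `client` is a hypothetical software-config-service client with
    get_state(config_id) -> "IN_PROGRESS" | "ACTIVE" | "FAILED".
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = client.get_state(config_id)
        if state == "ACTIVE":
            return True          # safe to start dependant resources
        if state == "FAILED":
            raise RuntimeError("config %s failed to apply" % config_id)
        time.sleep(poll)         # not applied yet, keep polling
    raise TimeoutError("config %s not applied in time" % config_id)
```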


IMO there is some confusion around the scope of HOT, we should not be
adding functionality to it which already exists in established config
management tools IMO, instead we should focus on better integration with
existing tools at the resource level, and identifying template interfaces
which require more flexibility (for example serialization primitives)

    2. the HOT resources should reference workflows which they require,
   specifying a type of workflow and the way to access a workflow definition.
   The workflow definition might be provided along with the HOT template.

So again, I think this actually has very little to do with HOT.  The
*Heat* resources may define software configuration, or possibly some sort
of workflow, which is acted upon by $thing which is not Heat.

So in the example provided by the OP, maybe you'd have a Murano resource,
which knows how to define the input to the Murano API, which might trigger
workflow type actions to happen in the Murano service.

    3. Heat should treat the orchestration templates as transactions (i.e.
   Heat should be able to roll back in two cases: 1) if something goes wrong
   during processing of an orchestration workflow 2) when a stand-alone
   workflow engine reports an error during processing of a workflow associated
   with a resource)

So we already have the capability for resources to receive signals, which
would allow (2) in the asynchronous case.  But it seems to me that this is
still a serialization problem, i.e. a synchronous case, therefore (2) is
part of (1).


- Heat stack create starts
- Murano resource created (CREATE IN_PROGRESS state)
- Murano workflow stuff happens, signals Heat with success/failure
- Murano resource transitions to either COMPLETE or FAILED state
- If a FAILED state happened, e.g. on update, we can roll back to the
 previous stack definition (this is already possible in Heat)

    4. Heat should expose an API which enables basic communication between
   running workflows. Additionally, Heat should provide an API
   that allows workflows to specify whether they completed successfully or
   not. The reference to these APIs should be passed to the workflow engine
   that is responsible for executing workflows on VMs.

I personally don't think this is in scope for Heat.  We already have an API
which exposes the status of stacks and resources.  Exposing some different
API which describes a workflow implemented by a specific subset of resource
types makes no sense to me.

Pros of each point:
1 & 2 - keeps Heat simple and gives a possibility to choose the best
workflows and engines among available ones.
3 - adds some kind of all-or-nothing semantics improving the control and
awareness of what's going on inside VMs.
4 - allows workflow synchronization and communication through Heat API.
Provides the error reporting mechanism for workflows. If a workflow does
not need this functionality, it can ignore it.

IMHO (4) is very much a step too far, and is not well aligned with the
current interfaces provided by Heat.


 These thoughts might show some gaps in my understanding of how Heat works,
but I would like to share them anyway.

You've raised some good points, thanks.

I'm really keen to further discuss the use-cases here, but if possible, it
would be helpful if folks can describe their requirements in less abstract
terms, and with reference to our existing interfaces and template model.
That way we can start defining what is actually missing to support these use-cases.

So far I see the following emerging as definite requirements:
- Better/more flexible native serialization interfaces (possibly HOT changes)
- Better/more flexible *resources* which encapsulate software configuration
 on instances, probably with the flexibility of applying more than one
 config to an instance (not necessarily related to any HOT changes at all)



OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org


Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
