Sam-  Really nice!

Two thoughts.  First, I think the default interpretation of a word should
probably be a task in the registry rather than an effector.  These can be
extended, and could even be scoped to a catalog bundle/set (so eg "install"
would define how to install something, and could be used in several
blueprints in a group; "provision" probably comes from core Brooklyn
however).  Most other tasks -- such as invoking an effector (or executing
an ssh command or a rest call etc) -- should specify a task type eg `{
type: effector, name: launch }` (optionally passing args in that case).
Some optional shorthands: if an item has a single key, treat that key as
the type (so you could write `{effector: launch}` or `{ssh: service start
mysql}`), and if the item is a list, treat it as a sequence of task
definitions (as you've done).
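
i.e. something like the following, where the first two list items are
equivalent ways of writing the same task (the surrounding
`effectors`/`restart` keys and the task contents are just illustrative,
not an agreed schema):

  effectors:
    restart:
    - { type: effector, name: stop }   # explicit task type
    - effector: stop                   # single-key shorthand for the same task
    - ssh: service start mysql         # single-key shorthand for an ssh task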

Secondly, I had imagined various complex scopes for mutexes to ensure
automatic release, but I wasn't enamored of that approach.  Your mutexed /
subtasks is much cleaner.  There may be times when the semaphore should
survive the task -- so it is worth having the explicit "acquire-semaphore"
task -- but I agree with Svet and you that the normal pattern should be
simpler.  To elaborate on your launch example, something like:

  launch:
  - ssh: yum install app
  - with-semaphore:
      semaphore: $brooklyn:attributeWhenReady("semaphore.launch")
      subtasks:
       - ssh: service start app
       - wait-for-running

I'm not completely sure where the semaphore itself lives -- the above
assumes we add a new config/sensor type similar to "port", so that the
semaphore is guaranteed to be populated as a sensor, but a parent can
specify a shared semaphore as config if desired.
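
Something like this, perhaps (the "semaphore" parameter type below is
hypothetical, by analogy with "port"):

  brooklyn.parameters:
  - name: semaphore.launch
    type: semaphore   # hypothetical type; populated and published as a sensor
  # a parent wanting to share one semaphore across members could instead set eg
  # semaphore.launch: $brooklyn:parent().attributeWhenReady("shared.launch.semaphore")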

("with-semaphore" as a type could be a simple extension of a "try-finally"
task which has a "pre", "main", and "finally" block, with semantics that
"finally" is always invoked, even if main is cancelled or fails ... or
alternatively the semaphore class could be "owner-aware" so if someone else
requests it, it will check whether the owners are still alive, so if the
owner is a task and that task is cancelled, it acts as if the owner had
cleared it)
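
For instance, "with-semaphore" might just be sugar over such a try-finally,
roughly like this (a sketch only, reusing the hypothetical task names from
above):

  - try-finally:
      pre:
      - acquire-semaphore: $brooklyn:attributeWhenReady("semaphore.launch")
      main:
      - ssh: service start app
      - wait-for-running
      finally:
      # always runs, even if main fails or is cancelled
      - release-semaphore: $brooklyn:attributeWhenReady("semaphore.launch")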

Best
Alex


On 11 January 2017 at 16:20, Sam Corbett <[email protected]>
wrote:

> +1 to Alex's core suggestion. It would be a powerful extension to
> Brooklyn. We should not force blueprint authors into arbitrary structures
> that were only a good fit for someone else's scenario in the past.
>
> To lob another suggestion into the mix, I imagine writing:
>
> effectors:
>   # default named phases, not needed normally.
>   start: [provision, install, customise, launch]
>   # launch phase overridden to wrap latter half with mutex handling
>   launch:
>   - ssh-task-1
> - mutexed:
>     subtasks:
>     - ssh-task-2
>     - wait-for-running
>
> This implies that each of the phases of start is another effector and that
> they are configurable objects. The `mutexed` task cleans up properly when
> its subtasks fail.
>
> Sam
>
>
>
> On 11/01/2017 15:55, Svetoslav Neykov wrote:
>
>> Alex,
>>
>> I like the task-based approach and agree it's something we need to push
>> for. I'm not convinced that locking-like behaviour should be represented
>> as tasks though. It brings a procedural flavour to concurrency - for
>> example, in your original example there are separate tasks for acquiring
>> and releasing the lock. They could easily get misplaced, overwritten,
>> forgotten, etc., and it's not clear how release works in case of errors.
>> What I'd prefer is a more declarative approach where the tasks are grouped
>> and the locking requirements are applied to the group. At worst,
>> referencing a start task and optionally an end task (with the default
>> being the parent task ending).
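>>
>> Very roughly, something like this (the keys are purely illustrative):
>>
>>   brooklyn.locks:
>>   - name: launch-mutex
>>     from: launch        # task at which the lock is acquired
>>     until: post-launch  # optional; defaults to the parent task ending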
>>
>> Plugging into the lifecycle of entities has other use-cases. No matter
>> how an entity is defined - whether using the current monolithic START
>> effector or one composed of smaller tasks - there's no way to be notified
>> when its tasks get executed. Some concrete examples: the
>> "SystemServiceEnricher", which looks for the "launch" task and can be
>> applied on any "SoftwareProcessEntity"; an entity which needs to do some
>> cleanup based on the shutdown of another entity (DNS blueprints); latching
>> at arbitrary points during the entity lifecycle; etc.
>>
>> One of the alternatives I mention in the original email (Effector
>> execution notifications) is a step in the direction of the "task based"
>> approach. It's still in Java, but we are splitting the monolithic
>> effectors into smaller building blocks, which could in future be reused
>> in YAML.
>>
>> So to summarise - +1 for tasks as building blocks, but we still need
>> visibility into the executing blocks.
>>
>> PS - as a near-term option if needed we could extend SoftwareProcess
>>> LATCH to do something special if the config/sensor it is given is a
>>> "Semaphore" type
>>>
>> What do you think the behaviour should be here - releasing the semaphore
>> after the corresponding step completes, or at the end of the wrapping
>> effector? I think this should be defined by the blueprint author, and
>> making it configurable adds even more complexity. Instead we could invest
>> in developing the above functionality.
>>
>> Svet.
>>
>>
>>
>> On 11.01.2017, at 16:25, Alex Heneveld <[email protected]> wrote:
>>>
>>>
>>> svet, all,
>>>
>>> lots of good points.
>>>
>>> the idea of "lifecycle phases" in our software process entities has
>>> grown to be something of a monster, in my opinion.  they started as a small
>>> set of conventions but they've grown to the point where they're the primary
>>> hook for yaml and people are wanting "pre.pre.install".
>>>
>>> it's (a) misguided since for any value of N, N phases is too few and (b)
>>> opaque coming from a java superclass.
>>>
>>> a lot of things rely on the current SoftwareProcess so not saying we
>>> kill it, but for the community it would be much healthier (in terms of
>>> consumers) to allow multiple strategies and especially focus on *the
>>> reusability of tasks in YAML* -- eg "provision" or "install template files"
>>> or "create run dir" -- so people can write rich blueprints that don't
>>> require magic lifecycle phases from superclasses.
>>>
>>> the "init.d" numbered approach svet cites is one way these are wired
>>> together, with extensions (child types) able to insert new numbered steps
>>> or override with the number and label.  but that would be one strategy,
>>> there might be simpler ones that are just a list, or other sequencing
>>> strategies like precondition/action/postcondition or
>>> runs-before/runs-after (where to extend something, you can say
>>> `my-pre-customize: { do: something, run-before: launch, run-after:
>>> customize }`).
>>>
>>> we're not sure exactly how that would look, but wrt mutex logic the idea
>>> that adjuncts plug into the entity lifecycles feels wrong, like it's
>>> pushing for more standardisation of lifecycle phases where things can plug
>>> in.   whereas with a task approach we can have effector definitions being
>>> explicit about synchronization, which i think is better.  and if they want
>>> to make that configurable/pluggable they can do so and be explicit about
>>> how that is done (for instance it might take a config param -- possibly
>>> one which is an entity, or a child or relation -- and call an effector on
>>> that if set).
>>>
>>> concretely in the examples i'm saying instead of ENRICHER APPROACH
>>>
>>>         memberSpec:
>>>           $brooklyn:entitySpec:
>>>             type: cluster-member
>>>             brooklyn.enrichers:
>>>             - type: org.apache.brooklyn.enricher.stock.AquirePermissionToProceed
>>>               brooklyn.config:
>>>                 stage: post.provisioning
>>>                 # lifecycle stage "post.provisioning" is part of cluster-member and
>>>                 # the enricher above understands those stages
>>>
>>> we concentrate on a way to define tasks and extend them, so that we
>>> could instead have a TASK APPROACH:
>>>
>>>         memberSpec:
>>>           $brooklyn:entitySpec:
>>>             type: cluster-member
>>>             effectors:
>>>               start:
>>>                 035-pre-launch-get-semaphore: { acquire-semaphore: ... }
>>>                 # assume 040-launch is defined in the parent "start" yaml defn
>>>                 # using a new hypothetical "initd" yaml-friendly task factory,
>>>                 # acquire-semaphore is a straightforward task;
>>>                 # scope can come from parent/ancestor task,
>>>                 # also wanted to ensure mutex is not kept on errors
>>>
>>> or
>>>
>>>         memberSpec:
>>>           $brooklyn:entitySpec:
>>>             type: cluster-member
>>>             effectors:
>>>               start:
>>>                 my-pre-launch:
>>>                   task:  { acquire-semaphore: ... }
>>>                   run-before: launch
>>>                   run-after: customize
>>>                   # launch and customize defined in the parent "start" yaml defn,
>>>                   # using a new hypothetical "ordered-labels" yaml-friendly task factory;
>>>                   # acquire-semaphore and scope are as in the "initd" example
>>>
>>>
>>> both approaches need a little bit of time to get your head around the
>>> new concepts but the latter approach is much more powerful and general.
>>> there's a lot TBD but the point i'm making is that *if we make tasks easier
>>> to work with in yaml, it becomes more natural to express concurrency
>>> control as tasks*.
>>>
>>>
>>> PS - as a near-term option if needed we could extend SoftwareProcess
>>> LATCH to do something special if the config/sensor it is given is a
>>> "Semaphore" type
>>>
>>> best
>>> alex
>>>
>>>
>
