[openstack-dev] [Mistral] Workflow on-finish

2014-08-27 Thread W Chan
Renat,

It will be helpful to perform a callback on completion of the async
workflow.  Can we add on-finish to the workflow spec and when workflow
completes, runs task(s) defined in the on-finish section of the spec?  This
will allow the workflow author to define how the callback is to be done.

Here's the bp link.
https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-on-finish

Thanks.
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Workflow on-finish

2014-08-28 Thread W Chan
Is there an example somewhere that I can reference on how to define this
special task?  Thanks!


On Wed, Aug 27, 2014 at 10:02 PM, Renat Akhmerov rakhme...@mirantis.com
wrote:

 Right now, you can just include a special task into a workflow that, for
 example, sends an HTTP request to whatever you need to notify about
 workflow completion. Although, I see it rather as a hack (not so horrible
 though).

 Renat Akhmerov
 @ Mirantis Inc.



 On 28 Aug 2014, at 12:01, Renat Akhmerov rakhme...@mirantis.com wrote:

 There are two blueprints that I suppose could be used for this purpose:
 https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-http
 https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-amqp

 So my opinion:

- This functionality should be orthogonal to what we configure in DSL.
- The mechanism of listeners is more generic and would cover your
requirement as a special case.
- At this point, I see that we may want to implement a generic
transport-agnostic listener mechanism internally (not that hard a task) and
then implement the required transport-specific plugins for it.


 Inviting everyone to discussion.

 Thanks

 Renat Akhmerov
 @ Mirantis Inc.



 On 28 Aug 2014, at 06:17, W Chan m4d.co...@gmail.com wrote:

 Renat,

 It will be helpful to perform a callback on completion of the async
 workflow.  Can we add on-finish to the workflow spec and when workflow
 completes, runs task(s) defined in the on-finish section of the spec?  This
 will allow the workflow author to define how the callback is to be done.

 Here's the bp link.
 https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-on-finish

 Thanks.
 Winson

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-24 Thread W Chan
Renat,

Regarding your comments on change https://review.openstack.org/#/c/75609/,
I don't think the port to oslo.messaging is just a swap from pika to
oslo.messaging.  OpenStack services, as I understand, are usually implemented
as an RPC client/server over a messaging transport.  Sync vs. async calls
are done via the RPC client's call and cast respectively.  The messaging
transport is abstracted, and concrete implementations are provided via
drivers/plugins.  So the architecture of the executor, if ported to
oslo.messaging, needs to include a client, a server, and a transport.  The
consumer (in this case the mistral engine) instantiates an instance of the
client for the executor, makes the method call to handle task, the client
then sends the request over the transport to the server.  The server picks
up the request from the exchange and processes the request.  If cast
(async), the client side returns immediately.  If call (sync), the client
side waits for a response from the server over a reply_q (a unique queue
for the session in the transport).  Also, oslo.messaging allows versioning
in the message. Major version change indicates API contract changes.  Minor
version indicates backend changes but with API compatibility.
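
To illustrate (a minimal sketch only, not Mistral code; the topic, endpoint,
and method names are made up), the client/server pattern with call vs. cast in
oslo.messaging looks roughly like this:

    from oslo.config import cfg
    from oslo import messaging


    class ExecutorEndpoint(object):
        # Server-side endpoint; incoming RPC messages dispatch to its methods.
        target = messaging.Target(version='1.0')

        def handle_task(self, ctxt, task_id, action_spec):
            # ... run the action here ...
            return {'task_id': task_id, 'state': 'SUCCESS'}


    def run_server():
        transport = messaging.get_transport(cfg.CONF)  # rabbit by default
        target = messaging.Target(topic='mistral_executor', server='executor_1')
        server = messaging.get_rpc_server(transport, target, [ExecutorEndpoint()],
                                          executor='eventlet')
        server.start()
        server.wait()


    def run_client():
        transport = messaging.get_transport(cfg.CONF)
        target = messaging.Target(topic='mistral_executor', version='1.0')
        client = messaging.RPCClient(transport, target)
        # cast() is fire-and-forget (async); call() blocks on a reply queue (sync).
        client.cast({}, 'handle_task', task_id='1', action_spec={'name': 'std.echo'})
        return client.call({}, 'handle_task', task_id='2', action_spec={'name': 'std.echo'})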

So, where I'm headed with this change...  I'm implementing the basic
structure/scaffolding for the new executor service using oslo.messaging
(default transport with rabbit).  Since the whole change will take a few
rounds, I don't want to disrupt any changes that the team is making at the
moment and so I'm building the structure separately.  I'm also adding
versioning (v1) in the module structure to anticipate any versioning
changes in the future.   I expect the change request will lead to some
discussion as we are doing here.  I will migrate the core operations of the
executor (handle_task, handle_task_error, do_task_action) to the server
component when we agree on the architecture and switch the consumer
(engine) to use the new RPC client for the executor instead of sending the
message to the queue over pika.  Also, the launcher for
./mistral/cmd/task_executor.py will change as well in a subsequent round.  An
example launcher is here:
https://github.com/uhobawuhot/interceptor/blob/master/bin/interceptor-engine.
 The interceptor project here is what I use to research how oslo.messaging
works.  I hope this is clear. The blueprint only changes how the request
and response are being transported.  It shouldn't change how the executor
currently works.

Finally, can you clarify the difference between the local vs. scalable engine?
 I personally prefer not to explicitly name the engine scalable because
this requirement should be met by the engine by default and we do not need to
explicitly state/separate that.  But if this is a roadblock for the change,
I can put the scalable structure back in the change to move this forward.

Thanks.
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-24 Thread W Chan
As I understand, the local engine runs the task immediately whereas the
scalable engine sends it over the message queue to one or more executors.

In what circumstances would we see a Mistral user using a local engine
(other than testing) instead of the scalable engine?

If we are keeping the local engine, can we move the abstraction to the
executor instead, having drivers for a local executor and remote executor?
 The message flow from the engine to the executor would be consistent; it's
just a matter of where the request will be processed.

And since we are porting to oslo.messaging, there's already a fake driver
that allows for an in-process Queue for local execution.  The local
executor can be a derivative of that fake driver for non-testing purposes.
 And if we don't want to use an in-process queue here, to avoid the
complexity, we can have the client-side module of the executor determine
whether to dispatch to a local executor vs. make an RPC call to a remote executor.
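
As a rough sketch of that option (illustrative only; fake:// selects
oslo.messaging's in-memory driver, and sharing one transport object is an
assumption of this approach):

    from oslo.config import cfg
    from oslo import messaging

    # Both the RPC server (local executor) and the RPC client (engine) must be
    # built from this same transport object so they share the same in-process
    # queues.
    shared_transport = messaging.get_transport(cfg.CONF, url='fake://')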

Thoughts?

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Plugin architecture for custom actions?

2014-02-24 Thread W Chan
Will Mistral be supporting custom actions developed by users?  If so,
should the Actions module be refactored to individual plugins with a
dynamic process for action type mapping/lookup?
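
As a sketch of what a dynamic action type mapping/lookup could look like
(purely hypothetical; the eventual plugin mechanism could just as well use
entry points via stevedore):

    # Hypothetical action registry; names and classes below are made up.
    ACTION_TYPES = {}


    def register_action(action_type):
        def decorator(cls):
            ACTION_TYPES[action_type] = cls
            return cls
        return decorator


    @register_action('std.http')
    class HTTPAction(object):
        def __init__(self, **params):
            self.params = params

        def run(self):
            # issue the HTTP request here
            pass


    def create_action(action_type, **params):
        # Dynamic lookup: resolve the action class by its registered type name.
        return ACTION_TYPES[action_type](**params)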

Thanks.
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-25 Thread W Chan
Thanks.  I will do that today and follow up with a description of the
proposal.


On Mon, Feb 24, 2014 at 10:21 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

 In process is fine to me.

 Winson, please register a blueprint for this change and put the link in
 here so that everyone can see what it all means exactly. My feeling is that
 we can approve and get it done pretty soon.

 Renat Akhmerov
 @ Mirantis Inc.



 On 25 Feb 2014, at 12:40, Dmitri Zimine d...@stackstorm.com wrote:

  I agree with Winson's points. Inline.
 
  On Feb 24, 2014, at 8:31 PM, Renat Akhmerov rakhme...@mirantis.com
 wrote:
 
 
  On 25 Feb 2014, at 07:12, W Chan m4d.co...@gmail.com wrote:
 
  As I understand, the local engine runs the task immediately whereas
 the scalable engine sends it over the message queue to one or more
 executors.
 
  Correct.
 
   Note that local is confusing here; in process would reflect what it
  is doing better.
 
 
  In what circumstances would we see a Mistral user using a local engine
 (other than testing) instead of the scalable engine?
 
   Yes, mostly testing, but it could also be used for demonstration purposes
  or in environments where installing RabbitMQ is not desirable.
 
  If we are keeping the local engine, can we move the abstraction to the
 executor instead, having drivers for a local executor and remote executor?
  The message flow from the engine to the executor would be consistent, it's
 just where the request will be processed.
 
  I think I get the idea and it sounds good to me. We could really have
 executor in both cases but the transport from engine to executor can be
 different. Is that what you're suggesting? And what do you call driver here?
 
  +1 to abstraction to the executor, indeed the local and remote engines
 today differ only by how they invoke executor, e.g. transport / driver.
 
 
  And since we are porting to oslo.messaging, there's already a fake
 driver that allows for an in process Queue for local execution.  The local
 executor can be a derivative of that fake driver for non-testing purposes.
  And if we don't want to use an in process queue here to avoid the
 complexity, we can have the client side module of the executor determine
 whether to dispatch to a local executor vs. RPC call to a remote executor.
 
  Yes, that sounds interesting. Could you please write up some etherpad
 with details explaining your idea?
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-25 Thread W Chan
Sure.  Let me give this some thoughts and work with you separately.  Before
we speak up, we should have a proposal for discussion.


On Mon, Feb 24, 2014 at 9:53 PM, Dmitri Zimine d...@stackstorm.com wrote:

 Winson,

 While you're looking into this and working on the design, may be also
 think through other executor/engine communications.

 We talked about executor communicating to engine over 3 channels (DB,
 REST, RabbitMQ) which I wasn't happy about ;) and put it off for some time.
 May be it can be rationalized as part of your design.

 DZ.

 On Feb 24, 2014, at 11:21 AM, W Chan m4d.co...@gmail.com wrote:

 Renat,

 Regarding your comments on change https://review.openstack.org/#/c/75609/,
 I don't think the port to oslo.messaging is just a swap from pika to
 oslo.messaging.  OpenStack services as I understand is usually implemented
 as an RPC client/server over a messaging transport.  Sync vs async calls
 are done via the RPC client call and cast respectively.  The messaging
 transport is abstracted and concrete implementation is done via
 drivers/plugins.  So the architecture of the executor if ported to
 oslo.messaging needs to include a client, a server, and a transport.  The
 consumer (in this case the mistral engine) instantiates an instance of the
 client for the executor, makes the method call to handle task, the client
 then sends the request over the transport to the server.  The server picks
 up the request from the exchange and processes the request.  If cast
 (async), the client side returns immediately.  If call (sync), the client
 side waits for a response from the server over a reply_q (a unique queue
 for the session in the transport).  Also, oslo.messaging allows versioning
 in the message. Major version change indicates API contract changes.  Minor
 version indicates backend changes but with API compatibility.

 So, where I'm headed with this change...  I'm implementing the basic
 structure/scaffolding for the new executor service using oslo.messaging
 (default transport with rabbit).  Since the whole change will take a few
 rounds, I don't want to disrupt any changes that the team is making at the
 moment and so I'm building the structure separately.  I'm also adding
 versioning (v1) in the module structure to anticipate any versioning
 changes in the future.   I expect the change request will lead to some
 discussion as we are doing here.  I will migrate the core operations of the
 executor (handle_task, handle_task_error, do_task_action) to the server
 component when we agree on the architecture and switch the consumer
 (engine) to use the new RPC client for the executor instead of sending the
 message to the queue over pika.  Also, the launcher for
 ./mistral/cmd/task_executor.py will change as well in subsequent round.  An
 example launcher is here
 https://github.com/uhobawuhot/interceptor/blob/master/bin/interceptor-engine.
  The interceptor project here is what I use to research how oslo.messaging
 works.  I hope this is clear. The blueprint only changes how the request
 and response are being transported.  It shouldn't change how the executor
 currently works.

 Finally, can you clarify the difference between local vs scalable engine?
  I personally do not prefer to explicitly name the engine scalable because
 this requirement should be in the engine by default and we do not need to
 explicitly state/separate that.  But if this is a roadblock for the change,
 I can put the scalable structure back in the change to move this forward.

 Thanks.
 Winson

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-25 Thread W Chan
The following link is the google doc of the proposed engine/executor
message flow architecture.
https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing

The diagram on the right is the scalable engine, where one or more engines
send requests over a transport to one or more executors.  The executor
client, transport, and executor server follow the RPC client/server design
pattern (https://github.com/openstack/oslo.messaging/tree/master/oslo/messaging/rpc)
in oslo.messaging.

The diagram represents the local engine.  In reality, it's following the
same RPC client/server design pattern.  The only difference is that it'll
be configured to use a fake RPC backend driver
(https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_drivers/impl_fake.py).
The fake driver uses in-process queues
(http://docs.python.org/2/library/queue.html#module-Queue) shared
between a pair of engine and executor.

The following are the stepwise changes I will make.
1) Keep the local and scalable engine structure intact.  Create the
Executor Client at ./mistral/engine/scalable/executor/client.py.  Create
the Executor Server at ./mistral/engine/scalable/executor/service.py and
implement in it the task operations currently under
./mistral/engine/scalable/executor/executor.py.  Delete
./mistral/engine/scalable/executor/executor.py.  Modify the launcher
./mistral/cmd/task_executor.py.  Modify ./mistral/engine/scalable/engine.py
to use the Executor Client instead of sending the message directly to
rabbit via pika.  The sum of this is an atomic change that keeps the existing
structure without breaking the code.
2) Remove the local engine.
https://blueprints.launchpad.net/mistral/+spec/mistral-inproc-executor
3) Implement versioning for the engine.
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-versioning
4) Port abstract engine to use oslo.messaging and implement the engine
client, engine server, and modify the API layer to consume the engine
client.
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
.
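
To make step 1 concrete, a hedged sketch of what the Executor Client in
./mistral/engine/scalable/executor/client.py might look like (topic and
method names are assumptions, not the final code):

    from oslo import messaging


    class ExecutorClient(object):
        # Thin RPC wrapper the engine uses instead of publishing to rabbit via pika.

        def __init__(self, transport):
            target = messaging.Target(topic='mistral_executor', version='1.0')
            self._client = messaging.RPCClient(transport, target)

        def handle_task(self, ctxt, task):
            # cast(): the engine does not wait for the executor to finish the task.
            self._client.cast(ctxt, 'handle_task', task=task)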

Winson


On Mon, Feb 24, 2014 at 8:07 PM, Renat Akhmerov rakhme...@mirantis.com wrote:


 On 25 Feb 2014, at 02:21, W Chan m4d.co...@gmail.com wrote:

 Renat,

 Regarding your comments on change https://review.openstack.org/#/c/75609/,
 I don't think the port to oslo.messaging is just a swap from pika to
 oslo.messaging.  OpenStack services as I understand is usually implemented
 as an RPC client/server over a messaging transport.  Sync vs async calls
 are done via the RPC client call and cast respectively.  The messaging
 transport is abstracted and concrete implementation is done via
 drivers/plugins.  So the architecture of the executor if ported to
 oslo.messaging needs to include a client, a server, and a transport.  The
 consumer (in this case the mistral engine) instantiates an instance of the
 client for the executor, makes the method call to handle task, the client
 then sends the request over the transport to the server.  The server picks
 up the request from the exchange and processes the request.  If cast
 (async), the client side returns immediately.  If call (sync), the client
 side waits for a response from the server over a reply_q (a unique queue
 for the session in the transport).  Also, oslo.messaging allows versioning
 in the message. Major version change indicates API contract changes.  Minor
 version indicates backend changes but with API compatibility.


 My main concern about this patch is not related with messaging
 infrastructure. I believe you know better than me how it should look like.
 I'm mostly concerned with the way of making changes you chose. From my
 perspective, it's much better to make atomic changes where every changes
 doesn't affect too much in existing architecture. So the first step could
 be to change pika to oslo.messaging with minimal structural changes without
 introducing versioning (could be just TODO comment saying that the
 framework allows it and we may want to use it in the future, to be decide),
 without getting rid of the current engine structure (local, scalable). Some
 of the things in the file structure and architecture came from the
 decisions made by many people and we need to be careful about changing them.


 So, where I'm headed with this change...  I'm implementing the basic
 structure/scaffolding for the new executor service using oslo.messaging
 (default transport with rabbit).  Since the whole change will take a few
 rounds, I don't want to disrupt any changes that the team is making at the
 moment and so I'm building the structure separately.  I'm also adding
 versioning (v1) in the module structure to anticipate any versioning
 changes in the future.   I expect the change request will lead to some
 discussion as we are doing here.  I will migrate the core operations of the
 executor (handle_task, handle_task_error, do_task_action) to the server
 component when we agree on the architecture and switch the consumer

Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-26 Thread W Chan
Thanks.  I'll start making the changes.  The other transports currently
implemented in oslo.messaging are located at
https://github.com/openstack/oslo.messaging/tree/master/oslo/messaging/_drivers,
prefixed with impl_.  There are qpid and zmq.


On Wed, Feb 26, 2014 at 12:03 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Winson, nice job!

 Now it totally makes sense to me. You're good to go with this unless
 others have objections.

 Just one technical dummy question (sorry, I'm not yet familiar with
  oslo.messaging): in your picture you have Transport, so what specifically can
  it be other than RabbitMQ?

 Renat Akhmerov
 @ Mirantis Inc.



 On 26 Feb 2014, at 14:30, Nikolay Makhotkin nmakhot...@mirantis.com
 wrote:

 Looks good. Thanks, Winson!

 Renat, What do you think?


 On Wed, Feb 26, 2014 at 10:00 AM, W Chan m4d.co...@gmail.com wrote:

 The following link is the google doc of the proposed engine/executor
 message flow architecture.
 https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing

 The diagram on the right is the scalable engine where one or more engine
 sends requests over a transport to one or more executors.  The executor
 client, transport, and executor server follows the RPC client/server design
 pattern (https://github.com/openstack/oslo.messaging/tree/master/oslo/messaging/rpc)
 in oslo.messaging.

 The diagram represents the local engine.  In reality, it's following the
 same RPC client/server design pattern.  The only difference is that it'll
 be configured to use a fake RPC backend driver
 (https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_drivers/impl_fake.py).
 The fake driver uses in process
 queues (http://docs.python.org/2/library/queue.html#module-Queue) shared
 between a pair of engine and executor.

 The following are the stepwise changes I will make.
 1) Keep the local and scalable engine structure intact.  Create the
 Executor Client at ./mistral/engine/scalable/executor/client.py.  Create
 the Executor Server at ./mistral/engine/scalable/executor/service.py and
 implement the task operations under
 ./mistral/engine/scalable/executor/executor.py.  Delete
 ./mistral/engine/scalable/executor/executor.py.  Modify the launcher
 ./mistral/cmd/task_executor.py.  Modify ./mistral/engine/scalable/engine.py
 to use the Executor Client instead of sending the message directly to
 rabbit via pika.  The sum of this is the atomic change that keeps existing
 structure and without breaking the code.
 2) Remove the local engine.
 https://blueprints.launchpad.net/mistral/+spec/mistral-inproc-executor
 3) Implement versioning for the engine.
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-versioning
 4) Port abstract engine to use oslo.messaging and implement the engine
 client, engine server, and modify the API layer to consume the engine
 client.
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
 .

 Winson


 On Mon, Feb 24, 2014 at 8:07 PM, Renat Akhmerov 
 rakhme...@mirantis.com wrote:


 On 25 Feb 2014, at 02:21, W Chan m4d.co...@gmail.com wrote:

 Renat,

 Regarding your comments on change
 https://review.openstack.org/#/c/75609/, I don't think the port to
 oslo.messaging is just a swap from pika to oslo.messaging.  OpenStack
 services as I understand is usually implemented as an RPC client/server
 over a messaging transport.  Sync vs async calls are done via the RPC
 client call and cast respectively.  The messaging transport is abstracted
 and concrete implementation is done via drivers/plugins.  So the
 architecture of the executor if ported to oslo.messaging needs to include a
 client, a server, and a transport.  The consumer (in this case the mistral
 engine) instantiates an instance of the client for the executor, makes the
 method call to handle task, the client then sends the request over the
 transport to the server.  The server picks up the request from the exchange
 and processes the request.  If cast (async), the client side returns
 immediately.  If call (sync), the client side waits for a response from the
 server over a reply_q (a unique queue for the session in the transport).
  Also, oslo.messaging allows versioning in the message. Major version
 change indicates API contract changes.  Minor version indicates backend
 changes but with API compatibility.


 My main concern about this patch is not related with messaging
 infrastructure. I believe you know better than me how it should look like.
 I'm mostly concerned with the way of making changes you chose. From my
 perspective, it's much better to make atomic changes where every changes
 doesn't affect too much in existing architecture. So the first step could
 be to change pika to oslo.messaging with minimal structural changes without
 introducing versioning (could be just TODO comment saying that the
 framework allows it and we may want to use it in the future, to be decide),
 without getting rid of the current engine structure (local

Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-28 Thread W Chan
 Date: Tuesday, February 25, 2014 at 11:30 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to
 oslo.messaging

   Looks good. Thanks, Winson!

  Renat, What do you think?


 On Wed, Feb 26, 2014 at 10:00 AM, W Chan m4d.co...@gmail.com wrote:

 The following link is the google doc of the proposed engine/executor
 message flow architecture.
 https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing

  The diagram on the right is the scalable engine where one or more
 engine sends requests over a transport to one or more executors.  The
 executor client, transport, and executor server follows the RPC
 client/server design pattern
 (https://github.com/openstack/oslo.messaging/tree/master/oslo/messaging/rpc)
 in oslo.messaging.

  The diagram represents the local engine.  In reality, it's following
 the same RPC client/server design pattern.  The only difference is that
 it'll be configured to use a fake RPC backend driver
 (https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_drivers/impl_fake.py).
 The fake driver uses in process
 queues (http://docs.python.org/2/library/queue.html#module-Queue) shared
 between a pair of engine and executor.

  The following are the stepwise changes I will make.
 1) Keep the local and scalable engine structure intact.  Create the
 Executor Client at ./mistral/engine/scalable/executor/client.py.  Create
 the Executor Server at ./mistral/engine/scalable/executor/service.py and
 implement the task operations under
 ./mistral/engine/scalable/executor/executor.py.  Delete
 ./mistral/engine/scalable/executor/executor.py.  Modify the launcher
 ./mistral/cmd/task_executor.py.  Modify ./mistral/engine/scalable/engine.py
 to use the Executor Client instead of sending the message directly to
 rabbit via pika.  The sum of this is the atomic change that keeps existing
 structure and without breaking the code.
 2) Remove the local engine.
 https://blueprints.launchpad.net/mistral/+spec/mistral-inproc-executor
 3) Implement versioning for the engine.
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-versioning
 4) Port abstract engine to use oslo.messaging and implement the engine
 client, engine server, and modify the API layer to consume the engine
 client.
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
 .

  Winson


  On Mon, Feb 24, 2014 at 8:07 PM, Renat Akhmerov 
 rakhme...@mirantis.com wrote:


   On 25 Feb 2014, at 02:21, W Chan m4d.co...@gmail.com wrote:

  Renat,

  Regarding your comments on change
 https://review.openstack.org/#/c/75609/, I don't think the port to
 oslo.messaging is just a swap from pika to oslo.messaging.  OpenStack
 services as I understand is usually implemented as an RPC client/server
 over a messaging transport.  Sync vs async calls are done via the RPC
 client call and cast respectively.  The messaging transport is abstracted
 and concrete implementation is done via drivers/plugins.  So the
 architecture of the executor if ported to oslo.messaging needs to include a
 client, a server, and a transport.  The consumer (in this case the mistral
 engine) instantiates an instance of the client for the executor, makes the
 method call to handle task, the client then sends the request over the
 transport to the server.  The server picks up the request from the exchange
 and processes the request.  If cast (async), the client side returns
 immediately.  If call (sync), the client side waits for a response from the
 server over a reply_q (a unique queue for the session in the transport).
  Also, oslo.messaging allows versioning in the message. Major version
 change indicates API contract changes.  Minor version indicates backend
 changes but with API compatibility.


  My main concern about this patch is not related with messaging
 infrastructure. I believe you know better than me how it should look like.
 I'm mostly concerned with the way of making changes you chose. From my
 perspective, it's much better to make atomic changes where every changes
 doesn't affect too much in existing architecture. So the first step could
 be to change pika to oslo.messaging with minimal structural changes without
 introducing versioning (could be just TODO comment saying that the
 framework allows it and we may want to use it in the future, to be decide),
 without getting rid of the current engine structure (local, scalable). Some
 of the things in the file structure and architecture came from the
 decisions made by many people and we need to be careful about changing them.


  So, where I'm headed with this change...  I'm implementing the basic
 structure/scaffolding for the new executor service using oslo.messaging
 (default transport with rabbit).  Since the whole change will take a few
 rounds, I don't want to disrupt any changes

Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-11 Thread W Chan
I want to propose the following changes to implement the local executor and
removal of the local engine.  As mentioned before, oslo.messaging includes
a fake driver that uses a simple queue.  An example of the use of this
fake driver is demonstrated in test_executor.  The use of the fake driver
requires that both the consumer and publisher of the queue are running in
the same process so that the queue is in scope.  Currently, the launchers for
the api/engine and the executor start them as separate processes.

Here're the proposed changes.
1) Rewrite the launch script to be more generic, with options to launch all
components (i.e. API, engine, executor) in the same process over separate
threads or to launch each individually.
2) Move the transport to a global variable, similar to the global _engine,
shared by the different components.
3) Modify the engine and the executor to use a factory method to get the
global transport.

This doesn't change how the workflows are being processed.  It just changes
how the services are launched.  A sketch of items 2 and 3 follows.
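
Such a factory-method module could look roughly like this (hypothetical;
module and names are illustrative only):

    from oslo.config import cfg
    from oslo import messaging

    _TRANSPORT = None


    def get_transport():
        # Lazily create one transport and share it, similar to the global _engine.
        # When the API, engine, and executor are launched in the same process with
        # the fake driver, they then share the same in-process queues.
        global _TRANSPORT
        if _TRANSPORT is None:
            _TRANSPORT = messaging.get_transport(cfg.CONF)
        return _TRANSPORT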

Thoughts?
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-12 Thread W Chan
   - I can write a method in base test to start local executor.  I will do
   that as a separate bp.
   - After the engine is made standalone, the API will communicate to the
   engine and the engine to the executor via the oslo.messaging transport.
This means that for the local option, we need to start all three
   components (API, engine, and executor) on the same process.  If the long
   term goal as you stated above is to use separate launchers for these
   components, this means that the API launcher needs to duplicate all the
   logic to launch the engine and the executor. Hence, my proposal here is to
   move the logic to launch the components into a common module and either
    have a single generic launch script that launches specific components based
    on the CLI options or have separate launch scripts that reference the
   appropriate launch function from the common module.
   - The RPC client/server in oslo.messaging do not determine the
   transport.  The transport is determined via oslo.config and then given
   explicitly to the RPC client/server.
   https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31 and
   https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63 are
   examples for the client and server respectively.  The in-process Queue
   is instantiated within this transport object from the fake driver.  For the
   local option, all three components need to share the same transport in
   order to have the Queue in scope. Thus, we will need some method to have
   this transport object visible to all three components and hence my proposal
   to use a global variable and a factory method.



On Tue, Mar 11, 2014 at 10:34 PM, Renat Akhmerov rakhme...@mirantis.com wrote:


 On 12 Mar 2014, at 06:37, W Chan m4d.co...@gmail.com wrote:

 Here're the proposed changes.
 1) Rewrite the launch script to be more generic which contains option to
 launch all components (i.e. API, engine, executor) on the same process but
 over separate threads or launch each individually.


 You mentioned test_executor.py so I think it would make sense first to
 refactor the code in there related with acquiring transport and launching
 executor. My suggestions are:

- In test base class (mistral.tests.base.BaseTest) create the new
method *start_local_executor()* that would deal with getting a fake
driver inside and all that stuff. This would be enough for tests where we
need to run engine and check something. start_local_executor() can be just
a part of setUp() method for such tests.
- As for the launch script I have the following thoughts:
   - Long-term launch scripts should be different for all API, engine
   and executor. Now API and engine start within the same process but it's
   just a temporary solution.
   - Launch script for engine (which is the same as API's for now)
   should have an option *--use-local-executor* to be able to run an
   executor along with engine itself within the same process.


 2) Move transport to a global variables, similar to global _engine and
 then shared by the different component.


 Not sure why we need it. Can you please explain more detailed here? The
 better way would be to initialize engine and executor with transport when
 we create them. If our current structure doesn't allow this easily we
 should discuss it and change it.

 In mistral.engine.engine.py we now have:

  def load_engine():
      global _engine
      module_name = cfg.CONF.engine.engine
      module = importutils.import_module(module_name)
      _engine = module.get_engine()

 As an option we could have the code that loads engine in engine launch
 script (once we decouple it from API process) so that when we call
 get_engine() we could pass in all needed configuration parameters like
 transport.
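
 A minimal sketch of that option (hypothetical signature; today get_engine()
 takes no arguments):

  def load_engine(transport):
      global _engine
      module_name = cfg.CONF.engine.engine
      module = importutils.import_module(module_name)
      _engine = module.get_engine(transport=transport)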

 3) Modified the engine and the executor to use a factory method to get the
 global transport


 If we made a decision on #2 we won't need it.


 A side note: when we discuss things like that I really miss DI container :)

 Renat Akhmerov
 @ Mirantis Inc.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-13 Thread W Chan
On the transport variable, the problem I see isn't with passing the
variable to the engine and executor.  It's passing the transport into the
API layer.  The API layer is a pecan app and I currently don't see a way
where the transport variable can be passed to it directly.  I'm looking at
https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and
https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.
 Do you have any suggestion?  Thanks.


On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov rakhme...@mirantis.com wrote:


 On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:


- I can write a method in base test to start local executor.  I will
do that as a separate bp.

 Ok.


- After the engine is made standalone, the API will communicate to the
engine and the engine to the executor via the oslo.messaging transport.
 This means that for the local option, we need to start all three
components (API, engine, and executor) on the same process.  If the long
term goal as you stated above is to use separate launchers for these
components, this means that the API launcher needs to duplicate all the
logic to launch the engine and the executor. Hence, my proposal here is to
move the logic to launch the components into a common module and either
have a single generic launch script that launch specific components based
on the CLI options or have separate launch scripts that reference the
appropriate launch function from the common module.

 Ok, I see your point. Then I would suggest we have one script which we
 could use to run all the components (any subset of of them). So for those
 components we specified when launching the script we use this local
 transport. Btw, scheduler eventually should become a standalone component
 too, so we have 4 components.


- The RPC client/server in oslo.messaging do not determine the
transport.  The transport is determine via oslo.config and then given
explicitly to the RPC client/server.

 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31 and

 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63 are
  examples for the client and server respectively.  The in process Queue
is instantiated within this transport object from the fake driver.  For the
local option, all three components need to share the same transport in
order to have the Queue in scope. Thus, we will need some method to have
this transport object visible to all three components and hence my proposal
to use a global variable and a factory method.

 I'm still not sure I follow your point here.. Looking at the links you
 provided I see this:

 transport = messaging.get_transport(cfg.CONF)

 So my point here is we can make this call once in the launching script and
 pass it to engine/executor (and now API too if we want it to be launched by
 the same script). Of course, we'll have to change the way how we initialize
 these components, but I believe we can do it. So it's just a dependency
 injection. And in this case we wouldn't need to use a global variable. Am I
 still missing something?


 Renat Akhmerov
 @ Mirantis Inc.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-20 Thread W Chan
I submitted a rough draft for review @
https://review.openstack.org/#/c/81941/.  Instead of using the pecan hook,
I added a class property for the transport in the abstract engine class.
 On the pecan app setup, I passed the shared transport to the engine on
load.  Please provide feedback.  Thanks.


On Mon, Mar 17, 2014 at 9:37 AM, Ryan Petrello
ryan.petre...@dreamhost.com wrote:

 Changing the configuration object at runtime is not thread-safe.  If you
 want to share objects with controllers, I'd suggest checking out Pecan's
 hook functionality.


 http://pecan.readthedocs.org/en/latest/hooks.html#implementating-a-pecan-hook

 e.g.,

 class SpecialContextHook(object):

     def __init__(self, some_obj):
         self.some_obj = some_obj

     def before(self, state):
         # In any pecan controller, `pecan.request` is a thread-local
         # webob.Request instance, allowing you to access
         # `pecan.request.context['foo']` in your controllers.  In this
         # example, self.some_obj could be just about anything - a Python
         # primitive, or an instance of some class.
         state.request.context = {
             'foo': self.some_obj
         }

 ...

 wsgi_app = pecan.Pecan(
     my_package.controllers.root.RootController(),
     hooks=[SpecialContextHook(SomeObj(1, 2, 3))]
 )

 ---
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com

 On Mar 14, 2014, at 8:53 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

  Take a look at method get_pecan_config() in mistral/api/app.py. It's
 where you can pass any parameters into pecan app (see a dictionary
 'cfg_dict' initialization). They can be then accessed via pecan.conf as
 described here:
 http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
 If I understood the problem correctly this should be helpful.
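
 For example (a sketch only; the 'transport' key below is made up):

  import pecan

  def get_pecan_config(transport):
      cfg_dict = {
          'app': {'root': 'mistral.api.controllers.root.RootController'},
          'transport': transport,   # any extra object can be stashed here
      }
      return pecan.configuration.conf_from_dict(cfg_dict)

  # later, inside any controller:  pecan.conf.transport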
 
  Renat Akhmerov
  @ Mirantis Inc.
 
 
 
  On 14 Mar 2014, at 05:14, Dmitri Zimine d...@stackstorm.com wrote:
 
  We have access to all configuration parameters in the context of
 api.py. May be you don't pass it but just instantiate it where you need it?
 Or I may misunderstand what you're trying to do...
 
  DZ
 
  PS: can you generate and update mistral.config.example to include new
 oslo messaging options? I forgot to mention it on review on time.
 
 
  On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
 
  On the transport variable, the problem I see isn't with passing the
 variable to the engine and executor.  It's passing the transport into the
 API layer.  The API layer is a pecan app and I currently don't see a way
 where the transport variable can be passed to it directly.  I'm looking at
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and
 https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.
  Do you have any suggestion?  Thanks.
 
 
  On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov 
 rakhme...@mirantis.com wrote:
 
  On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
 * I can write a method in base test to start local executor.  I
 will do that as a separate bp.
  Ok.
 
 * After the engine is made standalone, the API will communicate to
 the engine and the engine to the executor via the oslo.messaging transport.
  This means that for the local option, we need to start all three
 components (API, engine, and executor) on the same process.  If the long
 term goal as you stated above is to use separate launchers for these
 components, this means that the API launcher needs to duplicate all the
 logic to launch the engine and the executor. Hence, my proposal here is to
 move the logic to launch the components into a common module and either
 have a single generic launch script that launch specific components based
 on the CLI options or have separate launch scripts that reference the
 appropriate launch function from the common module.
  Ok, I see your point. Then I would suggest we have one script which we
 could use to run all the components (any subset of of them). So for those
 components we specified when launching the script we use this local
 transport. Btw, scheduler eventually should become a standalone component
 too, so we have 4 components.
 
 * The RPC client/server in oslo.messaging do not determine the
 transport.  The transport is determine via oslo.config and then given
 explicitly to the RPC client/server.
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31 and
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63 are
  examples for the client and server respectively.  The in process Queue
 is instantiated within this transport object from the fake driver.  For the
 local option, all three components need to share the same transport in
 order to have the Queue in scope. Thus, we will need some method to have
 this transport object visible to all three components and hence my proposal
 to use a global variable and a factory method

Re: [openstack-dev] [openstack][Mistral] Adding new core reviewers

2014-03-20 Thread W Chan
Congratulation to the new core reviewers!


On Wed, Mar 19, 2014 at 10:54 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Thanks, guys!

 Done.

 On 20 Mar 2014, at 02:28, Timur Nurlygayanov tnurlygaya...@mirantis.com
 wrote:

  Also, in the future, we can join Kirill Izotov to the core team too.

 Absolutely, once in a while we'll be reviewing everyone's progress and
 update the core team.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][TaskFlow] Long running actions

2014-03-20 Thread W Chan
Can the long running task be handled by putting the target task in the
workflow in a persisted state until either an event triggers it or timeout
occurs?  An event (human approval or trigger from an external system) sent
to the transport will rejuvenate the task.  The timeout is configurable by
the end user up to a certain time limit set by the mistral admin.

Based on the TaskFlow examples, it seems like the engine instance managing
the workflow will be in memory until the flow is completed.  Unless there's
other options to schedule tasks in TaskFlow, if we have too many of these
workflows with long running tasks, seems like it'll become a memory issue
for mistral...


On Thu, Mar 20, 2014 at 3:07 PM, Dmitri Zimine d...@stackstorm.com wrote:


 For the 'asynchronous manner' discussion see http://tinyurl.com/n3v9lt8;
 I'm still not sure why u would want to make is_sync/is_async a primitive
 concept in a workflow system, shouldn't this be only up to the entity
 running the workflow to decide? Why is a task allowed to be sync/async,
 that has major side-effects for state-persistence, resumption (and to me is
 a incorrect abstraction to provide) and general workflow execution control,
 I'd be very careful with this (which is why I am hesitant to add it without
 much much more discussion).


 Let's remove the confusion caused by async. All tasks [may] run async
 from the engine standpoint, agreed.

 Long running tasks - that's it.

 Examples: wait_5_days, run_hadoop_job, take_human_input.
 The Task doesn't do the job: it delegates to an external system. The flow
 execution needs to wait (5 days passed, hadoop job finished with data x,
 user inputs y), and then continue with the received results.

 The requirement is to survive a restart of any WF component without
 losing the state of the long running operation.

 Does TaskFlow already have a way to do it? Or ongoing ideas,
 considerations? If yes let's review. Else let's brainstorm together.

 I agree,

 that has major side-effects for state-persistence, resumption (and to me
 is a incorrect abstraction to provide) and general workflow execution
 control, I'd be very careful with this

 But these requirements come from customers' use cases: wait_5_days -
 lifecycle management workflow; long running external system - Murano
 requirements; user input - workflow for operations automation with control
 gate checks, provisions which require 'approval' steps, etc.

 DZ


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-24 Thread W Chan
I have the following murano-ci failure for my last patch set.
https://murano-ci.mirantis.com/job/mistral_master_on_commit/194/  Since I
modified the API launch script in mistral, is that the cause of this
failure here?  Do I have to make changes to the tempest test?  Please
advise.  Thanks.


On Fri, Mar 21, 2014 at 3:20 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Alright, thanks Winson!

 Team, please review.

 Renat Akhmerov
 @ Mirantis Inc.



 On 21 Mar 2014, at 06:43, W Chan m4d.co...@gmail.com wrote:

 I submitted a rough draft for review @
 https://review.openstack.org/#/c/81941/.  Instead of using the pecan
 hook, I added a class property for the transport in the abstract engine
 class.  On the pecan app setup, I passed the shared transport to the engine
 on load.  Please provide feedback.  Thanks.


 On Mon, Mar 17, 2014 at 9:37 AM, Ryan Petrello 
 ryan.petre...@dreamhost.com wrote:

 Changing the configuration object at runtime is not thread-safe.  If you
 want to share objects with controllers, I'd suggest checking out Pecan's
 hook functionality.


 http://pecan.readthedocs.org/en/latest/hooks.html#implementating-a-pecan-hook

 e.g.,

 class SpecialContextHook(object):

     def __init__(self, some_obj):
         self.some_obj = some_obj

     def before(self, state):
         # In any pecan controller, `pecan.request` is a thread-local
         # webob.Request instance, allowing you to access
         # `pecan.request.context['foo']` in your controllers.  In this
         # example, self.some_obj could be just about anything - a Python
         # primitive, or an instance of some class.
         state.request.context = {
             'foo': self.some_obj
         }

 ...

 wsgi_app = pecan.Pecan(
     my_package.controllers.root.RootController(),
     hooks=[SpecialContextHook(SomeObj(1, 2, 3))]
 )

 ---
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com

 On Mar 14, 2014, at 8:53 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

  Take a look at method get_pecan_config() in mistral/api/app.py. It's
 where you can pass any parameters into pecan app (see a dictionary
 'cfg_dict' initialization). They can be then accessed via pecan.conf as
 described here:
 http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
 If I understood the problem correctly this should be helpful.
 
  Renat Akhmerov
  @ Mirantis Inc.
 
 
 
  On 14 Mar 2014, at 05:14, Dmitri Zimine d...@stackstorm.com wrote:
 
  We have access to all configuration parameters in the context of
 api.py. May be you don't pass it but just instantiate it where you need it?
 Or I may misunderstand what you're trying to do...
 
  DZ
 
  PS: can you generate and update mistral.config.example to include new
 oslo messaging options? I forgot to mention it on review on time.
 
 
  On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
 
  On the transport variable, the problem I see isn't with passing the
 variable to the engine and executor.  It's passing the transport into the
 API layer.  The API layer is a pecan app and I currently don't see a way
 where the transport variable can be passed to it directly.  I'm looking at
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and
 https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.
  Do you have any suggestion?  Thanks.
 
 
  On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov 
 rakhme...@mirantis.com wrote:
 
  On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
 * I can write a method in base test to start local executor.  I
 will do that as a separate bp.
  Ok.
 
 * After the engine is made standalone, the API will communicate
 to the engine and the engine to the executor via the oslo.messaging
 transport.  This means that for the local option, we need to start all
 three components (API, engine, and executor) on the same process.  If the
 long term goal as you stated above is to use separate launchers for these
 components, this means that the API launcher needs to duplicate all the
 logic to launch the engine and the executor. Hence, my proposal here is to
 move the logic to launch the components into a common module and either
 have a single generic launch script that launch specific components based
 on the CLI options or have separate launch scripts that reference the
 appropriate launch function from the common module.
  Ok, I see your point. Then I would suggest we have one script which
 we could use to run all the components (any subset of of them). So for
 those components we specified when launching the script we use this local
 transport. Btw, scheduler eventually should become a standalone component
 too, so we have 4 components.
 
 * The RPC client/server in oslo.messaging do not determine the
 transport.  The transport is determine via oslo.config and then given
 explicitly to the RPC client/server.
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable

[openstack-dev] [Mistral] task update at end of handle_task in executor

2014-03-26 Thread W Chan
Regarding
https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/executor/server.py#L123,
should the status be set to SUCCESS instead of RUNNING?  If not, can
someone clarify why the task should remain RUNNING?

Thanks.
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] task update at end of handle_task in executor

2014-03-26 Thread W Chan
In addition, for sync tasks, it'll overwrite the task state from SUCCESS to
RUNNING.


On Wed, Mar 26, 2014 at 8:41 PM, Dmitri Zimine d...@stackstorm.com wrote:

 My understanding is: it's the engine which finalizes the task results,
 based on the status returned by the task via convey_task_result call.


 https://github.com/stackforge/mistral/blob/master/mistral/engine/abstract_engine.py#L82-L84

 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/executor/server.py#L44-L66

 In case of async tasks, the executor keeps the task status at RUNNING, and a
 3rd party system will call convey_task_result on the engine.


 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/executor/server.py#L123
 ,


 This line however looks like a bug to me: at best it doesn't do much and
 at worst it overwrites the ERROR previously set here
 http://tinyurl.com/q5lps2h

 Nikolay, any better explanation?


 DZ

 On Mar 26, 2014, at 6:20 PM, W Chan m4d.co...@gmail.com wrote:

 Regarding
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/executor/server.py#L123,
 should the status be set to SUCCESS instead of RUNNING?  If not, can
 someone clarify why the task should remain RUNNING?

 Thanks.
 Winson

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Refine engine - executor protocol

2014-06-12 Thread W Chan
Design proposal for blueprint
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-executor-protocol


   - Rename Executor to Worker.
   - Continue to use RPC server-client model in oslo.messaging for Engine
   and Worker protocol.
   - Use asynchronous call (cast) between Worker and Engine where
   appropriate.
   - Remove any DB access from Worker.  DB IO will only be done by Engine.
   - Worker updates Engine that it's going to start running action now.  If
   execution is not RUNNING and task is not IDLE, Engine tells Worker to halt
   at this point.  Worker cannot assume execution state is RUNNING and task
   state is IDLE because the handle_task message could have been sitting in
   the message queue for awhile.  This call between Worker and Engine is
   synchronous, meaning Worker will wait for a response from the
Engine.  Currently,
   Executor checks state and updates task state directly to the DB before
   running the action.
   - Worker communicates result (success or failure) to Engine.  Currently,
   Executor is inconsistent: it calls Engine.convey_task_result on success and
   writes directly to the DB on failure.

Sequence


   1. Engine -> Worker.handle_task
   2. Worker converts the action spec to an Action instance
   3. Worker -> Engine.confirm_task_execution.  Engine returns an exception
   if the execution state is not RUNNING or the task state is not IDLE.
   4. Worker runs the action
   5. Worker -> Engine.convey_task_result
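
A hedged sketch of the worker side of this sequence (the RPC plumbing and the
helper for step 2 are hypothetical; only the three protocol methods come from
the proposal above):

    class Worker(object):
        def __init__(self, engine_client):
            self._engine = engine_client      # RPC client pointing at the Engine

        def handle_task(self, ctxt, task):
            action = self._build_action(task['action_spec'])          # step 2
            try:
                # Step 3: synchronous call; the Engine raises if the execution is
                # not RUNNING or the task is not IDLE (the message may have sat
                # in the queue for a while).
                self._engine.confirm_task_execution(ctxt, task['id'])
            except Exception:
                return                                                 # halt
            try:
                result = action.run()                                  # step 4
                self._engine.convey_task_result(ctxt, task['id'],      # step 5
                                                state='SUCCESS', result=result)
            except Exception as e:
                self._engine.convey_task_result(ctxt, task['id'],
                                                state='ERROR', result=str(e))

        def _build_action(self, action_spec):
            # Hypothetical helper converting the action spec into an Action object.
            raise NotImplementedError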

Please provide feedback.

Thanks.
Winson




On Fri, Jun 6, 2014 at 9:12 AM, W Chan m4d.co...@gmail.com wrote:

 Renat,

 Regarding blueprint
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-executor-protocol,
 can you clarify what it means by worker parallelism and engine-executor 
 parallelism?
  Currently, the engine and executor are launched with the eventlet driver
 in oslo.messaging.  Once a message arrives over transport, a new green
 thread is spawned and passed to the dispatcher.  In the case of executor,
 the function being dispatched to is handle_task.  I'm unclear what
 additional parallelism this blueprint is referring to.  The context isn't
 clear from the summit notes.

 Thanks.
 Winson

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Cleaning up configuration settings

2014-06-17 Thread W Chan
I figured.  I implemented it in https://review.openstack.org/#/c/97684/.


On Mon, Jun 16, 2014 at 9:35 PM, Renat Akhmerov rakhme...@mirantis.com
wrote:

 I don’t think we have them. You can write them I think as a part of what
 you’re doing.

 Renat Akhmerov
 @ Mirantis Inc.



 On 31 May 2014, at 04:26, W Chan m4d.co...@gmail.com wrote:

 Is there an existing unit test for testing enabling keystone middleware in
 pecan (setting cfg.CONF.pecan.auth_enable = True)?  I don't seem to find
 one.  If there's one, it's not obvious.  Can someone kindly point me to it?


 On Wed, May 28, 2014 at 9:53 AM, W Chan m4d.co...@gmail.com wrote:

 Thanks for following up.  I will publish this change as a separate patch
 from my current config cleanup.


 On Wed, May 28, 2014 at 2:38 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:


 On 28 May 2014, at 13:51, Angus Salkeld angus.salk...@rackspace.com
 wrote:

  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  On 17/05/14 02:48, W Chan wrote:
  Regarding config opts for keystone, the keystoneclient middleware
 already
  registers the opts at
 
 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
  under a keystone_authtoken group in the config file.  Currently,
 Mistral
  registers the opts again at
 
 https://github.com/stackforge/mistral/blob/master/mistral/config.py#L108
 under a
  different configuration group.  Should we remove the duplicate from
 Mistral and
  refactor the reference to keystone configurations to the
 keystone_authtoken
  group?  This seems more consistent.
 
  I think that is the only thing that makes sense. Seems like a bug
  waiting to happen having the same options registered twice.
 
  If some user used to other projects comes and configures
  keystone_authtoken then will their config take effect?
  (how much confusion will that generate)..
 
  I'd suggest just using the one that is registered keystoneclient.

 Ok, I had a feeling it was needed for some reason. But after having
 another look at this I think this is really a bug. Let’s do it.

 Thanks guys
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Cleaning up configuration settings

2014-05-15 Thread W Chan
Currently, the various configurations are registered in
./mistral/config.py.  The configurations are registered when mistral.config
is imported.  Given the way the code is written, flake8 throws an "imported
but unused" error if mistral.config is imported but not otherwise referenced
in the module.  In various places, this is avoided by using importutils to
import mistral.config (i.e.
https://github.com/stackforge/mistral/blob/master/mistral/tests/unit/engine/test_transport.py#L34).
 I want to break down the registration code in ./mistral/config.py into
separate functions for api, engine, db, etc. and move the registration
closer to the module where the configuration is needed.  Any objections?
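
To illustrate, the per-module registration could look roughly like this
(option names and defaults below are examples, not the actual Mistral
option set):

# Sketch only: illustrates explicit registration instead of import side effects.
from oslo.config import cfg

api_opts = [
    cfg.StrOpt('host', default='0.0.0.0', help='Mistral API host.'),
    cfg.IntOpt('port', default=8989, help='Mistral API port.'),
]


def register_api_opts(conf=cfg.CONF):
    # Called explicitly by the API launcher, so nothing depends on merely
    # importing mistral.config for its side effects.
    conf.register_opts(api_opts, group='api')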
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Cleaning up configuration settings

2014-05-16 Thread W Chan
Regarding config opts for keystone, the keystoneclient middleware already
registers the opts at
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
under a keystone_authtoken group in the config file.  Currently, Mistral
registers the opts again at
https://github.com/stackforge/mistral/blob/master/mistral/config.py#L108
under a different configuration group.  Should we remove the duplicate from
Mistral and refactor the reference to keystone configurations to the
keystone_authtoken group?  This seems more consistent.
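
For example, something roughly along these lines (a sketch of the direction,
not the actual Mistral wiring):

# Sketch only: reuse the opts auth_token registers under [keystone_authtoken]
# instead of re-registering a separate keystone group in mistral/config.py.
from keystoneclient.middleware import auth_token


def setup_auth(app):
    # AuthProtocol reads its settings (auth_host, admin_user, etc.) from the
    # [keystone_authtoken] section of the global config when they are not
    # passed in the local conf dict, so Mistral would not register them again.
    return auth_token.AuthProtocol(app, {})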


On Thu, May 15, 2014 at 1:13 PM, W Chan m4d.co...@gmail.com wrote:

 Currently, the various configurations are registered in
 ./mistral/config.py.  The configurations are registered when mistral.config
 is referenced.  Given the way the code is written, PEP8 throws referenced
 but not used error if mistral.config is referenced but not called in the
 module.  In various use cases, this is avoided by using importutils to
 import mistral.config (i.e.
 https://github.com/stackforge/mistral/blob/master/mistral/tests/unit/engine/test_transport.py#L34).
  I want to break down registration code in ./mistral/config.py into
 separate functions for api, engine, db, etc and move the registration
 closer to the module where the configuration is needed.  Any objections?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Cleaning up configuration settings

2014-05-28 Thread W Chan
Thanks for following up.  I will publish this change as a separate patch
from my current config cleanup.


On Wed, May 28, 2014 at 2:38 AM, Renat Akhmerov rakhme...@mirantis.comwrote:


 On 28 May 2014, at 13:51, Angus Salkeld angus.salk...@rackspace.com
 wrote:

  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  On 17/05/14 02:48, W Chan wrote:
  Regarding config opts for keystone, the keystoneclient middleware
 already
  registers the opts at
 
 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
  under a keystone_authtoken group in the config file.  Currently, Mistral
  registers the opts again at
 
 https://github.com/stackforge/mistral/blob/master/mistral/config.py#L108
 under a
  different configuration group.  Should we remove the duplicate from
 Mistral and
  refactor the reference to keystone configurations to the
 keystone_authtoken
  group?  This seems more consistent.
 
  I think that is the only thing that makes sense. Seems like a bug
  waiting to happen having the same options registered twice.
 
  If some user used to other projects comes and configures
  keystone_authtoken then will their config take effect?
  (how much confusion will that generate)..
 
  I'd suggest just using the one that is registered keystoneclient.

 Ok, I had a feeling it was needed for some reason. But after having
 another look at this I think this is really a bug. Let’s do it.

 Thanks guys
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Cleaning up configuration settings

2014-05-30 Thread W Chan
Is there an existing unit test for testing enabling keystone middleware in
pecan (setting cfg.CONF.pecan.auth_enable = True)?  I don't seem to find
one.  If there's one, it's not obvious.  Can someone kindly point me to it?
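
If there isn't one, I imagine something roughly like this (just a sketch;
the FunctionalTest base class and the endpoint path are assumptions on my
part):

# Sketch only: base test class, module path and endpoint are assumed.
from oslo.config import cfg

from mistral.tests.unit.api import base  # assumed location of the API test base


class TestKeystoneMiddleware(base.FunctionalTest):

    def setUp(self):
        # Enable the auth_token middleware before the pecan app is built.
        cfg.CONF.set_override('auth_enable', True, group='pecan')
        self.addCleanup(cfg.CONF.clear_override, 'auth_enable', group='pecan')
        super(TestKeystoneMiddleware, self).setUp()

    def test_unauthenticated_request_is_rejected(self):
        resp = self.app.get('/v1/workbooks', expect_errors=True)
        self.assertEqual(401, resp.status_int)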


On Wed, May 28, 2014 at 9:53 AM, W Chan m4d.co...@gmail.com wrote:

 Thanks for following up.  I will publish this change as a separate patch
 from my current config cleanup.


 On Wed, May 28, 2014 at 2:38 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:


 On 28 May 2014, at 13:51, Angus Salkeld angus.salk...@rackspace.com
 wrote:

  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  On 17/05/14 02:48, W Chan wrote:
  Regarding config opts for keystone, the keystoneclient middleware
 already
  registers the opts at
 
 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
  under a keystone_authtoken group in the config file.  Currently,
 Mistral
  registers the opts again at
 
 https://github.com/stackforge/mistral/blob/master/mistral/config.py#L108
 under a
  different configuration group.  Should we remove the duplicate from
 Mistral and
  refactor the reference to keystone configurations to the
 keystone_authtoken
  group?  This seems more consistent.
 
  I think that is the only thing that makes sense. Seems like a bug
  waiting to happen having the same options registered twice.
 
  If some user used to other projects comes and configures
  keystone_authtoken then will their config take effect?
  (how much confusion will that generate)..
 
  I'd suggest just using the one that is registered keystoneclient.

 Ok, I had a feeling it was needed for some reason. But after having
 another look at this I think this is really a bug. Let’s do it.

 Thanks guys
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Refine engine - executor protocol

2014-06-06 Thread W Chan
Renat,

Regarding blueprint
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-executor-protocol,
can you clarify what it means by worker parallelism and
engine-executor parallelism?
 Currently, the engine and executor are launched with the eventlet driver
in oslo.messaging.  Once a message arrives over transport, a new green
thread is spawned and passed to the dispatcher.  In the case of executor,
the function being dispatched to is handle_task.  I'm unclear what
additional parallelism this blueprint is referring to.  The context isn't
clear from the summit notes.

Thanks.
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Event Subscription

2014-11-11 Thread W Chan
Regarding blueprint to register event listeners to notify client
applications on state changes (
https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-http),
I want to propose the following.

1. Refer to this feature as event subscription instead of callback
2. Event subscription supports only HTTP web hooks with retry policy in
this first attempt
3. Event subscription can be defined for a list of specific events,
workflows, projects, domains, or any combinations (all if list is empty).
4. Decorator to publish event (similar to logging) and place decorators at
Engine and TaskPolicy methods @
https://github.com/stackforge/mistral/blob/master/mistral/engine1/base.py.
5. Events should be published to a queue and then processed by a worker so
as not to disrupt actual workflow/task executions.
6. API controller to register event subscriber
a. New resource type named EventSubscriber
b. New REST controller named EventSubscribersController and CRUD
operations
7. DB v2 sqlalchemy model named EventSubscriber and appropriate CRUD methods
8. Operations in python-mistralclient to manage CRUD for subscribers.
9. Operation in python-mistralclient to re-publish events for a given
workflow execution (in case a client application was down and needs the data
to recover).
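
To make items 6-7 a bit more concrete, here is a rough sketch of the model
(plain SQLAlchemy for illustration; the real version would inherit from
Mistral's v2 model base and use its JSON column types):

# Sketch only: table and column names are illustrative.
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class EventSubscriber(Base):
    """A registered HTTP web hook interested in a subset of events."""

    __tablename__ = 'event_subscribers_v2'

    id = sa.Column(sa.String(36), primary_key=True)
    project_id = sa.Column(sa.String(80))
    webhook_url = sa.Column(sa.String(255), nullable=False)
    # JSON-encoded lists; an empty list means "subscribe to everything" (item 3).
    events = sa.Column(sa.Text())
    workflows = sa.Column(sa.Text())
    # Simple retry policy for web hook delivery (item 2).
    retry_count = sa.Column(sa.Integer, default=3)
    retry_delay = sa.Column(sa.Integer, default=5)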
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Event Subscription

2014-11-12 Thread W Chan
Nikolay,

You're right.  We will need to store the events in order to re-publish.
How about a separate Event model?  The events are written to the DB by the
same worker that publishes the event.  The retention policy for these
events is then managed by a config option.
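
For instance, the retention policy could be a simple option like this (the
name, group, and default are just placeholders):

# Sketch only: option name, group and default are placeholders.
from oslo.config import cfg

event_opts = [
    cfg.IntOpt('event_retention_days', default=30,
               help='Number of days to keep published events before purging.'),
]

cfg.CONF.register_opts(event_opts, group='events')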

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Event Subscription

2014-12-01 Thread W Chan
Renat,

Alternatively, what do you think if Mistral just posts the events to given
exchange(s) on the same transport backend and lets the subscribers decide
how to consume the events (e.g. post to a webhook) from these exchanges?
This will simplify the implementation somewhat.  The engine can just take
care of publishing the events to the exchanges and call it done.

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Event Subscription

2014-12-02 Thread W Chan
Renat,

I agree with the two methods you proposed.

On processing the events, I was thinking of a separate entity.  But you
gave me an idea: how about a system action for publishing the events that
the current executors can run?

Alternatively, instead of making HTTP calls, what do you think if Mistral
just posts the events to the exchange(s) that the subscribers provided and
lets the subscribers decide how to consume the events (e.g. post to a
webhook) from these exchanges?  This will simplify the implementation
somewhat.  The engine can just take care of publishing the events to the
exchanges and call it done.

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Event Subscription

2014-12-08 Thread W Chan
Renat,

On sending events to an exchange, I mean an exchange on the transport (i.e.
a RabbitMQ exchange, https://www.rabbitmq.com/tutorials/amqp-concepts.html).
For the implementation we can probably explore the notification feature in
oslo.messaging.  On second thought, though, this would limit the consumers
to trusted subsystems or services.  If we want the event consumers to be any
3rd party, including untrusted ones, then maybe we should keep it as HTTP
calls.
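
If we went the notification route, the publishing side could look roughly
like this (a sketch using oslo.messaging's Notifier; the topic, publisher_id,
and event type are made up):

# Sketch only: topic, publisher_id and event type are illustrative.
from oslo.config import cfg
from oslo import messaging

transport = messaging.get_transport(cfg.CONF)

notifier = messaging.Notifier(transport,
                              publisher_id='mistral.engine',
                              driver='messaging',
                              topic='mistral_notifications')

# The engine publishes; any consumer bound to the exchange decides what to
# do with the event (web hook, logging, etc.).
notifier.info({}, 'mistral.workflow.success',
              {'execution_id': 'abc123', 'state': 'SUCCESS'})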

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Action context passed to all action executions by default

2014-12-08 Thread W Chan
Renat,

Is there any reason why Mistral does not pass action context such as the
workflow ID, execution ID, task ID, etc. to all of the action executions?  I
think it makes a lot of sense for that information to be made available by
default.  The action can then decide what to do with the information.  It
doesn't require a special signature in the __init__ method of the Action
classes.  What do you think?

Thanks.
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Global Context and Execution Environment

2014-12-09 Thread W Chan
Nikolay,

Regarding whether the execution environment BP is the same as this global
context BP, I think the difference is in the scope of the variables.  The
global context that I'm proposing is provided to the workflow at execution
time and is only relevant to that execution.  For example, some contextual
information about this specific workflow execution (i.e. a reference to a
related record in an external system, such as a service ticket ID or CMDB
record ID).  The values do not necessarily carry across multiple executions.
But as I understand it, the execution environment configuration is a set of
reusable configuration that can be shared across multiple workflow
executions.  The fact that action parameters are specified explicitly over
and over again is a common problem in the DSL.

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Action context passed to all action executions by default

2014-12-11 Thread W Chan
Renat,

Here's the blueprint.
https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context

I'm proposing to add *args and **kwargs to the __init__ methods of all
actions.  The action context can be passed as a dict in the kwargs.  The
global context and the env context can be provided here as well.  Maybe put
all these different contexts under a kwarg called context?

For example,

ctx = {
    'env': {...},
    'global': {...},
    'runtime': {
        'execution_id': ...,
        'task_id': ...,
        ...
    }
}

action = SomeMistralAction(context=ctx)
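
On the action side, a minimal sketch of what accepting this could look like
(the base class usage and run() body are illustrative):

# Sketch only: shows an action pulling the proposed context out of kwargs.
from mistral.actions import base


class SomeMistralAction(base.Action):

    def __init__(self, *args, **kwargs):
        # Context is optional, so existing actions keep working unchanged.
        self.context = kwargs.pop('context', {})
        super(SomeMistralAction, self).__init__()

    def run(self):
        runtime = self.context.get('runtime', {})
        return {'execution_id': runtime.get('execution_id')}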

WDYT?

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Global Context and Execution Environment

2014-12-12 Thread W Chan
Renat, Dmitri,

On supplying the global context into the workflow execution...

In addition to Renat's proposal, I have a few here.

1) Pass them implicitly in start_workflow as another kwarg in the
**params.  But on second thought, we should probably make the global context
explicitly defined in the WF spec.  Passing them implicitly may make it hard
to follow, when troubleshooting, where a value comes from by looking at the
WF spec.  Plus there will be cases where WF authors want it explicitly
defined.  Still debating here...

inputs = {...}
globals = {...}
start_workflow('my_workflow', inputs, globals=globals)

2) Instead of adding to the WF spec, what if we change the scope of the
existing input params?  For example, inputs defined in the top workflow are
by default visible to all subflows (passed down to the workflow task on
run_workflow) and tasks (passed to the action on execution).

3) Add to the WF spec

workflow:
  type: direct
  global:
    - global1
    - global2
  input:
    - input1
    - input2

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] ActionProvider

2014-12-16 Thread W Chan
Renat,

We want to introduce the concept of an ActionProvider to Mistral.  We are
thinking that with an ActionProvider, a third party system can extend
Mistral with its own action catalog and set of dedicated and specialized
action executors.  The ActionProvider will return its own list of actions
via an abstract interface.  This minimizes the complexity and latency in
managing and syncing the Action table.  In the DSL, we can define
provider-specific context/configuration separately and apply it to all
provider-specific actions without explicitly passing it as inputs.  WDYT?
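
Roughly, the abstract interface could look like this (a sketch; the method
names are just a starting point for discussion):

# Sketch only: interface and method names are illustrative.
import abc

import six


@six.add_metaclass(abc.ABCMeta)
class ActionProvider(object):
    """Lets a third-party system expose its own action catalog to Mistral."""

    @abc.abstractmethod
    def list_actions(self):
        """Return descriptors for the actions this provider exposes."""

    @abc.abstractmethod
    def get_executor(self, action_name):
        """Return the dedicated/specialized executor for the given action."""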

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-23 Thread W Chan
After some online discussions with Renat, the following is a revision of
the proposal to address the following related blueprints.
*
https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment
* https://blueprints.launchpad.net/mistral/+spec/mistral-global-context
*
https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values
* https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context

Please refer to the following threads for backgrounds.
*
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052643.html
*
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052960.html
*
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052824.html


*Workflow Context Scope*
1. The context of a workflow is passed to all its subflows and
subtasks/actions (aka children) only explicitly via inputs
2. The context is passed by value (copy.deepcopy) to children
3. A change to the context is passed to the parent only when it's explicitly
published at the end of the child execution
4. A change to the context at the parent (after a publish from a child) is
passed to subsequent children

*Environment Variables*
Solves the problem of quickly passing pre-defined inputs to a WF
execution.  In the WF spec, environment variables are referenced as
$.env.var1, $.env.var2, etc.  We should implement an API and DB model where
users can pre-define different environments with their own set of
variables.  An environment can be passed either by name from the DB or ad
hoc as a dict in start_workflow.  On workflow execution, a copy of the
environment is saved with the execution object.  Action inputs are still
declared explicitly in the WF spec.  This does not solve the problem where
common inputs are specified over and over again.  So if there are multiple
SQL tasks in the WF, the WF author still needs to supply the conn_str
explicitly for each task.  In the example below, let's say we have a SQL
Query Action that takes a connection string and a query statement as
inputs.  The WF author can specify that the conn_str input is supplied from
$.env.conn_str.

*Example:*

# Assume this SqlAction is registered as std.sql in Mistral's Action table.
class SqlAction(object):
    def __init__(self, conn_str, query):
        ...

...

version: 2.0
workflows:
  demo:
    type: direct
    input:
      - query
    output:
      - records
    tasks:
      query:
        action: std.sql conn_str={$.env.conn_str} query={$.query}
        publish:
          records: $

...

my_adhoc_env = {
    'conn_str': 'mysql://admin:secrete@localhost/test'
}

...

# ad hoc, by dict
start_workflow(wf_name, wf_inputs, env=my_adhoc_env)

OR

# lookup by name from the DB model
start_workflow(wf_name, wf_inputs, env='my_lab_env')


*Define Default Action Inputs as Environment Variables*
Solves the problem where we're specifying the same inputs to subflows and
subtasks/actions over and over again.  On command execution, if action
inputs are not explicitly supplied, then defaults will be looked up from the
environment.

*Example:*
Using the same example from above, the WF author can still supply both the
conn_str and query inputs in the WF spec.  However, the author also has the
option to supply them as default action inputs.  An example environment
structure is below.  __actions should be reserved and immutable.  Users can
specify one or more default inputs for the sql action as a nested dict under
__actions.  Recursive YAQL evaluation should be supported in the env
variables.

version: 2.0
workflows:
  demo:
    type: direct
    input:
      - query
    output:
      - records
    tasks:
      query:
        action: std.sql query={$.query}
        publish:
          records: $

...

my_adhoc_env = {
    'sql_server': 'localhost',
    '__actions': {
        'std.sql': {
            'conn_str': 'mysql://admin:secrete@{$.env.sql_server}/test'
        }
    }
}


*Default Input Values Supplied Explicitly in WF Spec*
Please refer to this blueprint
https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values
for background.  This is a different use case.  To support it, we just need
to set the correct order of precedence in applying values.
1. Input explicitly given to the sub flow/task in the WF spec
2. Default input supplied from env
3. Default input supplied at WF spec
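
A tiny sketch of how that precedence could be applied when building the
inputs for an action (function and key names are illustrative):

# Sketch only: resolves action inputs using the precedence above.
def resolve_action_inputs(action_name, explicit_inputs, env, spec_defaults=None):
    resolved = dict(spec_defaults or {})          # 3. defaults from the WF spec
    env_defaults = (env or {}).get('__actions', {}).get(action_name, {})
    resolved.update(env_defaults)                 # 2. defaults from the env
    resolved.update(explicit_inputs or {})        # 1. explicit inputs win
    return resolved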

*Putting this together...*
At runtime, the WF context would be similar to the following example.  This
will be used to recursively evaluate the inputs for subflows/tasks/actions.

ctx = {
    'var1': ...,
    'var2': ...,
    'my_server_ip': '10.1.23.250',
    'env': {
        'sql_server': 'localhost',
        '__actions': {
            'std.sql': {
                'conn': 'mysql://admin:secrete@{$.env.sql_server}/test'
            },
            'my.action': {
                'endpoint': 'http://{$.my_server_ip}/v1/foo'
            }
        }
    }
}

*Runtime Context*

Re: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-24 Thread W Chan
Trying to clarify a few things...

* 2) Returning to the first example:
** ...
**  action: std.sql conn_str={$.env.conn_str} query={$.query}
** ...
** $.env - is it the name of an environment, or will it be registered
syntax for getting access to values from the env?
*

I was actually thinking the environment will use the reserved word
env in the WF context.  The value of the env key will be the dict
supplied either by DB lookup by name, directly as a dict, or as JSON
from the CLI.

The nested dict for __actions (and all other keys with a double
underscore) serves a special system purpose, in this case declaring
defaults for action inputs.  It is similar to __execution, which
contains runtime data for the WF execution.

* 3) Can a user have multiple environments?*

I don't think we intend to mix multiple environments in a WF
execution.  The key was to be able to supply any named environment at
WF execution time.  So the WF author only needs to know the variables
will be under $.env.  If we allow multiple environments in a WF
execution, each environment would need to be referred to by name
(i.e. env1 and env2 in your example).  We would then lose the ability
to swap in any named environment for different executions of the same
WF.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-26 Thread W Chan
 What you’re saying is that whatever is under “$.env” is just the exact
 same environment that we passed when we started the workflow? If yes then
 it definitely makes sense to me (it just allows explicit access to the
 environment, rather than through the implicit variable lookup). Please
 confirm.

Yes, the $.env that I originally proposed would be the same dict as the one
supplied at start_workflow.  Although we have to agree on whether the
variables in the environment are allowed to change after the WF has started.
Unless there's a valid use case, I would lean toward making env immutable.

 One thing that I strongly suggest is that we clearly define all reserved
keys like “env”, “__actions” etc. I think it’d be better if they all
started with the same prefix, for example, double underscore.

Agree. How about using double underscore for env as well (i.e.
$.__env.var1, $.__env.var2)?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread W Chan
As a user of Mistral pretty regularly these days, I certainly prefer
<% %>.  I agree with the other comments on devops familiarity.  And looking
at this from another angle, it's certainly easier to type <% %> than the
other options, especially if you have to do this over and over again.  LOL
Although, I am interested in the security concerns of this use in Jinja2.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Proposal for the Resume Feature

2015-03-26 Thread W Chan
We assume the WF is in a paused/errored state when 1) the user manually
pauses the WF, 2) pause is specified on a transition (on-condition(s) such
as on-error), or 3) a task errored.

The resume feature will support the following use cases.
1) User resumes the WF from a manual pause.
2) In the case of task failure, the user fixed the problem manually outside
of Mistral and wants to re-run the failed task.
3) In the case of task failure, the user fixed the problem manually outside
of Mistral and wants to resume from the next task.

Resuming from #1 should be straightforward.
Resuming from #2, the user may want to change the inbound context.
Resuming from #3, the user is required to manually provide the published
vars for the failed task(s).

In our offline discussion, there's ambiguity with the on-error clause and
whether a task failure has already been addressed by the WF itself.  In
many cases, the on-error tasks may just be logging, email notification,
and/or other non-recovery procedures.  It's hard to determine that
automatically, so we let users decide where to resume the WF instead.
Mistral will let the user resume a WF from a specific point.  The resume
function will determine the requirements needed to successfully resume.  If
the requirements are not met, then resume returns an error saying what
requirements are missing.  In the case where there are failures in multiple
parallel branches, the requirements may include more than one task.  For
cases where the user accidentally resumes from an earlier task that has
already successfully completed, the resume function should detect that and
throw an exception.

Also, the current change to separate task executions from action executions
should be sufficient for traceability.

We also want to expose an endpoint to let users view the context for a
task.  This is to give the user a reference to the current task context so
they can determine the delta they need to change for a successful resume.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Propose Winson Chan m4dcoder for core team

2015-04-08 Thread W Chan
Mistral Team and Friends,

Thank you for giving me the opportunity to become a core member of the
Mistral team.  I have an absolute blast developing and using Mistral.  I'm
happy with the current progress and direction that Mistral is heading in.  I
look forward to many more collaborations and contributions.

Winson :D
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Proposing Lingxian Kong as a core reviewer

2015-06-22 Thread W Chan
+1

Lingxian, keep up the good work. :D
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-06-16 Thread W Chan
Here's the etherpad link.  I replied to the comments/feedbacks there.
Please feel free to continue the conversation there.
https://etherpad.openstack.org/p/mistral-resume
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-06-15 Thread W Chan
I want to continue the discussion on the workflow resume feature.


Resuming from our last conversation @
http://lists.openstack.org/pipermail/openstack-dev/2015-March/060265.html.
I don't think we should limit how users resume. There may be different
possible scenarios. A user can fix the environment or condition that led to
the failure of the current task and just re-run the failed task.  Or the
user can fix the environment/condition, which includes fixing what the task
was doing, and then just continue with the next set of task(s).


The following is a list of proposed changes.



1. A new CLI operation to resume WF (i.e. mistral workflow-resume).
   A. If no additional info is provided, assume this WF is manually paused
      and there are no task/action execution errors. The WF state is updated
      to RUNNING. Update using the put method @ ExecutionsController. The
      put method checks that there are no task/action execution errors.
   B. If WF is in an error state
      i. To resume from the failed task, the workflow-resume command
         requires the WF execution ID, task name, and/or task input.
      ii. To resume from a failed with-items task
         a. Re-run the entire task (re-run all items): requires the WF
            execution ID, task name, and/or task input.
         b. Re-run a single item: requires the WF execution ID, task name,
            with-items index, and/or task input for the item.
         c. Re-run selected items: requires the WF execution ID, task name,
            with-items indices, and/or task input for each item.
      iii. To resume from the next task(s), the workflow-resume command
         requires the WF execution ID, failed task name, output for the
         failed task, and a flag to skip the failed task.

2. Make ERROR -> RUNNING a valid state transition @ the is_valid_transition
   function.

3. Add a comments field to the Execution model. Add a note that indicates
   the execution is launched by workflow-resume. Auto-populated in this case.

4. Resume from failed task.
   A. Re-run the task with the same task inputs: POST a new action execution
      for the task execution @ ActionExecutionsController.
   B. Re-run the task with different task inputs: POST a new action
      execution for the task execution, allowing different input @
      ActionExecutionsController.

5. Resume from next task(s).
   A. Inject a noop task execution or noop action execution (undecided yet)
      for the failed task with appropriate output. The spec is an ad hoc
      spec that copies conditions from the failed task. This provides some
      audit functionality and should trigger the next set of task executions
      (in case of branching and such).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-06-15 Thread W Chan
Resending to see if this fixes the formatting for outlines below.


I want to continue the discussion on the workflow resume feature.


Resuming from our last conversation @
http://lists.openstack.org/pipermail/openstack-dev/2015-March/060265.html.
I don't think we should limit how users resume. There may be different
possible scenarios. A user can fix the environment or condition that led to
the failure of the current task and just re-run the failed task.  Or the
user can fix the environment/condition, which includes fixing what the task
was doing, and then just continue with the next set of task(s).


The following is a list of proposed changes.


1. A new CLI operation to resume WF (i.e. mistral workflow-resume).

A. If no additional info is provided, assume this WF is manually paused
and there are no task/action execution errors. The WF state is updated to
RUNNING. Update using the put method @ ExecutionsController. The put method
checks that there are no task/action execution errors.

B. If WF is in an error state

i. To resume from failed task, the workflow-resume command requires
the WF execution ID, task name, and/or task input.

ii. To resume from failed with-items task

a. Re-run the entire task (re-run all items) requires WF
execution ID, task name and/or task input.

b. Re-run a single item requires WF execution ID, task name,
with-items index, and/or task input for the item.

c. Re-run selected items requires WF execution ID, task name,
with-items indices, and/or task input for each item.

iii. To resume from the next task(s), the workflow-resume command
requires the WF execution ID, failed task name, output for the failed task,
and a flag to skip the failed task.


2. Make ERROR -> RUNNING a valid state transition @ the is_valid_transition
function (see the sketch after this list).


3. Add a comments field to Execution model. Add a note that indicates the
execution is launched by workflow-resume. Auto-populated in this case.


4. Resume from failed task.

A. Re-run the task with the same task inputs: POST a new action execution
for the task execution @ ActionExecutionsController

B. Re-run the task with different task inputs: POST a new action execution
for the task execution, allowing different input @
ActionExecutionsController


5. Resume from next task(s).

A. Inject a noop task execution or noop action execution (undecided
yet) for the failed task with appropriate output.  The spec is an adhoc
spec that copies conditions from the failed task. This provides some audit
functionality and should trigger the next set of task executions (in case
of branching and such).
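
For item 2, the state machine change could be as small as something like
this (a sketch; Mistral's actual states module is structured differently):

# Sketch only: illustrates allowing ERROR -> RUNNING for resume.
VALID_TRANSITIONS = {
    'IDLE': ['RUNNING'],
    'RUNNING': ['PAUSED', 'SUCCESS', 'ERROR'],
    'PAUSED': ['RUNNING'],
    'ERROR': ['RUNNING'],   # new: allow resuming a failed execution
}


def is_valid_transition(from_state, to_state):
    if from_state == to_state:
        return True
    return to_state in VALID_TRANSITIONS.get(from_state, [])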
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev