+1
Lingxian, keep up the good work. :D
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
Here's the etherpad link. I replied to the comments/feedback there.
Please feel free to continue the conversation there.
https://etherpad.openstack.org/p/mistral-resume
I want to continue the discussion on the workflow resume feature.
Resuming from our last conversation @
http://lists.openstack.org/pipermail/openstack-dev/2015-March/060265.html.
I don't think we should limit how users resume. There may be different
possible scenarios. User can fix the
Mistral Team and Friends,
Thank you for giving me the opportunity to become a core member of the
Mistral team. I'm having an absolute blast developing and using Mistral. I'm
happy with the current progress and direction that Mistral is heading in. I
look forward to many more collaborations and
We assume the WF is in a paused/errored state when 1) the user manually
pauses the WF, 2) pause is specified on a transition (on-condition(s) such
as on-error), or 3) a task errors.
The resume feature will support the following use cases.
1) User resumes WF from manual pause.
2) In the case of task failure,
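The resumable-state assumption above boils down to a small state check. Here is a minimal, hypothetical Python sketch (not Mistral's actual engine code, and the state names are assumptions) of when a resume should be allowed:

```python
# Hypothetical sketch of the resume rules above: a WF execution may be
# resumed only from a paused or errored state. State names are assumptions.
RESUMABLE_STATES = {"PAUSED", "ERROR"}

def resume_workflow(execution):
    # `execution` stands in for the real DB model; resuming flips the
    # state back to RUNNING so the engine can continue scheduling tasks.
    if execution["state"] not in RESUMABLE_STATES:
        raise ValueError("Cannot resume from state %s" % execution["state"])
    execution["state"] = "RUNNING"
    return execution
```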
As a user of Mistral pretty regularly these days, I certainly prefer %
%. I agree with the other comments on devops familiarity. And looking at
this from another angle, it's certainly easier to type % % than the other
options, especially if you have to do this over and over again. LOL
Although, I
What you’re saying is that whatever is under “$.env” is just the exact same
environment that we passed when we started the workflow? If yes then it
definitely makes sense to me (it just allows to explicitly access
environment, not through the implicit variable lookup). Please confirm.
Yes.
Trying to clarify a few things...
* 2) Returning to the first example:
** ...
** action: std.sql conn_str={$.env.conn_str} query={$.query}
** ...
** $.env - is it the name of the environment, or will it be registered
syntax for accessing values from the env?
*
I was actually thinking the environment
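As a rough illustration of the explicit $.env access discussed above, here is a plain-Python sketch (the context layout is an assumption for illustration, not Mistral's actual YAQL evaluation):

```python
# Plain-Python illustration of the explicit "$.env" lookup: the environment
# passed at workflow start is exposed in the expression context under the
# 'env' key, next to the regular workflow input. Not actual YAQL evaluation.
def build_context(wf_input, env):
    ctx = dict(wf_input)
    ctx["env"] = dict(env)  # enables explicit access, e.g. $.env.conn_str
    return ctx

def resolve(ctx, path):
    # Resolve a dotted path such as "env.conn_str" against the context.
    value = ctx
    for part in path.split("."):
        value = value[part]
    return value
```

So in the std.sql example, {$.env.conn_str} would resolve against the environment while {$.query} resolves against the regular workflow input.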
After some online discussions with Renat, the following is a revision of
the proposal to address the following related blueprints.
*
https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment
* https://blueprints.launchpad.net/mistral/+spec/mistral-global-context
*
Renat,
We want to introduce the concept of an ActionProvider to Mistral. We are
thinking that with an ActionProvider, a third party system can extend
Mistral with its own action catalog and a set of dedicated and specialized
action executors. The ActionProvider will return its own list of
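A minimal sketch of what such an ActionProvider interface could look like (method names and signatures here are assumptions, not Mistral's final API):

```python
import abc

# Hedged sketch of the ActionProvider concept; method names and signatures
# are assumptions, not Mistral's final API.
class ActionProvider(abc.ABC):

    @abc.abstractmethod
    def list_actions(self):
        """Return the provider's action catalog (a list of action names)."""

    @abc.abstractmethod
    def get_action(self, name):
        """Return a callable implementing the named action."""

# A third-party system would ship its own provider exposing its catalog.
class DemoProvider(ActionProvider):
    _actions = {"demo.echo": (lambda **kw: kw)}

    def list_actions(self):
        return sorted(self._actions)

    def get_action(self, name):
        return self._actions[name]
```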
Renat, Dmitri,
On supplying the global context into the workflow execution...
In addition to Renat's proposal, I have a few here.
1) Pass them implicitly in start_workflow as another kwarg in the
**params. But on reflection, we should probably make the global context
explicitly defined in the WF
Renat,
Here's the blueprint.
https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context
I'm proposing to add *args and **kwargs to the __init__ methods of all
actions. The action context can be passed as a dict in the kwargs. The
global context and the env context can be provided
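A hedged sketch of the *args/**kwargs proposal above (the kwarg names action_context and env are assumptions for illustration):

```python
# Sketch of the *args/**kwargs proposal: contexts arrive as dicts in kwargs,
# so actions that ignore them keep working. The kwarg names 'action_context'
# and 'env' are assumptions for illustration.
class EchoAction(object):
    def __init__(self, output=None, *args, **kwargs):
        self.output = output
        self.action_context = kwargs.get("action_context", {})
        self.env = kwargs.get("env", {})

    def run(self):
        # The action decides what to do with the context it was given.
        return {"output": self.output,
                "workflow_id": self.action_context.get("workflow_id")}
```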
Nikolay,
Regarding whether the execution environment BP is the same as this global
context BP, I think the difference is in the scope of the variables. The
global context that I'm proposing is provided to the workflow at execution
and is only relevant to this execution. For example, some
Renat,
On sending events to an exchange, I mean an exchange on the transport
(i.e. rabbitMQ exchange
https://www.rabbitmq.com/tutorials/amqp-concepts.html). On implementation
we can probably explore the notification feature in oslo.messaging. But on
second thought, this would limit the
Renat,
Is there any reason why Mistral does not pass the action context, such as
the workflow ID, execution ID, task ID, etc., to all of the action
executions? I think it makes a lot of sense for that information to be made
available by default. The action can then decide what to do with the
Renat,
I agree with the two methods you proposed.
On processing the events, I was thinking a separate entity. But you gave
me an idea, how about a system action for publishing the events that the
current executors can run?
Alternately, instead of making HTTP calls, what do you think if mistral
Renat,
Alternately, what do you think if mistral just posts the events to the
given exchange(s) on the same transport backend and lets the subscribers
decide how to consume the events (i.e. post to a webhook, etc.) from these
exchanges? This will simplify the implementation somewhat. The engine can
just
Nikolay,
You're right. We will need to store the events in order to re-publish.
How about a separate Event model? The events are written to the DB by the
same worker that publishes the event. The retention policy for these
events is then managed by a config option.
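A rough sketch of such a separate Event model with a config-driven retention window (the option name and fields are assumptions, and a real version would use the DB):

```python
import time

# Sketch of a separate Event model with a retention window; the option name
# and fields are assumptions, and a real version would use the DB.
EVENT_RETENTION_SECONDS = 3600  # would come from a config option

class EventStore(object):
    def __init__(self):
        self._events = []

    def publish(self, name, payload, now=None):
        # Written to the store by the same worker that publishes the event.
        event = {"name": name, "payload": payload,
                 "created_at": time.time() if now is None else now}
        self._events.append(event)
        return event

    def prune(self, now=None):
        # Enforce the retention policy: drop events older than the window.
        now = time.time() if now is None else now
        cutoff = now - EVENT_RETENTION_SECONDS
        self._events = [e for e in self._events if e["created_at"] >= cutoff]

    def list_events(self):
        return list(self._events)
```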
Winson
Regarding blueprint to register event listeners to notify client
applications on state changes (
https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-http),
I want to propose the following.
1. Refer to this feature as event subscription instead of callback
2. Event subscription
On 28 Aug 2014, at 06:17, W Chan m4d.co...@gmail.com wrote:
Renat,
It will be helpful to perform a callback on completion of the async
workflow. Can we add on-finish to the workflow spec and when workflow
completes, runs task(s) defined in the on-finish section of the spec? This
will allow the workflow author to define how the callback is to be done.
, at 04:26, W Chan m4d.co...@gmail.com wrote:
Is there an existing unit test for testing enabling keystone middleware in
pecan (setting cfg.CONF.pecan.auth_enable = True)? I don't seem to find
one. If there's one, it's not obvious. Can someone kindly point me to it?
On Wed, May 28, 2014 at 9
Renat,
Regarding blueprint
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-executor-protocol,
can you clarify what it means by worker parallelism and
engine-executor parallelism?
Currently, the engine and executor are launched with the eventlet driver
in oslo.messaging. Once a
On 17/05/14 02:48, W Chan wrote:
Regarding config opts for keystone, the keystoneclient middleware already
registers the opts at
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
under a keystone_authtoken group
Currently, the various configurations are registered in
./mistral/config.py. The configurations are registered when mistral.config
is imported. Given the way the code is written, PEP8 throws an "imported
but not used" error if mistral.config is imported but not called in the
module. In various
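One possible way around this, sketched here with plain dicts standing in for oslo.config, is to register options through an explicit, idempotent function call instead of as an import side effect, so the import is no longer flagged as unused:

```python
# Sketch of the alternative: register options via an explicit, idempotent
# function call instead of as an import side effect. Plain dicts stand in
# for oslo.config here; the option values are illustrative.
_REGISTRY = {}

def register_opts():
    # Called explicitly from launch scripts; safe to call more than once.
    _REGISTRY.setdefault("api", {"host": "0.0.0.0", "port": 8989})
    _REGISTRY.setdefault("engine", {"host": "0.0.0.0"})
    return _REGISTRY
```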
Regarding
https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/executor/server.py#L123,
should the status be set to SUCCESS instead of RUNNING? If not, can
someone clarify why the task should remain RUNNING?
Thanks.
Winson
___
Nikolay, any better explanation?
DZ
On Fri, Mar 21, 2014 at 3:20 AM, Renat Akhmerov rakhme...@mirantis.com wrote:
Alright, thanks Winson!
Team, please review.
Renat Akhmerov
@ Mirantis Inc.
On 21 Mar 2014, at 06:43, W Chan m4d.co...@gmail.com wrote:
I submitted a rough draft for review @
https://review.openstack.org/#/c/81941
it?
Or I may misunderstand what you're trying to do...
DZ
PS: can you generate and update mistral.config.example to include the new
oslo messaging options? I forgot to mention it in the review in time.
On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
On the transport variable
Congratulations to the new core reviewers!
On Wed, Mar 19, 2014 at 10:54 PM, Renat Akhmerov rakhme...@mirantis.com wrote:
Thanks, guys!
Done.
On 20 Mar 2014, at 02:28, Timur Nurlygayanov tnurlygaya...@mirantis.com
wrote:
Also, in the future, we can add Kirill Izotov to the core team
Can the long running task be handled by putting the target task in the
workflow in a persisted state until either an event triggers it or timeout
occurs? An event (human approval or trigger from an external system) sent
to the transport will rejuvenate the task. The timeout is configurable by
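The wait-until-event-or-timeout idea above can be sketched in plain Python with threading.Event (real Mistral would persist the task and wake it via the transport, not block a thread):

```python
import threading

# Plain-Python sketch of the idea above: the task waits in a "persisted"
# state until an external event fires or the timeout elapses. Real Mistral
# would persist the task and be woken via the transport, not block a thread.
def wait_for_event(event, timeout):
    fired = event.wait(timeout)  # True if set before the timeout
    return "triggered" if fired else "timeout"
```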
https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and
https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.
Do you have any suggestion? Thanks.
On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov rakhme...@mirantis.com wrote:
On 13 Mar 2014, at 10:40, W Chan
:37, W Chan m4d.co...@gmail.com wrote:
Here're the proposed changes.
1) Rewrite the launch script to be more generic, with options to launch all
components (i.e. API, engine, executor) in the same process over separate
threads, or to launch each individually.
You mentioned
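A minimal sketch of option 1, a generic launcher running any subset of the components as threads in one process (the component names and serve functions are illustrative):

```python
import threading

# Sketch of proposal 1: a generic launcher that can run any subset of the
# components (API, engine, executor) in one process over separate threads.
# The registry maps component names to serve functions (illustrative only).
def launch(components, registry):
    threads = [threading.Thread(target=registry[name], name=name)
               for name in components]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # a real launcher would run until interrupted

started = []
registry = {name: (lambda n=name: started.append(n))
            for name in ("api", "engine", "executor")}
```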
I want to propose the following changes to implement the local executor and
removal of the local engine. As mentioned before, oslo.messaging includes
a fake driver that uses a simple queue. An example in the use of this
fake driver is demonstrated in test_executor. The use of the fake driver
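For illustration, the essence of such a fake driver is just an in-process queue standing in for the broker (this is a sketch of the idea, not oslo.messaging's actual implementation):

```python
import queue

# Sketch of what the fake driver amounts to: an in-process queue standing
# in for a real broker, which is enough to run engine and executor in one
# process (e.g. in tests or a local mode).
class FakeTransport(object):
    def __init__(self):
        self._q = queue.Queue()

    def cast(self, message):
        # Engine side: "send" a task over the fake transport.
        self._q.put(message)

    def poll(self, timeout=1):
        # Executor side: receive the next task.
        return self._q.get(timeout=timeout)
```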
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to
oslo.messaging
Looks good. Thanks, Winson!
Renat, What do you think?
On Wed, Feb 26, 2014 at 10:00 AM, W Chan m4d.co...@gmail.com wrote
RabbitMQ?
Renat Akhmerov
@ Mirantis Inc.
On 26 Feb 2014, at 14:30, Nikolay Makhotkin nmakhot...@mirantis.com
wrote:
On Wed, Feb 26, 2014 at 10:00 AM, W Chan m4d.co...@gmail.com wrote:
The following link is the google doc
...@mirantis.com
wrote:
On 25 Feb 2014, at 07:12, W Chan m4d.co...@gmail.com wrote:
As I understand, the local engine runs the task immediately whereas
the scalable engine sends it over the message queue to one or more
executors.
Correct.
Note: 'local' is a confusing term here; 'in process'
through other executor/engine communications.
We talked about executor communicating to engine over 3 channels (DB,
REST, RabbitMQ) which I wasn't happy about ;) and put it off for some time.
May be it can be rationalized as part of your design.
DZ.
On Feb 24, 2014, at 11:21 AM, W Chan m4d.co
Renat,
Regarding your comments on change https://review.openstack.org/#/c/75609/,
I don't think the port to oslo.messaging is just a swap from pika to
oslo.messaging. OpenStack services, as I understand, are usually implemented
as an RPC client/server over a messaging transport. Sync vs async
As I understand, the local engine runs the task immediately whereas the
scalable engine sends it over the message queue to one or more executors.
In what circumstances would we see a Mistral user using a local engine
(other than testing) instead of the scalable engine?
If we are keeping the
Will Mistral be supporting custom actions developed by users? If so,
should the Actions module be refactored into individual plugins with a
dynamic process for action type mapping/lookup?
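A minimal sketch of such a plugin registry with dynamic name-to-class lookup (everything here is illustrative; a real implementation might discover plugins via stevedore entry points instead of an in-process dict):

```python
# Sketch of an action plugin registry with dynamic name -> class lookup.
# Names are illustrative; a real implementation might discover plugins via
# stevedore entry points instead of an in-process dict.
_ACTION_REGISTRY = {}

def register_action(name):
    def decorator(cls):
        _ACTION_REGISTRY[name] = cls
        return cls
    return decorator

def lookup_action(name):
    try:
        return _ACTION_REGISTRY[name]
    except KeyError:
        raise LookupError("Unknown action type: %s" % name)

@register_action("std.echo")
class EchoAction(object):
    def __init__(self, output):
        self.output = output

    def run(self):
        return self.output
```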
Thanks.
Winson
___