Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-24 Thread W Chan
I have the following murano-ci failure for my last patch set.
https://murano-ci.mirantis.com/job/mistral_master_on_commit/194/  Since I
modified the API launch script in mistral, is that the cause of this
failure here?  Do I have to make changes to the tempest test?  Please
advise.  Thanks.


On Fri, Mar 21, 2014 at 3:20 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Alright, thanks Winson!

 Team, please review.

 Renat Akhmerov
 @ Mirantis Inc.



 On 21 Mar 2014, at 06:43, W Chan m4d.co...@gmail.com wrote:

 I submitted a rough draft for review @
 https://review.openstack.org/#/c/81941/.  Instead of using the pecan
 hook, I added a class property for the transport in the abstract engine
 class.  On the pecan app setup, I passed the shared transport to the engine
 on load.  Please provide feedback.  Thanks.


 On Mon, Mar 17, 2014 at 9:37 AM, Ryan Petrello 
 ryan.petre...@dreamhost.com wrote:

 Changing the configuration object at runtime is not thread-safe.  If you
 want to share objects with controllers, I'd suggest checking out Pecan's
 hook functionality.


 http://pecan.readthedocs.org/en/latest/hooks.html#implementating-a-pecan-hook

 e.g.,

 class SpecialContextHook(object):

     def __init__(self, some_obj):
         self.some_obj = some_obj

     def before(self, state):
         # In any pecan controller, `pecan.request` is a thread-local
         # webob.Request instance, allowing you to access
         # `pecan.request.context['foo']` in your controllers.  In this
         # example, self.some_obj could be just about anything - a Python
         # primitive, or an instance of some class
         state.request.context = {
             'foo': self.some_obj
         }

 ...

 wsgi_app = pecan.Pecan(
     my_package.controllers.root.RootController(),
     hooks=[SpecialContextHook(SomeObj(1, 2, 3))]
 )

 ---
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com

 On Mar 14, 2014, at 8:53 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

  Take a look at method get_pecan_config() in mistral/api/app.py. It's
 where you can pass any parameters into pecan app (see a dictionary
 'cfg_dict' initialization). They can be then accessed via pecan.conf as
 described here:
 http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
 If I understood the problem correctly this should be helpful.
 
  Renat Akhmerov
  @ Mirantis Inc.
 
 
 
  On 14 Mar 2014, at 05:14, Dmitri Zimine d...@stackstorm.com wrote:
 
  We have access to all configuration parameters in the context of
 api.py. Maybe you don't pass it but just instantiate it where you need it?
 Or I may misunderstand what you're trying to do...
 
  DZ
 
  PS: can you generate and update mistral.config.example to include the new
 oslo.messaging options? I forgot to mention it in time during the review.
 
 
  On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
 
  On the transport variable, the problem I see isn't with passing the
 variable to the engine and executor.  It's passing the transport into the
 API layer.  The API layer is a pecan app and I currently don't see a way
 where the transport variable can be passed to it directly.  I'm looking at
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and
 https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.
  Do you have any suggestion?  Thanks.
 
 
  On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov 
 rakhme...@mirantis.com wrote:
 
  On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
 * I can write a method in base test to start local executor.  I
 will do that as a separate bp.
  Ok.
 
 * After the engine is made standalone, the API will communicate
 to the engine and the engine to the executor via the oslo.messaging
 transport.  This means that for the local option, we need to start all
 three components (API, engine, and executor) on the same process.  If the
 long term goal as you stated above is to use separate launchers for these
 components, this means that the API launcher needs to duplicate all the
 logic to launch the engine and the executor. Hence, my proposal here is to
 move the logic to launch the components into a common module and either
 have a single generic launch script that launches specific components based
 on the CLI options or have separate launch scripts that reference the
 appropriate launch function from the common module.
  Ok, I see your point. Then I would suggest we have one script which
 we could use to run all the components (any subset of them). So for
 those components we specified when launching the script we use this local
 transport. Btw, scheduler eventually should become a standalone component
 too, so we have 4 components.
 
 * The RPC client/server in oslo.messaging do not determine the
 transport.  The transport is determined via oslo.config and then given
 explicitly to the RPC client/server.
 

Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-21 Thread Renat Akhmerov
Alright, thanks Winson!

Team, please review.

Renat Akhmerov
@ Mirantis Inc.



On 21 Mar 2014, at 06:43, W Chan m4d.co...@gmail.com wrote:

 I submitted a rough draft for review @ 
 https://review.openstack.org/#/c/81941/.  Instead of using the pecan hook, I 
 added a class property for the transport in the abstract engine class.  On 
 the pecan app setup, I passed the shared transport to the engine on load.  
 Please provide feedback.  Thanks.
 
 
 On Mon, Mar 17, 2014 at 9:37 AM, Ryan Petrello ryan.petre...@dreamhost.com 
 wrote:
 Changing the configuration object at runtime is not thread-safe.  If you want 
 to share objects with controllers, I’d suggest checking out Pecan’s hook 
 functionality.
 
 http://pecan.readthedocs.org/en/latest/hooks.html#implementating-a-pecan-hook
 
 e.g.,
 
 class SpecialContextHook(object):

     def __init__(self, some_obj):
         self.some_obj = some_obj

     def before(self, state):
         # In any pecan controller, `pecan.request` is a thread-local
         # webob.Request instance, allowing you to access
         # `pecan.request.context['foo']` in your controllers.  In this
         # example, self.some_obj could be just about anything - a Python
         # primitive, or an instance of some class
         state.request.context = {
             'foo': self.some_obj
         }

 ...

 wsgi_app = pecan.Pecan(
     my_package.controllers.root.RootController(),
     hooks=[SpecialContextHook(SomeObj(1, 2, 3))]
 )
 
 ---
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com
 
 On Mar 14, 2014, at 8:53 AM, Renat Akhmerov rakhme...@mirantis.com wrote:
 
  Take a look at method get_pecan_config() in mistral/api/app.py. It’s where 
  you can pass any parameters into pecan app (see a dictionary ‘cfg_dict’ 
  initialization). They can be then accessed via pecan.conf as described 
  here: 
  http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
   If I understood the problem correctly this should be helpful.
 
  Renat Akhmerov
  @ Mirantis Inc.
 
 
 
  On 14 Mar 2014, at 05:14, Dmitri Zimine d...@stackstorm.com wrote:
 
  We have access to all configuration parameters in the context of api.py. 
  Maybe you don't pass it but just instantiate it where you need it? Or I 
  may misunderstand what you're trying to do...
 
  DZ
 
  PS: can you generate and update mistral.config.example to include the new 
  oslo.messaging options? I forgot to mention it in time during the review.
 
 
  On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
 
  On the transport variable, the problem I see isn't with passing the 
  variable to the engine and executor.  It's passing the transport into the 
  API layer.  The API layer is a pecan app and I currently don't see a way 
  where the transport variable can be passed to it directly.  I'm looking 
  at 
  https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 
  and 
  https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44. 
   Do you have any suggestion?  Thanks.
 
 
  On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov rakhme...@mirantis.com 
  wrote:
 
  On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
 • I can write a method in base test to start local executor.  I will 
  do that as a separate bp.
  Ok.
 
 • After the engine is made standalone, the API will communicate to 
  the engine and the engine to the executor via the oslo.messaging 
  transport.  This means that for the local option, we need to start all 
  three components (API, engine, and executor) on the same process.  If 
  the long term goal as you stated above is to use separate launchers for 
  these components, this means that the API launcher needs to duplicate 
  all the logic to launch the engine and the executor. Hence, my proposal 
  here is to move the logic to launch the components into a common module 
  and either have a single generic launch script that launches specific 
  components based on the CLI options or have separate launch scripts that 
  reference the appropriate launch function from the common module.
  Ok, I see your point. Then I would suggest we have one script which we 
  could use to run all the components (any subset of them). So for those 
  components we specified when launching the script we use this local 
  transport. Btw, scheduler eventually should become a standalone component 
  too, so we have 4 components.
 
 • The RPC client/server in oslo.messaging do not determine the 
  transport.  The transport is determined via oslo.config and then given 
  explicitly to the RPC client/server.  
  https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
   and 
  https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
  are examples for the client and server respectively.  The in-process 
  Queue is instantiated within this transport object from the fake driver. 
   For the local option, all 

Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-20 Thread W Chan
I submitted a rough draft for review @
https://review.openstack.org/#/c/81941/.  Instead of using the pecan hook,
I added a class property for the transport in the abstract engine class.
 On the pecan app setup, I passed the shared transport to the engine on
load.  Please provide feedback.  Thanks.


On Mon, Mar 17, 2014 at 9:37 AM, Ryan Petrello
ryan.petre...@dreamhost.com wrote:

 Changing the configuration object at runtime is not thread-safe.  If you
 want to share objects with controllers, I'd suggest checking out Pecan's
 hook functionality.


 http://pecan.readthedocs.org/en/latest/hooks.html#implementating-a-pecan-hook

 e.g.,

 class SpecialContextHook(object):

     def __init__(self, some_obj):
         self.some_obj = some_obj

     def before(self, state):
         # In any pecan controller, `pecan.request` is a thread-local
         # webob.Request instance, allowing you to access
         # `pecan.request.context['foo']` in your controllers.  In this
         # example, self.some_obj could be just about anything - a Python
         # primitive, or an instance of some class
         state.request.context = {
             'foo': self.some_obj
         }

 ...

 wsgi_app = pecan.Pecan(
     my_package.controllers.root.RootController(),
     hooks=[SpecialContextHook(SomeObj(1, 2, 3))]
 )

 ---
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com

 On Mar 14, 2014, at 8:53 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

  Take a look at method get_pecan_config() in mistral/api/app.py. It's
 where you can pass any parameters into pecan app (see a dictionary
 'cfg_dict' initialization). They can be then accessed via pecan.conf as
 described here:
 http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
 If I understood the problem correctly this should be helpful.
 
  Renat Akhmerov
  @ Mirantis Inc.
 
 
 
  On 14 Mar 2014, at 05:14, Dmitri Zimine d...@stackstorm.com wrote:
 
  We have access to all configuration parameters in the context of
 api.py. Maybe you don't pass it but just instantiate it where you need it?
 Or I may misunderstand what you're trying to do...
 
  DZ
 
  PS: can you generate and update mistral.config.example to include the new
 oslo.messaging options? I forgot to mention it in time during the review.
 
 
  On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
 
  On the transport variable, the problem I see isn't with passing the
 variable to the engine and executor.  It's passing the transport into the
 API layer.  The API layer is a pecan app and I currently don't see a way
 where the transport variable can be passed to it directly.  I'm looking at
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and
 https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.
  Do you have any suggestion?  Thanks.
 
 
  On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov 
 rakhme...@mirantis.com wrote:
 
  On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
 * I can write a method in base test to start local executor.  I
 will do that as a separate bp.
  Ok.
 
 * After the engine is made standalone, the API will communicate to
 the engine and the engine to the executor via the oslo.messaging transport.
  This means that for the local option, we need to start all three
 components (API, engine, and executor) on the same process.  If the long
 term goal as you stated above is to use separate launchers for these
 components, this means that the API launcher needs to duplicate all the
 logic to launch the engine and the executor. Hence, my proposal here is to
 move the logic to launch the components into a common module and either
 have a single generic launch script that launches specific components based
 on the CLI options or have separate launch scripts that reference the
 appropriate launch function from the common module.
  Ok, I see your point. Then I would suggest we have one script which we
 could use to run all the components (any subset of them). So for those
 components we specified when launching the script we use this local
 transport. Btw, scheduler eventually should become a standalone component
 too, so we have 4 components.
 
 * The RPC client/server in oslo.messaging do not determine the
 transport.  The transport is determined via oslo.config and then given
 explicitly to the RPC client/server.
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31 and
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63 are
 examples for the client and server respectively.  The in-process Queue
 is instantiated within this transport object from the fake driver.  For the
 local option, all three components need to share the same transport in
 order to have the Queue in scope. Thus, we will need some method to have
 this transport object visible to all three components and hence my proposal
 to use a global variable and a factory method.
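
To illustrate why the fake driver forces this, here is a minimal pure-Python sketch. FakeTransport, Client and Server are illustrative stand-ins, not oslo.messaging's actual classes; the point is that the in-process queue lives inside the transport object, so a client and a server only see each other when they hold the same instance.

```python
# Illustrative stand-ins only -- not oslo.messaging's real API.
# The fake driver's queue lives *inside* the transport object, so
# components wired to different transports cannot talk to each other.
import queue

class FakeTransport(object):
    def __init__(self):
        self._queue = queue.Queue()  # the in-process queue

class Client(object):
    def __init__(self, transport):
        self._transport = transport

    def cast(self, msg):
        self._transport._queue.put(msg)

class Server(object):
    def __init__(self, transport):
        self._transport = transport

    def poll(self):
        return self._transport._queue.get_nowait()

shared = FakeTransport()
Client(shared).cast('run_task')
msg = Server(shared).poll()  # delivered: client and server share one transport

other = FakeTransport()
# A server holding a different transport never sees the message;
# Server(other).poll() raises queue.Empty.
```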
  

Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-17 Thread Ryan Petrello
Changing the configuration object at runtime is not thread-safe.  If you want 
to share objects with controllers, I’d suggest checking out Pecan’s hook 
functionality.

http://pecan.readthedocs.org/en/latest/hooks.html#implementating-a-pecan-hook

e.g.,

class SpecialContextHook(object):

    def __init__(self, some_obj):
        self.some_obj = some_obj

    def before(self, state):
        # In any pecan controller, `pecan.request` is a thread-local
        # webob.Request instance, allowing you to access
        # `pecan.request.context['foo']` in your controllers.  In this
        # example, self.some_obj could be just about anything - a Python
        # primitive, or an instance of some class
        state.request.context = {
            'foo': self.some_obj
        }

...

wsgi_app = pecan.Pecan(
    my_package.controllers.root.RootController(),
    hooks=[SpecialContextHook(SomeObj(1, 2, 3))]
)
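
The hook flow is simple enough to sketch without Pecan installed. In the following stand-alone approximation, State, Request and handle() are simplified stand-ins for Pecan internals; only SpecialContextHook mirrors the example above. It shows before() running ahead of the controller and stashing the shared object where the controller can reach it.

```python
# Stand-alone approximation of the hook flow; State, Request and
# handle() are simplified stand-ins, not Pecan internals.

class Request(object):
    def __init__(self):
        self.context = {}

class State(object):
    def __init__(self):
        self.request = Request()

class SpecialContextHook(object):
    def __init__(self, some_obj):
        self.some_obj = some_obj

    def before(self, state):
        # Runs before the controller; stashes the shared object where
        # the controller can reach it as request.context['foo'].
        state.request.context['foo'] = self.some_obj

def handle(hooks, controller):
    # Roughly what the framework does per request: fire before() hooks,
    # then call the controller with the (thread-local) request.
    state = State()
    for hook in hooks:
        hook.before(state)
    return controller(state.request)

shared_obj = {'transport': 'some-shared-object'}
result = handle([SpecialContextHook(shared_obj)],
                lambda request: request.context['foo'])
```

The controller gets back exactly the object the hook was constructed with, which is the sharing mechanism being suggested for the transport.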

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

On Mar 14, 2014, at 8:53 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Take a look at method get_pecan_config() in mistral/api/app.py. It’s where 
 you can pass any parameters into pecan app (see a dictionary ‘cfg_dict’ 
 initialization). They can be then accessed via pecan.conf as described here: 
 http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
  If I understood the problem correctly this should be helpful.
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 14 Mar 2014, at 05:14, Dmitri Zimine d...@stackstorm.com wrote:
 
  We have access to all configuration parameters in the context of api.py. Maybe 
  you don't pass it but just instantiate it where you need it? Or I may 
 misunderstand what you're trying to do...
 
 DZ 
 
  PS: can you generate and update mistral.config.example to include the new 
  oslo.messaging options? I forgot to mention it in time during the review. 
 
 
 On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
 
 On the transport variable, the problem I see isn't with passing the 
 variable to the engine and executor.  It's passing the transport into the 
 API layer.  The API layer is a pecan app and I currently don't see a way 
 where the transport variable can be passed to it directly.  I'm looking at 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 
 and 
 https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.  
 Do you have any suggestion?  Thanks. 
 
 
 On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
• I can write a method in base test to start local executor.  I will do 
 that as a separate bp.  
 Ok.
 
• After the engine is made standalone, the API will communicate to the 
 engine and the engine to the executor via the oslo.messaging transport.  
 This means that for the local option, we need to start all three 
 components (API, engine, and executor) on the same process.  If the long 
 term goal as you stated above is to use separate launchers for these 
 components, this means that the API launcher needs to duplicate all the 
 logic to launch the engine and the executor. Hence, my proposal here is to 
 move the logic to launch the components into a common module and either 
  have a single generic launch script that launches specific components based 
 on the CLI options or have separate launch scripts that reference the 
 appropriate launch function from the common module.
 Ok, I see your point. Then I would suggest we have one script which we 
  could use to run all the components (any subset of them). So for those 
 components we specified when launching the script we use this local 
 transport. Btw, scheduler eventually should become a standalone component 
 too, so we have 4 components.
 
• The RPC client/server in oslo.messaging do not determine the 
  transport.  The transport is determined via oslo.config and then given 
 explicitly to the RPC client/server.  
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
  and 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
  are examples for the client and server respectively.  The in-process 
  Queue is instantiated within this transport object from the fake driver.  
 For the local option, all three components need to share the same 
 transport in order to have the Queue in scope. Thus, we will need some 
 method to have this transport object visible to all three components and 
 hence my proposal to use a global variable and a factory method. 
 I’m still not sure I follow your point here.. Looking at the links you 
 provided I see this:
 
 transport = messaging.get_transport(cfg.CONF)
 
 So my point here is we can make this call once in the launching script and 
 pass it to engine/executor (and now API too if we want it to be launched by 
  the same script). Of course, we’ll have to change the way we initialize 

Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-14 Thread Renat Akhmerov
Take a look at method get_pecan_config() in mistral/api/app.py. It’s where you 
can pass any parameters into pecan app (see a dictionary ‘cfg_dict’ 
initialization). They can be then accessed via pecan.conf as described here: 
http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
 If I understood the problem correctly this should be helpful.
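
As a rough illustration of that mechanism, a plain dict passed at app-setup time can be exposed through dotted attribute access the way pecan.conf is. This Config class is a simplified stand-in, not Pecan's implementation, and the 'transport' key is a hypothetical extra parameter.

```python
# Simplified stand-in for how a config dict passed at app-setup time is
# exposed via dotted access (pecan.conf style). Illustrative only; the
# 'transport' key is a hypothetical extra parameter.

class Config(object):
    def __init__(self, conf_dict):
        self._conf = dict(conf_dict)

    def __getattr__(self, name):
        # Only called for attributes not found normally, i.e. config keys.
        try:
            value = self._conf[name]
        except KeyError:
            raise AttributeError(name)
        # Nested dicts become nested Config objects: conf.app.root
        return Config(value) if isinstance(value, dict) else value

# get_pecan_config() in mistral/api/app.py builds a dictionary like this:
cfg_dict = {
    'app': {'root': 'mistral.api.controllers.root.RootController'},
    'transport': 'shared-transport-object',  # hypothetical extra entry
}
conf = Config(cfg_dict)
```

Anything placed in cfg_dict at setup time is then reachable from any controller as an attribute, e.g. conf.transport.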

Renat Akhmerov
@ Mirantis Inc.



On 14 Mar 2014, at 05:14, Dmitri Zimine d...@stackstorm.com wrote:

  We have access to all configuration parameters in the context of api.py. Maybe 
  you don't pass it but just instantiate it where you need it? Or I may 
 misunderstand what you're trying to do...
 
 DZ 
 
  PS: can you generate and update mistral.config.example to include the new 
  oslo.messaging options? I forgot to mention it in time during the review. 
 
 
 On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
 
 On the transport variable, the problem I see isn't with passing the variable 
 to the engine and executor.  It's passing the transport into the API layer.  
 The API layer is a pecan app and I currently don't see a way where the 
 transport variable can be passed to it directly.  I'm looking at 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and 
 https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.  
 Do you have any suggestion?  Thanks. 
 
 
 On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
 I can write a method in base test to start local executor.  I will do that 
 as a separate bp.  
 Ok.
 
 After the engine is made standalone, the API will communicate to the engine 
 and the engine to the executor via the oslo.messaging transport.  This 
 means that for the local option, we need to start all three components 
 (API, engine, and executor) on the same process.  If the long term goal as 
 you stated above is to use separate launchers for these components, this 
 means that the API launcher needs to duplicate all the logic to launch the 
 engine and the executor. Hence, my proposal here is to move the logic to 
 launch the components into a common module and either have a single generic 
 launch script that launch specific components based on the CLI options or 
 have separate launch scripts that reference the appropriate launch function 
 from the common module.
 
 Ok, I see your point. Then I would suggest we have one script which we could 
  use to run all the components (any subset of them). So for those 
 components we specified when launching the script we use this local 
 transport. Btw, scheduler eventually should become a standalone component 
 too, so we have 4 components.
 
 The RPC client/server in oslo.messaging do not determine the transport.  
  The transport is determined via oslo.config and then given explicitly to the 
 RPC client/server.  
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
  and 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
  are examples for the client and server respectively.  The in-process Queue 
 is instantiated within this transport object from the fake driver.  For the 
 local option, all three components need to share the same transport in 
 order to have the Queue in scope. Thus, we will need some method to have 
 this transport object visible to all three components and hence my proposal 
 to use a global variable and a factory method. 
 I’m still not sure I follow your point here.. Looking at the links you 
 provided I see this:
 
 transport = messaging.get_transport(cfg.CONF)
 
 So my point here is we can make this call once in the launching script and 
 pass it to engine/executor (and now API too if we want it to be launched by 
  the same script). Of course, we’ll have to change the way we initialize 
 these components, but I believe we can do it. So it’s just a dependency 
 injection. And in this case we wouldn’t need to use a global variable. Am I 
 still missing something?
 
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-13 Thread Renat Akhmerov

On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:

 I can write a method in base test to start local executor.  I will do that as 
 a separate bp.  
Ok.
 After the engine is made standalone, the API will communicate to the engine 
 and the engine to the executor via the oslo.messaging transport.  This means 
 that for the local option, we need to start all three components (API, 
 engine, and executor) on the same process.  If the long term goal as you 
 stated above is to use separate launchers for these components, this means 
 that the API launcher needs to duplicate all the logic to launch the engine 
 and the executor. Hence, my proposal here is to move the logic to launch the 
 components into a common module and either have a single generic launch 
 script that launches specific components based on the CLI options or have 
 separate launch scripts that reference the appropriate launch function from 
 the common module.
Ok, I see your point. Then I would suggest we have one script which we could 
use to run all the components (any subset of them). So for those components 
we specified when launching the script we use this local transport. Btw, 
scheduler eventually should become a standalone component too, so we have 4 
components.
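
A minimal sketch of such a generic launch script follows. The component names and the LAUNCHERS registry are hypothetical, and a real launcher would create one shared transport here and block on server loops rather than return component names.

```python
# Hypothetical generic launcher: one script, a --server option choosing
# which components to run. Illustrative only; a real launcher would
# create one shared transport here and block on server loops.
import argparse

def launch_api():
    return 'api'

def launch_engine():
    return 'engine'

def launch_executor():
    return 'executor'

LAUNCHERS = {
    'api': launch_api,
    'engine': launch_engine,
    'executor': launch_executor,
}

def main(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--server', default='all',
        help="comma-separated subset of api,engine,executor, or 'all'")
    args = parser.parse_args(argv)
    names = list(LAUNCHERS) if args.server == 'all' else args.server.split(',')
    # All selected components would share the transport created here.
    return [LAUNCHERS[name]() for name in names]
```

For example, `--server api,engine` starts just those two in one process, while the default runs every registered component.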

 The RPC client/server in oslo.messaging do not determine the transport.  The 
 transport is determined via oslo.config and then given explicitly to the RPC 
 client/server.  
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
  and 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
 are examples for the client and server respectively.  The in-process Queue 
 is instantiated within this transport object from the fake driver.  For the 
 local option, all three components need to share the same transport in 
 order to have the Queue in scope. Thus, we will need some method to have this 
 transport object visible to all three components and hence my proposal to use 
 a global variable and a factory method. 
I’m still not sure I follow your point here.. Looking at the links you provided 
I see this:

transport = messaging.get_transport(cfg.CONF)

So my point here is we can make this call once in the launching script and pass 
it to engine/executor (and now API too if we want it to be launched by the same 
script). Of course, we’ll have to change the way we initialize these 
components, but I believe we can do it. So it’s just a dependency injection. 
And in this case we wouldn’t need to use a global variable. Am I still missing 
something?
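
A bare-bones sketch of the dependency injection being proposed. The class names are placeholders rather than Mistral's real classes, and create_transport() stands in for the single messaging.get_transport(cfg.CONF) call in the launch script.

```python
# Placeholder classes sketching the dependency injection: the launch
# script creates the transport once and hands the same object to every
# component, so no global variable is needed. Not Mistral's real code.

class Transport(object):
    """Stand-in for an oslo.messaging transport object."""

def create_transport():
    # Real code: transport = messaging.get_transport(cfg.CONF)
    return Transport()

class Api(object):
    def __init__(self, transport):
        self.transport = transport

class Engine(object):
    def __init__(self, transport):
        self.transport = transport

class Executor(object):
    def __init__(self, transport):
        self.transport = transport

def launch_all():
    # One get_transport() call in the launching script; every component
    # receives the same instance through its constructor.
    transport = create_transport()
    return Api(transport), Engine(transport), Executor(transport)

api, engine, executor = launch_all()
```

Because all three constructors receive the same instance, the fake driver's in-process queue is in scope everywhere without any module-level state.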


Renat Akhmerov
@ Mirantis Inc.



Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-13 Thread W Chan
On the transport variable, the problem I see isn't with passing the
variable to the engine and executor.  It's passing the transport into the
API layer.  The API layer is a pecan app and I currently don't see a way
where the transport variable can be passed to it directly.  I'm looking at
https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and
https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.
 Do you have any suggestion?  Thanks.


On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov rakhme...@mirantis.com wrote:


 On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:


- I can write a method in base test to start local executor.  I will
do that as a separate bp.

 Ok.


- After the engine is made standalone, the API will communicate to the
engine and the engine to the executor via the oslo.messaging transport.
 This means that for the local option, we need to start all three
components (API, engine, and executor) on the same process.  If the long
term goal as you stated above is to use separate launchers for these
components, this means that the API launcher needs to duplicate all the
logic to launch the engine and the executor. Hence, my proposal here is to
move the logic to launch the components into a common module and either
have a single generic launch script that launches specific components based
on the CLI options or have separate launch scripts that reference the
appropriate launch function from the common module.

 Ok, I see your point. Then I would suggest we have one script which we
 could use to run all the components (any subset of them). So for those
 components we specified when launching the script we use this local
 transport. Btw, scheduler eventually should become a standalone component
 too, so we have 4 components.


- The RPC client/server in oslo.messaging do not determine the
transport.  The transport is determined via oslo.config and then given
explicitly to the RPC client/server.

 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31 and

 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63 are
 examples for the client and server respectively.  The in-process Queue
is instantiated within this transport object from the fake driver.  For the
local option, all three components need to share the same transport in
order to have the Queue in scope. Thus, we will need some method to have
this transport object visible to all three components and hence my proposal
to use a global variable and a factory method.

 I'm still not sure I follow your point here.. Looking at the links you
 provided I see this:

 transport = messaging.get_transport(cfg.CONF)

 So my point here is we can make this call once in the launching script and
 pass it to engine/executor (and now API too if we want it to be launched by
 the same script). Of course, we'll have to change the way we initialize
 these components, but I believe we can do it. So it's just a dependency
 injection. And in this case we wouldn't need to use a global variable. Am I
 still missing something?


 Renat Akhmerov
 @ Mirantis Inc.




Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-13 Thread Dmitri Zimine
We have access to all configuration parameters in the context of api.py. Maybe 
you don't pass it but just instantiate it where you need it? Or I may 
misunderstand what you're trying to do...

DZ 

PS: can you generate and update mistral.config.example to include the new 
oslo.messaging options? I forgot to mention it in time during the review. 


On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:

 On the transport variable, the problem I see isn't with passing the variable 
 to the engine and executor.  It's passing the transport into the API layer.  
 The API layer is a pecan app and I currently don't see a way where the 
 transport variable can be passed to it directly.  I'm looking at 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and 
 https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.  Do 
 you have any suggestion?  Thanks. 
 
 
 On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
 I can write a method in base test to start local executor.  I will do that 
 as a separate bp.  
 Ok.
 
 After the engine is made standalone, the API will communicate to the engine 
 and the engine to the executor via the oslo.messaging transport.  This means 
 that for the local option, we need to start all three components (API, 
 engine, and executor) on the same process.  If the long term goal as you 
 stated above is to use separate launchers for these components, this means 
 that the API launcher needs to duplicate all the logic to launch the engine 
 and the executor. Hence, my proposal here is to move the logic to launch the 
 components into a common module and either have a single generic launch 
 script that launches specific components based on the CLI options or have 
 separate launch scripts that reference the appropriate launch function from 
 the common module.
 
 Ok, I see your point. Then I would suggest we have one script which we could 
 use to run all the components (any subset of of them). So for those 
 components we specified when launching the script we use this local 
 transport. Btw, scheduler eventually should become a standalone component 
 too, so we have 4 components.
 
 The RPC client/server in oslo.messaging do not determine the transport.  The 
 transport is determined via oslo.config and then given explicitly to the RPC 
 client/server.  
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
  and 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
  are examples for the client and server respectively.  The in process Queue 
 is instantiated within this transport object from the fake driver.  For the 
 local option, all three components need to share the same transport in 
 order to have the Queue in scope. Thus, we will need some method to have 
 this transport object visible to all three components and hence my proposal 
 to use a global variable and a factory method. 
 I’m still not sure I follow your point here.. Looking at the links you 
 provided I see this:
 
 transport = messaging.get_transport(cfg.CONF)
 
 So my point here is we can make this call once in the launching script and 
 pass it to engine/executor (and now API too if we want it to be launched by 
 the same script). Of course, we’ll have to change the way we initialize 
 these components, but I believe we can do it. So it’s just a dependency 
 injection. And in this case we wouldn’t need to use a global variable. Am I 
 still missing something?
 
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 



Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-12 Thread W Chan
   - I can write a method in base test to start local executor.  I will do
   that as a separate bp.
   - After the engine is made standalone, the API will communicate to the
   engine and the engine to the executor via the oslo.messaging transport.
This means that for the local option, we need to start all three
   components (API, engine, and executor) on the same process.  If the long
   term goal as you stated above is to use separate launchers for these
   components, this means that the API launcher needs to duplicate all the
   logic to launch the engine and the executor. Hence, my proposal here is to
   move the logic to launch the components into a common module and either
   have a single generic launch script that launches specific components based
   on the CLI options or have separate launch scripts that reference the
   appropriate launch function from the common module.
   - The RPC client/server in oslo.messaging do not determine the
   transport.  The transport is determined via oslo.config and then given
   explicitly to the RPC client/server.
   
https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
   and
https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
   are examples for the client and server respectively.  The in-process Queue
   is instantiated within this transport object from the fake driver.  For the
   local option, all three components need to share the same transport in
   order to have the Queue in scope. Thus, we will need some method to have
   this transport object visible to all three components and hence my proposal
   to use a global variable and a factory method.
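
The global-variable-plus-factory proposal above could be sketched like this (hypothetical names; in Mistral the `create` callable would be `messaging.get_transport(cfg.CONF)`):

```python
# Module-level transport shared by API, engine, and executor, so the
# fake driver's in-process queue stays in scope for all three.
_transport = None


def get_transport(create=lambda: object()):
    """Return the process-wide transport, creating it on first use.

    In Mistral, ``create`` would call messaging.get_transport(cfg.CONF);
    here a plain object stands in for it.
    """
    global _transport
    if _transport is None:
        _transport = create()
    return _transport


shared = get_transport()
```

Every component that calls the factory gets the same object, which is exactly the property the fake driver's queue requires.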



On Tue, Mar 11, 2014 at 10:34 PM, Renat Akhmerov rakhme...@mirantis.comwrote:


 On 12 Mar 2014, at 06:37, W Chan m4d.co...@gmail.com wrote:

 Here're the proposed changes.
 1) Rewrite the launch script to be more generic, with options to launch all
 components (i.e. API, engine, executor) in the same process but on separate
 threads, or to launch each individually.


 You mentioned test_executor.py so I think it would make sense first to
 refactor the code in there related to acquiring transport and launching
 executor. My suggestions are:

- In test base class (mistral.tests.base.BaseTest) create the new
method *start_local_executor()* that would deal with getting a fake
driver inside and all that stuff. This would be enough for tests where we
need to run engine and check something. start_local_executor() can be just
a part of setUp() method for such tests.
- As for the launch script I have the following thoughts:
   - Long-term launch scripts should be different for all API, engine
   and executor. Now API and engine start within the same process but it's
   just a temporary solution.
   - Launch script for engine (which is the same as API's for now)
   should have an option *--use-local-executor* to be able to run an
   executor along with engine itself within the same process.


 2) Move the transport to a global variable, similar to the global _engine,
 shared by the different components.


 Not sure why we need it. Can you please explain in more detail here? The
 better way would be to initialize engine and executor with transport when
 we create them. If our current structure doesn't allow this easily we
 should discuss it and change it.

 In mistral.engine.engine.py we now have:

  def load_engine():
      global _engine
      module_name = cfg.CONF.engine.engine
      module = importutils.import_module(module_name)
      _engine = module.get_engine()

 As an option we could have the code that loads engine in engine launch
 script (once we decouple it from API process) so that when we call
 get_engine() we could pass in all needed configuration parameters like
 transport.

 3) Modify the engine and the executor to use a factory method to get the
 global transport.


 If we made a decision on #2 we won't need it.


 A side note: when we discuss things like that I really miss DI container :)

 Renat Akhmerov
 @ Mirantis Inc.





Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-11 Thread W Chan
I want to propose the following changes to implement the local executor and
removal of the local engine.  As mentioned before, oslo.messaging includes
a fake driver that uses a simple queue.  An example of the use of this
fake driver is demonstrated in test_executor.  The use of the fake driver
requires that both the consumer and publisher of the queue are running in
the same process so that the queue is in scope.  Currently, the launchers
for the api/engine and the executor run in separate processes.

Here're the proposed changes.
1) Rewrite the launch script to be more generic, with options to launch all
components (i.e. API, engine, executor) in the same process but on separate
threads, or to launch each individually.
2) Move the transport to a global variable, similar to the global _engine,
shared by the different components.
3) Modify the engine and the executor to use a factory method to get the
global transport.

This doesn't change how the workflows are being processed.  It just changes
how the services are launched.
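
Change #1 could be sketched roughly as follows (illustrative only; the serve_* functions and the CLI mapping are placeholders, not Mistral code):

```python
import threading


def serve_api(transport):
    pass  # placeholder: would run the pecan WSGI app


def serve_engine(transport):
    pass  # placeholder: would run the engine RPC server


def serve_executor(transport):
    pass  # placeholder: would run the executor RPC server


LAUNCHERS = {
    "api": serve_api,
    "engine": serve_engine,
    "executor": serve_executor,
}


def launch(component_names, transport):
    """Start the selected components, each on its own thread."""
    threads = []
    for name in component_names:
        t = threading.Thread(target=LAUNCHERS[name],
                             args=(transport,), name=name)
        t.start()
        threads.append(t)
    return threads


# A hypothetical CLI option like --server=api,engine,executor would map to:
threads = launch(["api", "engine", "executor"], transport=object())
for t in threads:
    t.join()
```

Running all components in one process with a single shared transport is what lets the fake driver's in-process queue work for the local option.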

Thoughts?
Winson


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-11 Thread Renat Akhmerov

On 12 Mar 2014, at 06:37, W Chan m4d.co...@gmail.com wrote:

 Here're the proposed changes.
 1) Rewrite the launch script to be more generic, with options to launch all
 components (i.e. API, engine, executor) in the same process but on separate
 threads, or to launch each individually.

You mentioned test_executor.py so I think it would make sense first to refactor
the code in there related to acquiring transport and launching executor. My
suggestions are:
- In the test base class (mistral.tests.base.BaseTest) create a new method
  start_local_executor() that would deal with getting a fake driver inside and
  all that stuff. This would be enough for tests where we need to run engine
  and check something. start_local_executor() can be just a part of the
  setUp() method for such tests.
- As for the launch script I have the following thoughts:
  - Long-term, launch scripts should be different for all of API, engine, and
    executor. Now API and engine start within the same process but it’s just
    a temporary solution.
  - The launch script for engine (which is the same as the API’s for now)
    should have an option --use-local-executor to be able to run an executor
    along with engine itself within the same process.

 2) Move the transport to a global variable, similar to the global _engine,
 shared by the different components.

Not sure why we need it. Can you please explain in more detail here? The better 
way would be to initialize engine and executor with transport when we create 
them. If our current structure doesn’t allow this easily we should discuss it 
and change it.

In mistral.engine.engine.py we now have:

 def load_engine():
     global _engine
     module_name = cfg.CONF.engine.engine
     module = importutils.import_module(module_name)
     _engine = module.get_engine()

As an option we could have the code that loads engine in engine launch script 
(once we decouple it from API process) so that when we call get_engine() we 
could pass in all needed configuration parameters like transport.
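
That suggestion might look roughly like the sketch below (the dummy module only keeps the example self-contained; in Mistral the module name would come from cfg.CONF.engine.engine and the loading would use oslo's importutils):

```python
import importlib
import sys
import types

# Register a dummy engine module so the sketch runs standalone.  Its
# get_engine() accepts dependencies explicitly instead of the module
# keeping a global _engine.
dummy = types.ModuleType("dummy_engine")
dummy.get_engine = lambda transport=None: {"transport": transport}
sys.modules["dummy_engine"] = dummy


def load_engine(module_name, transport):
    """Load the engine module and hand it the shared transport."""
    module = importlib.import_module(module_name)
    return module.get_engine(transport=transport)


shared_transport = object()  # stand-in for messaging.get_transport(cfg.CONF)
engine = load_engine("dummy_engine", shared_transport)
assert engine["transport"] is shared_transport
```

With get_engine() taking the transport as a parameter, the launch script becomes the single place where configuration is resolved.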

 3) Modify the engine and the executor to use a factory method to get the
 global transport.

If we made a decision on #2 we won’t need it.


A side note: when we discuss things like that I really miss DI container :)

Renat Akhmerov
@ Mirantis Inc.




Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-25 Thread W Chan
Thanks.  I will do that today and follow up with a description of the
proposal.


On Mon, Feb 24, 2014 at 10:21 PM, Renat Akhmerov rakhme...@mirantis.comwrote:

 In process is fine to me.

 Winson, please register a blueprint for this change and put the link in
 here so that everyone can see what it all means exactly. My feeling is that
 we can approve and get it done pretty soon.

 Renat Akhmerov
 @ Mirantis Inc.



 On 25 Feb 2014, at 12:40, Dmitri Zimine d...@stackstorm.com wrote:

  I agree with Winson's points. Inline.
 
  On Feb 24, 2014, at 8:31 PM, Renat Akhmerov rakhme...@mirantis.com
 wrote:
 
 
  On 25 Feb 2014, at 07:12, W Chan m4d.co...@gmail.com wrote:
 
  As I understand, the local engine runs the task immediately whereas
 the scalable engine sends it over the message queue to one or more
 executors.
 
  Correct.
 
  Note: "local" is confusing here; "in process" will reflect what it
 is doing better.
 
 
  In what circumstances would we see a Mistral user using a local engine
 (other than testing) instead of the scalable engine?
 
  Yes, mostly testing, but it could also be used for demonstration purposes
 or in environments where installing RabbitMQ is not desirable.
 
  If we are keeping the local engine, can we move the abstraction to the
 executor instead, having drivers for a local executor and remote executor?
  The message flow from the engine to the executor would be consistent, it's
 just where the request will be processed.
 
  I think I get the idea and it sounds good to me. We could really have
 executor in both cases but the transport from engine to executor can be
 different. Is that what you're suggesting? And what do you call driver here?
 
  +1 to abstraction to the executor, indeed the local and remote engines
 today differ only by how they invoke executor, e.g. transport / driver.
 
 
  And since we are porting to oslo.messaging, there's already a fake
 driver that allows for an in process Queue for local execution.  The local
 executor can be a derivative of that fake driver for non-testing purposes.
  And if we don't want to use an in process queue here to avoid the
 complexity, we can have the client side module of the executor determine
 whether to dispatch to a local executor vs. RPC call to a remote executor.
 
  Yes, that sounds interesting. Could you please write up some etherpad
 with details explaining your idea?
 
 
 




[openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-24 Thread W Chan
As I understand, the local engine runs the task immediately whereas the
scalable engine sends it over the message queue to one or more executors.

In what circumstances would we see a Mistral user using a local engine
(other than testing) instead of the scalable engine?

If we are keeping the local engine, can we move the abstraction to the
executor instead, having drivers for a local executor and remote executor?
 The message flow from the engine to the executor would be consistent, it's
just where the request will be processed.

And since we are porting to oslo.messaging, there's already a fake driver
that allows for an in process Queue for local execution.  The local
executor can be a derivative of that fake driver for non-testing purposes.
 And if we don't want to use an in process queue here to avoid the
complexity, we can have the client side module of the executor determine
whether to dispatch to a local executor vs. RPC call to a remote executor.
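
The client-side dispatch idea could be sketched as follows (ExecutorClient and both executor classes are illustrative, not real Mistral code; the "remote" path stands in for an oslo.messaging RPC call):

```python
class LocalExecutor:
    """Runs the task directly in the current process."""

    def run_task(self, task):
        return f"ran {task} in process"


class RemoteExecutorStub:
    """Stand-in for an oslo.messaging RPC client to a remote executor."""

    def run_task(self, task):
        return f"sent {task} over RPC"


class ExecutorClient:
    """The engine talks only to this client; the dispatch decision is
    hidden behind it, keeping the engine-to-executor flow consistent."""

    def __init__(self, use_local):
        self._impl = LocalExecutor() if use_local else RemoteExecutorStub()

    def run_task(self, task):
        return self._impl.run_task(task)


local = ExecutorClient(use_local=True)
remote = ExecutorClient(use_local=False)
```

Either way the engine issues the same run_task() call; only where the request is processed changes.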

Thoughts?

Winson


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-24 Thread Renat Akhmerov

On 25 Feb 2014, at 07:12, W Chan m4d.co...@gmail.com wrote:

 As I understand, the local engine runs the task immediately whereas the 
 scalable engine sends it over the message queue to one or more executors.  

Correct.

 In what circumstances would we see a Mistral user using a local engine (other 
 than testing) instead of the scalable engine?

Yes, mostly testing, but it could also be used for demonstration purposes or in 
environments where installing RabbitMQ is not desirable.

 If we are keeping the local engine, can we move the abstraction to the 
 executor instead, having drivers for a local executor and remote executor?  
 The message flow from the engine to the executor would be consistent, it's 
 just where the request will be processed.  

I think I get the idea and it sounds good to me. We could really have executor 
in both cases but the transport from engine to executor can be different. Is 
that what you’re suggesting? And what do you call driver here?

 And since we are porting to oslo.messaging, there's already a fake driver 
 that allows for an in process Queue for local execution.  The local executor 
 can be a derivative of that fake driver for non-testing purposes.  And if we 
 don't want to use an in process queue here to avoid the complexity, we can 
 have the client side module of the executor determine whether to dispatch to 
 a local executor vs. RPC call to a remote executor.

Yes, that sounds interesting. Could you please write up some etherpad with 
details explaining your idea?





Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-24 Thread Dmitri Zimine
I agree with Winson's points. Inline.

On Feb 24, 2014, at 8:31 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

 
 On 25 Feb 2014, at 07:12, W Chan m4d.co...@gmail.com wrote:
 
 As I understand, the local engine runs the task immediately whereas the 
 scalable engine sends it over the message queue to one or more executors.  
 
 Correct.

Note: "local" is confusing here; "in process" will reflect what it is 
doing better. 

 
 In what circumstances would we see a Mistral user using a local engine 
 (other than testing) instead of the scalable engine?
 
 Yes, mostly testing, but it could also be used for demonstration purposes or in 
 environments where installing RabbitMQ is not desirable.
 
 If we are keeping the local engine, can we move the abstraction to the 
 executor instead, having drivers for a local executor and remote executor?  
 The message flow from the engine to the executor would be consistent, it's 
 just where the request will be processed.  
 
 I think I get the idea and it sounds good to me. We could really have 
 executor in both cases but the transport from engine to executor can be 
 different. Is that what you’re suggesting? And what do you call driver here?

+1 to abstraction to the executor, indeed the local and remote engines today 
differ only by how they invoke executor, e.g. transport / driver.

 
 And since we are porting to oslo.messaging, there's already a fake driver 
 that allows for an in process Queue for local execution.  The local executor 
 can be a derivative of that fake driver for non-testing purposes.  And if we 
 don't want to use an in process queue here to avoid the complexity, we can 
 have the client side module of the executor determine whether to dispatch to 
 a local executor vs. RPC call to a remote executor.
 
 Yes, that sounds interesting. Could you please write up some etherpad with 
 details explaining your idea?
 
 
 


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-24 Thread Renat Akhmerov
“In process” is fine to me.

Winson, please register a blueprint for this change and put the link in here so 
that everyone can see what it all means exactly. My feeling is that we can 
approve and get it done pretty soon.

Renat Akhmerov
@ Mirantis Inc.



On 25 Feb 2014, at 12:40, Dmitri Zimine d...@stackstorm.com wrote:

 I agree with Winson's points. Inline.
 
 On Feb 24, 2014, at 8:31 PM, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 
 On 25 Feb 2014, at 07:12, W Chan m4d.co...@gmail.com wrote:
 
 As I understand, the local engine runs the task immediately whereas the 
 scalable engine sends it over the message queue to one or more executors.  
 
 Correct.
 
  Note: "local" is confusing here; "in process" will reflect what it is 
  doing better. 
 
 
 In what circumstances would we see a Mistral user using a local engine 
 (other than testing) instead of the scalable engine?
 
  Yes, mostly testing, but it could also be used for demonstration purposes or 
  in environments where installing RabbitMQ is not desirable.
 
 If we are keeping the local engine, can we move the abstraction to the 
 executor instead, having drivers for a local executor and remote executor?  
 The message flow from the engine to the executor would be consistent, it's 
 just where the request will be processed.  
 
 I think I get the idea and it sounds good to me. We could really have 
 executor in both cases but the transport from engine to executor can be 
 different. Is that what you’re suggesting? And what do you call driver here?
 
 +1 to abstraction to the executor, indeed the local and remote engines 
 today differ only by how they invoke executor, e.g. transport / driver.
 
 
 And since we are porting to oslo.messaging, there's already a fake driver 
 that allows for an in process Queue for local execution.  The local 
 executor can be a derivative of that fake driver for non-testing purposes.  
 And if we don't want to use an in process queue here to avoid the 
 complexity, we can have the client side module of the executor determine 
 whether to dispatch to a local executor vs. RPC call to a remote executor.
 
 Yes, that sounds interesting. Could you please write up some etherpad with 
 details explaining your idea?
 
 
 

