On Wed, Aug 24, 2011 at 02:46:11AM -0700, Girardin Yoanne wrote:
>
> Working on a grid architecture spread across 9 geographical sites (and
> counting), we (Lucas and me) would like to know if it's possible to
> make a "self-modifying workflow".

Hello,

a "self-modifying workflow" or a "self-modifying distributed workflow system" ? 
"Workflow" is most often used as a synonym for "business process", but the rest 
of your email seems to indicate you want a master engine and sub-engines, and 
to run processes that potentially spread across all of those engines.


> Let me explain : a server on each site is a frontend to the clusters.
> Each clusters contains a number of "nodes". On the frontend, you have
> a set of software to reserve nodes (i.e computers) and deploy your
> working environment on it.
>
> Quick example :
>
> |------------|
> |frontend|------- cluster1 (50 nodes)
> |------------|\
>                \---- cluster2 (50 nodes)
>
> Since we do not know in advance in how many sites we will launch our
> experiments, nor how many engine we would need to have, we'd like to
> know if it is possible to create engines and register them in the
> master engine dynamically.

Since they are [engine] participants in the master engine, then yes.

Nothing prevents one from registering participants (or removing them) after an 
engine has started (let's say after it has started enacting its first workflow 
instance / business process instance).


> What I've done so far :
>
> <------------------------------------>
> Fetch the number of sites (done with a web API)
> foreach site do
>       master.register_participant( slave-x, ..... )
>       Open a slave.rb template, modify it with sed command and copy it to
> the site frontend
> end
>
> pdef = Ruote.define_process do
>       #As many subprocess as needed, each of one them launched on the
> correct slave engine
> end
> <------------------------------------>
>
> All of that with the redis storage on the master frontend.

As written previously, there should be one Redis storage per "engine". Engines 
can share the same Redis "instance", but you have to make sure they use 
different databases in that Redis (else you end up with a single, big engine).


> We could also think of more slaves being "deployed" on nodes for any
> experiment purpose.
>
> frontend-master ---> frontend-slave-1 ---> node-slave-1
>                        \---> frontend-slave-2 ---> node-slave-1 [...]

In the case of one master engine and multiple slave engines, I'd recommend 
finding a way for the slaves to register themselves in the master engine.

One way to do that would be for each slave engine to launch a process on the 
main engine that contains a "registerp" expression.

  http://ruote.rubyforge.org/exp/registerp.html

That could get hairy.

Or you could devise a small [web] service on the master engine that listens for 
registration requests.

---8<---
require 'yajl'                  # JSON backend
require 'rufus-json/automatic'  # lets Rufus::Json pick it up
require 'sinatra'

# slaves POST their coordinates here to get registered in the master
post '/registration' do

  data = Rufus::Json.decode(request.body.read)

  # register a participant whose workitems are handed over to the
  # slave engine via that slave's own Redis storage
  RuoteEngine.register_participant(
    data['slave_name'],
    'storage_class' => Ruote::Redis::Storage,
    'storage_args' => {
      'host' => data['slave_host'],
      'db' => data['slave_db'],
      'thread_safe' => true
    })

  "ok, thanks"
end
--->8---
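On the slave side, the registration request could be built with the stdlib 
alone. The slave name, host, db index, and the master's address below are all 
made-up placeholders; the field names simply mirror the Sinatra sketch above:

```ruby
require 'json'
require 'net/http'

# hypothetical payload a slave frontend would POST to the master's
# /registration endpoint
payload = {
  'slave_name' => 'slave-3',
  'slave_host' => 'frontend-3.example.org',
  'slave_db' => 3
}

req = Net::HTTP::Post.new('/registration')
req.body = JSON.generate(payload)
req['Content-Type'] = 'application/json'

# the actual send (not run here):
#
#   Net::HTTP.start('master.example.org', 4567) { |http| http.request(req) }
```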

I don't know what your solution is for slaves that go out of service.

What could be fun is to develop a participant that you register once and then, 
behind the scenes, the participant manages the list of slaves. You could 
register that participant under

  register /^slave_/, Yoanne::SlaveParticipant, 'some' => 'args'

and then find a way to inform the engine about the list of slaves...
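A very rough sketch of that idea, in plain Ruby only. A real implementation 
would include Ruote::LocalParticipant and hand workitems over to the matching 
slave engine; here only the slave list bookkeeping and the lookup are shown, 
and every name (Yoanne::SlaveParticipant, add_slave, ...) is hypothetical:

```ruby
module Yoanne

  class SlaveParticipant

    # participant name => connection info, updated as slaves come and go
    @slaves = {}

    class << self

      attr_reader :slaves

      def add_slave(name, info)
        @slaves[name] = info
      end

      def remove_slave(name)
        @slaves.delete(name)
      end
    end

    # picks the slave targeted by a participant name like "slave_2"
    def slave_for(participant_name)
      self.class.slaves.fetch(participant_name)
    end
  end
end
```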


OK, have a nice evening,

--
John Mettraux - http://lambda.io/processi

-- 
you received this message because you are subscribed to the "ruote users" group.
to post : send email to [email protected]
to unsubscribe : send email to [email protected]
more options : http://groups.google.com/group/openwferu-users?hl=en
