On Thu, Apr 24, 2008 at 3:32 AM, Nick Petrella <[EMAIL PROTECTED]> wrote:
>
>  Just wondering what everyone's thoughts are on the best way to scale
>  the engine across multiple machines/processes to help handle lots of
>  load.
>
>
>  This was mentioned in a previous post, but was kinda drowned out by
>  the rest of the conversation.  What is the best way to delegate
>  workitems to different running engines? Using socket/listener pairs
>  and how would that be implemented? Does there need to be entry engine
>  that is responsible for receiving workitems and delegating them to the
>  various running engines?

Hello Nick,

I hope others will chime in with their experiences and their ideas.

(warning: this is a half-baked answer)

One of the reasons I switched from Java to Ruby is that it's so easy
to try things out in Ruby: I was sure I could build something and then
let others do the last mile, the way they want. My usual example is
SOAP webservices: there is no real generic SoapParticipant or
SoapExpression in OpenWFEru, because it is rather easy to take a Ruby
code sample and wrap it in a participant (with test cases around it).

Now, I'd like OpenWFEru (ruote) to be true to that "it's easy to try"
spirit of Ruby: it should be easy to just gem install openwferu and
run it from a small Ruby script (after that, the screams of "hey, how
do I integrate it into Rails?" can be heard).

A step further for me is: "openwferu is rather cheap to run (in terms
of configuration and resources)". The extreme would be "run one
instance of the OpenWFEru engine for each business process" (and the
Erlang guys would add "and let it crash"). I like this "one for one"
extreme; I think it suits OpenWFEru well because it's open source,
with no CPU-license barrier. Prototypes as well as production
instances should be as cheap as possible (sudo gem install -y
openwferu).

So I haven't really implemented things for scalability and
distributability yet, but I have certainly prepared some things.

--- restful detour ---

These days, I'm preparing "ruote-rest", the sequel to "kisha"; you can
browse it at http://github.com/jmettraux/ruote-rest
Launching a process on ruote-rest amounts to POSTing a workitem to it.
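Such a POST needs nothing but the Ruby standard library. A sketch
(the /processes path, the launchitem XML shape, and the example URLs
are all assumptions of mine, not the actual ruote-rest resource
names):

```ruby
require 'net/http'
require 'uri'

# builds (without sending) an HTTP POST that would hand a launchitem
# to a ruote-rest instance. Path and payload shape are assumptions.
def build_launch_post(engine_url, process_definition_url, fields = {})

  uri = URI.parse(engine_url)
  req = Net::HTTP::Post.new(uri.path)
  req['Content-Type'] = 'application/xml'

  field_xml = fields.map { |k, v|
    "    <field name=\"#{k}\">#{v}</field>"
  }.join("\n")

  req.body = <<XML
<launchitem>
  <definition_url>#{process_definition_url}</definition_url>
  <fields>
#{field_xml}
  </fields>
</launchitem>
XML

  req
end

# actually sending it would then be :
#
#   uri = URI.parse('http://localhost:4567/processes')
#   Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
```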

Ruote-rest is a Ruote + Sinatra pair: a RESTful workflow / BPM engine
(Kisha was using Rails). For now it only 'speaks' XML, but I'll add
JSON and maybe AtomPub later.

Why this restful detour? Because I'm thinking that a bunch of
ruote-rest instances with a proxy in front of them would be an
interesting setup. The proxy need not be a Rufus engine.

You can also imagine cases where one engine POSTs a
workitem/launchitem to another.

A ruote-rest instance is a web resource.

--- / restful detour ---

I can see two scenarios: "I want to run that process on any available
engine" and "this specific process should be run on that specific
engine" (they are not mutually exclusive).


OpenWFE (the Java edition) had the concept of the "participant map",
where engines were registered as well. I haven't brought that concept
back into OpenWFEru (yet). I am thinking more in terms of: why not
have an HttpPostParticipant or a PostExpression for firing workitems
at another engine?

Somehow, some pieces of this cluster/distributed puzzle are missing
simply because you guys are asking for them only now; they weren't
really in demand before.


I need to know the community's needs here. I'll be glad to help, by
explaining and preparing OpenWFEru for your scalability and/or
distributability needs.

Nick, if you want, I'd be glad to help you prepare the socket
listen/dispatch pair for letting one engine launch processes on
another.
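As a starting point, such a pair could be as small as this, with only
the Ruby standard library (a sketch; the one-JSON-document-per-
connection protocol is just an assumption, and the handler block is
where a real listener would call engine.launch):

```ruby
require 'socket'
require 'json'

# listener side: accepts one JSON launchitem per connection and
# hands it to the given block (a real engine would launch a
# process from it)
def listen_for_launchitems(port, &handler)
  server = TCPServer.new('127.0.0.1', port)
  Thread.new do
    loop do
      client = server.accept
      launchitem = JSON.parse(client.read) # read until peer closes
      handler.call(launchitem)
      client.close
    end
  end
  server
end

# dispatcher side: serializes a launchitem and ships it over
def dispatch_launchitem(host, port, launchitem)
  socket = TCPSocket.new(host, port)
  socket.write(JSON.generate(launchitem))
  socket.close
end
```

Passing port 0 lets the OS pick a free port (readable back via
server.addr), which keeps several engines cheap to run side by side
on one box.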


Cheers,

-- 
John Mettraux - http://jmettraux.wordpress.com

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"OpenWFEru users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/openwferu-users?hl=en
-~----------~----~----~----~------~----~------~--~---